is a star is born 2018 a remake
A Star Is Born (2018 film) - Wikipedia A Star Is Born is an upcoming American musical romantic drama film produced and directed by Bradley Cooper, in his directorial debut. Cooper also wrote the screenplay with Will Fetters and Eric Roth. A remake of the 1937 film of the same name, it stars Cooper, Lady Gaga, Andrew Dice Clay, Dave Chappelle, and Sam Elliott, and follows a hard-drinking country musician (Cooper) who discovers and falls in love with a young singer (Gaga). It marks the third remake of the original 1937 film (which featured Janet Gaynor and Fredric March), which was adapted into a 1954 musical (starring Judy Garland and James Mason) and then remade as a 1976 rock musical with Barbra Streisand and Kris Kristofferson. Talks of a third A Star Is Born remake began in 2011 with Clint Eastwood attached to direct and Beyoncé set to star. The film languished in development hell for several years, with various actors, including Leonardo DiCaprio, Will Smith and Tom Cruise, approached to co-star. In March 2016, Cooper signed on to star and make his directorial debut, and Lady Gaga joined the cast in August 2016. Principal photography began at the Coachella music festival in April 2017. A Star Is Born will premiere at the 75th Venice International Film Festival on August 31, 2018, and is scheduled to be released in the United States on October 5, 2018, by Warner Bros. Pictures. In January 2011, it was announced that Clint Eastwood was in talks to direct Beyoncé in a third remake of the 1937 film A Star Is Born; however, the project was delayed due to Beyoncé's pregnancy. In April 2012, writer Will Fetters told Collider that the script was heavily inspired by Kurt Cobain. Talks with Leonardo DiCaprio, Will Smith, and Christian Bale failed to come to fruition, while Tom Cruise remained in talks. On October 9, 2012, Beyoncé left the project, Johnny Depp also turned down the film, and Bradley Cooper was in talks to star. Eastwood was interested in Esperanza Spalding for the female role. On March 24, 2015, Warner Bros. announced that Cooper was in final talks to make his directorial debut with the film, and potentially to star with Beyoncé, who was in talks to rejoin. Cooper would eventually star alongside Lady Gaga in the film. On August 16, 2016, it was reported that Gaga had been officially attached and the studio had green-lit the project to begin production in early 2017. On November 9, 2016, it was reported that Ray Liotta was in talks to join the film as the manager of Lady Gaga's character, a role that later went to Sam Elliott. On March 17, 2017, Elliott was added to the cast, with Andrew Dice Clay entering negotiations to play Lorenzo, the father of Lady Gaga's character. In April 2017, Rafi Gavron, Michael Harney, and Rebecca Field joined the cast of the film. Filming began on April 17, 2017. In May, Dave Chappelle was cast in the film. In April 2018, it was announced that Halsey had been cast in the film. After seeing him perform at the Desert Trip festival, Cooper approached Lukas Nelson (son of country music legend Willie Nelson) and asked him to help work on the film. Nelson agreed and wrote several songs, which he sent to the producers. Nelson subsequently met Lady Gaga and began writing songs with her; she, in turn, provided backing vocals on two tracks of his self-titled 2017 album. The first trailer was released on June 6, 2018. A Star Is Born will have its world premiere at the 75th Venice International Film Festival on August 31, 2018.
It will also screen at the Toronto International Film Festival in September 2018. It is scheduled to be released in the United States on October 5, 2018, by Warner Bros. Pictures, having initially been set for release on September 28, 2018, and then May 18, 2018.
can professional athletes play in the winter olympics
Amateur sports - Wikipedia Amateur sports are sports in which participants engage largely or entirely without remuneration. The distinction is made between amateur sporting participants and professional sporting participants, who are paid for the time they spend competing and training. In the majority of sports which feature professional players, the professionals participate at a higher standard of play than amateur competitors, as they can train full-time without the stress of holding another job. The majority of worldwide sporting participants are amateurs. Sporting amateurism was a zealously guarded ideal in the 19th century, especially among the upper classes, but faced steady erosion throughout the 20th century with the continuing growth of pro sports and the monetisation of amateur and collegiate sports, and is now strictly held as an ideal by fewer and fewer organisations governing sports, even as they maintain the word "amateur" in their titles. Modern organized sports developed in the 19th century, with the United Kingdom and the United States taking the lead. Sporting culture was especially strong in private schools and universities, and the upper- and middle-class men who attended these institutions played as amateurs. Opportunities for the working classes to participate in sport were restricted by their long six-day work weeks and by Sunday Sabbatarianism. In the UK, the Factory Act of 1844 gave working men half a day off, making the opportunity to take part in sport more widely available. Working-class sportsmen found it hard to play top-level sport because of the need to turn up to work. On occasion, cash prizes, particularly in individual competitions, could make up the difference; some competitors also wagered on the outcomes of their matches. As professional teams developed, some clubs were willing to make "broken time" payments to players, i.e., to pay top sportsmen to take time off work, and as attendances increased, paying men to concentrate on their sport full-time became feasible. Proponents of the amateur ideal deplored the influence of money and the effect it has on sports. It was claimed that it is in the interest of the professional to receive the highest amount of pay possible per unit of performance, not to perform to the highest standard possible where this brings no additional benefit. The middle- and upper-class men who dominated the sporting establishment not only had a theoretical preference for amateurism, they also had a self-interest in blocking the professionalization of sport, which threatened to make it feasible for the working classes to compete against them with success. Working-class sportsmen did not see why they should not be paid to play. Hence there were competing interests between those who wished sport to be open to all and those who feared that professionalism would destroy the "Corinthian spirit". This conflict played out over the course of more than one hundred years. Some sports dealt with it relatively easily, such as golf, which decided in the late 19th century to tolerate competition between amateurs and professionals, while others were traumatized by the dilemma and took generations to fully come to terms with professionalism, even to the point of causing a breakdown in the sport (as in the case of rugby union and rugby league in 1895). Corinthian has come to describe one of the most virtuous of amateur athletes: those for whom fairness and honor in competition is valued above victory or gain.
The Corinthian Yacht Club (now the Royal Corinthian Yacht Club, RCYC) was established in Essex in 1872 with "encouragement of Amateur Yacht sailing" as its "primary object". To that end, club rules ensured that crews consisted of amateurs, while "no professional or paid hand is allowed to touch the tiller or in any way assist in steering". Although the RCYC website derives the name Corinthian from the Isthmian Games of ancient Corinth, the Oxford English Dictionary derives the noun Corinthian from "the proverbial wealth, luxury, and licentiousness of ancient Corinth", with senses developing from "a wealthy man" (attested in 1577) through "a licentious man" (1697) and "a man of fashion about town" (1819) to "a wealthy amateur of sport who rides his own horses, steers his own yacht, etc" (1823). Dixon Kemp wrote in A Manual of Yacht and Boat Sailing, published in 1900, "The term Corinthian half a century ago was commonly applied to the aristocratic patrons of sports, some of which, such as pugilism, are not now the fashion." The "Corinthian ideal" of the gentleman amateur developed alongside muscular Christianity in late Victorian Britain, and has been analysed as a historical social phenomenon since the later 20th century. The Corinthian Football Club, founded in 1882, was the paragon of this. In the United States, "Corinthian" came to be applied in particular to amateur yachtsmen, and remains current as such and in the names of many yacht clubs, including the Seawanhaka Corinthian Yacht Club (founded 1874, "Corinthian" added to the name in 1881) and the Yale Corinthian Yacht Club (likewise 1881 and 1893). By the early 21st century the Olympic Games and all the major team sports accepted professional competitors. However, there are still some sports which maintain a distinction between amateur and professional status, with separate competitive leagues. The most prominent of these are golf and boxing. In particular, only amateur boxers could compete at the Olympics up to 2016. Problems can arise for amateur sportsmen when sponsors offer to help with an amateur's playing expenses in the hope of striking lucrative endorsement deals with them should they become professionals at a later date. This practice, dubbed "shamateurism", was present as early as the 19th century. As the financial and political stakes in high-level sport became higher, shamateurism grew all the more widespread, reaching its peak in the 1970s and 1980s, when the International Olympic Committee started moving towards acceptance of professional athletes. The advent of the state-sponsored "full-time amateur athlete" of the Eastern Bloc countries further eroded the ideology of the pure amateur, as it put the self-financed amateurs of the Western countries at a disadvantage. The Soviet Union entered teams of athletes who were all nominally students, soldiers, or working in a profession, but many of whom were in reality paid by the state to train on a full-time basis. Where professionals are permitted, it is hard for amateurs to compete against them. Whether this is a triumph of capitalism or an example of corruption depends on the viewer's perspective. To some an amateur means an incompetent or an also-ran, and to others it means an idealist. Insisting that athletes not be paid can prevent performances that are possible only for an athlete who is free to pursue the sport full-time without other sources of income. All North American university sports are conducted by amateurs.
Even the most commercialized college sports, such as NCAA football and basketball, do not financially compensate competitors, although coaches and trainers generally are paid. College football coaches in Texas and other states are often the highest-paid state employees, with some drawing salaries of over five million US dollars annually. Athletic scholarship programs, unlike academic scholarship programs, cannot cover more than the cost of food, housing, tuition, and other university-related expenses. In short, a school can pay an athlete to attend classes but cannot pay an athlete to play. To ensure that the rules are not circumvented, stringent rules restrict gift-giving during the recruitment process as well as during and even after a collegiate athlete's career; college athletes also cannot endorse products, which some may consider a violation of free speech rights. Some have criticised this system as exploitative; prominent university athletics programs are major commercial endeavors and can easily rake in millions of dollars in profit during a successful season. College athletes spend a great deal of time "working" for the university, and earn nothing from it at the time aside from scholarships sometimes worth tens of thousands of dollars; basketball and football coaches, meanwhile, earn salaries that can compare with those of professional teams' coaches. Supporters of the system say that college athletes can always make use of the education they earn as students if their athletic career does not pan out, and that allowing universities to pay college athletes would rapidly lead to deterioration of the already marginal academic focus of college athletics programs. They also point out that athletic scholarships allow many young men and women who would otherwise be unable to afford to go to college, or would not be accepted, to get a quality education. Also, most sports other than football and men's basketball do not generate significant revenue for any school (such teams are often essentially funded by football, basketball, and donations), so it may not be possible to pay athletes in all sports. Allowing pay in some sports but not others could result in the violation of U.S. laws such as Title IX. Through most of the 20th century the Olympics allowed only amateur athletes to participate, and this amateur code was strictly enforced: Jim Thorpe was stripped of his track-and-field medals for having taken expense money for playing baseball in 1912. Later on, the nations of the Communist bloc entered teams of Olympians who were all nominally students, soldiers, or working in a profession, but many of whom were in reality paid by the state to train on a full-time basis. Near the end of the 1960s, the Canadian Amateur Hockey Association (CAHA) felt its amateur players could no longer be competitive against the Soviet team's full-time athletes and the other constantly improving European teams. It pushed for the ability to use players from professional leagues but met opposition from the International Ice Hockey Federation (IIHF) and the International Olympic Committee (IOC). Avery Brundage, president of the IOC from 1952 to 1972, was opposed to the idea of amateur and professional players competing together. At the IIHF Congress in 1969, the IIHF decided to allow Canada to use nine non-NHL professional hockey players at the 1970 World Championships in Montreal and Winnipeg, Manitoba, Canada.
The decision was reversed in January 1970 after Brundage said that ice hockey's status as an Olympic sport would be in jeopardy if the change were made. In response, Canada withdrew from all international ice hockey competitions, and officials stated that they would not return until "open competition" was instituted. Günther Sabetzki became president of the IIHF in 1975 and helped to resolve the dispute with the CAHA. In 1976, the IIHF agreed to allow "open competition" between all players in the World Championships. However, NHL players were still not allowed to play in the Olympics, because of the unwillingness of the NHL to take a break mid-season and the IOC's amateur-only policy. Before the 1984 Winter Olympics, a dispute arose over what made a player a professional. The IOC had adopted a rule that made any player who had signed an NHL contract but played fewer than ten games in the league eligible. However, the United States Olympic Committee maintained that any player contracted with an NHL team was a professional and therefore not eligible to play. The IOC held an emergency meeting that ruled NHL-contracted players eligible, as long as they had not played in any NHL games. This made five players on Olympic rosters (one Austrian, two Italians and two Canadians) ineligible. Players who had played in other professional leagues, such as the World Hockey Association, were allowed to play. Canadian hockey official Alan Eagleson stated that the rule was applied only to the NHL and that professionally contracted players in European leagues were still considered amateurs. Murray Costello of the CAHA suggested that a Canadian withdrawal was possible. In 1986, the IOC voted to allow all athletes to compete in the Olympic Games starting in 1988, but let the individual sport federations decide whether to allow professionals. After the 1972 retirement of IOC President Avery Brundage, the Olympic amateurism rules were steadily relaxed, amounting only to technicalities and lip service, until being completely abandoned in the 1990s. (In the United States, the Amateur Sports Act of 1978 prohibits national governing bodies from having more stringent standards of amateur status than required by the international governing bodies of the respective sports. The act caused the breakup of the Amateur Athletic Union as a wholesale sports governing body at the Olympic level.) Olympic regulations regarding the amateur status of athletes were eventually abandoned in the 1990s, with the exception of wrestling, where amateur fight rules are used because professional wrestling is largely staged, with pre-determined outcomes. Starting with the 2016 Summer Olympics, professionals were allowed to compete in boxing, though amateur fight rules are still used for the tournament. English first-class cricket distinguished between amateur and professional cricketers until 1963. Teams below Test cricket level in England were normally, except in emergencies such as injuries, captained by amateurs. Notwithstanding this, ways were sometimes found to give high-performing "amateurs", for example W.G. Grace, financial and other compensation such as employment. On English overseas tours, some of which in the 19th century were arranged and led by professional cricketer-promoters such as James Lillywhite, Alfred Shaw and Arthur Shrewsbury, a more pragmatic approach generally prevailed.
In England the division was reflected in, and for a long time reinforced by, the series of Gentlemen v Players matches between amateurs and professionals. Few cricketers changed their status, but there were some notable exceptions, such as Wally Hammond, who became (or was allowed to become) an amateur in 1938 so that he could captain England. Professionals were often expected to address amateurs, at least to their faces, as "Mister" or "Sir", whereas the amateurs often referred to professionals by their surnames. Newspaper reports often prefaced amateurs' names with "Mr" while professionals were referred to by surname, or sometimes surname and initials. At some grounds amateurs and professionals had separate dressing rooms and entered the playing arena through separate gates. After the Second World War the division was increasingly questioned. When Len Hutton was appointed England's cricket captain in 1952, he remained a professional. In 1962 the division was removed, and all players became known simply as "cricketers". In Australia the amateur-professional division was rarely noticed in the years before World Series Cricket, as many top-level players expected to receive something for their efforts on the field: before World War I, profit-sharing of tour proceeds was common. Australian cricketers touring England were considered amateurs and given the title "Mr" in newspaper reports. Before the Partition of India some professionalism developed, but talented cricketers were often employed by wealthy princely or corporate patrons and thus retained a notional amateur status. Women's cricket is, and always has been, almost entirely amateur. Boot money has been a phenomenon in amateur sport for centuries. The term "boot money" became popularised in the 1880s, when it was not unusual for players to find half a crown (corresponding to 12½ pence after decimalisation) in their boots after a game. The Football Association prohibited paying players until 1885, and this is referred to as the "legalisation" of professionalism because it was an amendment of the "Laws of the Game". However, a maximum salary cap of twelve pounds a week for a player with outside employment and fifteen pounds a week for a player with no outside employment lingered until the 1960s, even as transfer fees reached over a hundred thousand pounds; again, "boot money" was seen as a way of topping up pay. Today the most prominent English football clubs that are not professional are semi-professional (paying part-time players more than the old maximum for top professionals; this includes all the major existing women's clubs, in which full professionalism has not yet taken root), and the most prominent truly amateur men's club is probably Queen's Park, the oldest football club in Scotland, founded in 1867 and with a home ground (Hampden Park) which is one of UEFA's five-star stadia. They have also won the Scottish Cup more times than any club outside the Old Firm. Amateur football in both genders is now found mainly in small village and Sunday clubs and the Amateur Football Alliance. Sailing has taken the opposite course. Around the turn of the 20th century, much of sailing was done by professionals paid by the interested idle rich. Today, sailing, especially dinghy sailing, is an example of a sport which is still largely populated by amateurs. For example, in the recent Team Racing Worlds, and certainly the American Team Racing Nationals, most of the sailors competing in the event were amateurs.
While many competitive sailors are employed in businesses related to sailing (primarily sailmaking, naval architecture, boatbuilding and coaching), most are not compensated for their own competitions. In large keelboat racing, such as the Volvo Around the World Race and the America's Cup, this amateur spirit has given way in recent years to large corporate sponsorships and paid crews. Like other Olympic sports, figure skating used to have very strict amateur status rules. Over the years, these rules were relaxed, first to allow competitive skaters to receive token payments for performances in exhibitions (amid persistent rumors that they were receiving more money "under the table"), then to accept money for professional activities such as endorsements, provided that the payments were made to trust funds rather than to the skaters themselves. In 1992, trust funds were abolished, and the International Skating Union voted both to remove most restrictions on amateurism and to allow skaters who had previously lost their amateur status to apply for reinstatement of their eligibility. A number of skaters, including Brian Boitano, Katarina Witt, Jayne Torvill and Christopher Dean, and Ekaterina Gordeeva and Sergei Grinkov, took advantage of the reinstatement rule to compete at the 1994 Winter Olympics. However, when all of these skaters promptly returned to the pro circuit, the ISU decided the reinstatement policy was a failure, and it was discontinued in 1995. Prize money at ISU competitions was introduced in 1995, paid for by the sale of the television rights to those events. In addition to prize money, Olympic-eligible skaters may also earn money through appearance fees at shows and competitions, endorsements, movie and television contracts, coaching, and other "professional" activities, provided that their activities are approved by their national federations. The only activity strictly forbidden by the ISU is participating in unsanctioned "pro" competitions, a prohibition the ISU uses to maintain its monopoly status as the governing body of the sport. Many people in the skating world still use "turning pro" as jargon for retiring from competitive skating, even though most top competitive skaters are already full-time professionals, and many skaters who retire from competition to concentrate on show skating or coaching do not actually lose their competition eligibility in the process. Rugby has provided one of the most visible and lasting examples of the tension between amateurism and professionalism during the development of nationally organised sports in Britain in the late 19th century. The split in rugby in 1895 between what became rugby league and rugby union arose as a direct result of a dispute over the pretence of a strict enforcement of its amateur status: clubs in Leeds and Bradford were fined after compensating players for missing work, whilst at the same time the Rugby Football Union (RFU) was allowing other players to be paid. Rugby football, despite its origins in the privileged English public schools, was a popular game throughout England by around 1880, including in the large working-class areas of the industrial north. However, as the then-amateur sport became increasingly popular and competitive, attracting large paying crowds, teams in such areas found it difficult to attract and retain good players.
This was because physically fit local men needed both to work to earn a wage (limiting the time that they could devote to unpaid sport) and to avoid injuries that might prevent them working in the future. Certain teams faced with these circumstances wanted to pay so-called "broken time" money to their players to compensate them for missing paid work due to their playing commitments, but this contravened the amateur policy of the Rugby Football Union (RFU). Following a lengthy dispute on this point during the early 1890s, representatives of more than 20 prominent northern rugby clubs met in Huddersfield in August 1895 to form the Northern Rugby Football Union (NRFU), a breakaway administrative body which would permit payments to be made to players. The NRFU initially adopted the established RFU rules for the game itself, but soon introduced a number of changes, most obviously a switch from 15 to 13 players per side. It became the Rugby Football League in 1922, by which time the key differences between the two codes were well established, with the 13-a-side variant becoming known as rugby league. The RFU took strong action against the clubs involved in the formation of the NRFU, all of which were deemed to have forfeited their amateur status and therefore to have left the RFU. A similar interpretation was applied to all players who played either for or against such clubs, whether or not they themselves received any compensation. Such players were effectively barred sine die from any involvement in organised rugby union. These comprehensive and enduring sanctions, combined with the very localised nature of most rugby competition, meant that most northern clubs had little practical alternative but to affiliate with the NRFU in its first few years of existence. Rugby football in Britain therefore became subject to a de facto schism along regional, and to some extent class, lines, reflecting the historical origins of the split. Rugby league, in which professionalism was permitted, was predominant in northern England, particularly in industrial areas, and was viewed as a working-class game. Rugby union, which remained amateur, was predominant in the rest of England, as well as in Wales and Scotland. Rugby union also had a more affluent reputation, although there are areas, notably in South Wales and in certain English cities such as Gloucester, with a strong working-class rugby union tradition. Discrimination against rugby league players could verge on the petty: former Welsh international Fred Perrett was once excluded from lists of players who died in the First World War because of his "defection" to the league code. One Member of Parliament, David Hinchliffe, described it as "one of the longest (and daftest) grievances in history", with anyone over the age of 18 associated with rugby league being banned forever from rugby union. The Scottish Rugby Union was a particular bastion of amateurism, and extreme care was taken to avoid the "taint" of professionalism: a player rejoining the national team after the end of the Second World War applied to be issued with a new shirt and was reminded that he had been supplied with one prior to the outbreak of hostilities. In Wales the position was more equivocal, with clubs attempting to stem the tide of players going north by means of boot money, a reference to the practice of putting cash payments into players' footwear while they were cleaning up after a game. Sometimes the payments were substantial.
Barry John was once asked why he hadn't turned professional and responded, "I couldn't afford to." Rugby union was declared "open" in August 1995, almost exactly 100 years after the original split occurred, meaning that professionalism has been permitted in both rugby codes since that date. However, while the professional-amateur divide remained in force, there was originally very limited crossover between the two codes, the most obvious occasions being when top-class rugby union players "switched codes" to rugby league in order to play professionally. Welsh international Jonathan Davies was a high-profile example of this switch. Since professionalism was allowed in rugby union, the switches have started to come the opposite way. Union has swiftly grown to embrace the professional game, with many league players joining union to take a slice of the larger amounts of money available in that sport. Nowadays, while rugby union no longer makes the professional-amateur distinction, the split still exists within rugby league, with the British Amateur Rugby League Association (BARLA) strictly amateur, though it allows some ex-professionals to play provided they are no longer under contract. The most recent club to receive a ban for fielding a contracted professional was Brighouse Rangers, who were expelled from the National Conference League during the 2007-08 season, with the player handed a sine die ban (though in part for gouging); the club itself has since been admitted to the Pennine League. Also, some rugby unions have amateur rules, most notably the Argentine Rugby Union, where all member clubs are amateur. The Campeonato Argentino, the national championship for provincial teams, does not include players contracted to the country's Super Rugby side, the Jaguares. Alternative sports, using the flying disc, began in the mid-sixties. As numbers of young people became alienated from social norms, they resisted and looked for alternative recreational activities, including throwing a Frisbee. What started with a few players in the sixties, like Victor Malafronte, Z Weyand and Ken Westerfield experimenting with new ways of throwing and catching a Frisbee, later became known as playing freestyle. Organized disc sports began in the 1970s with promotional efforts from Wham-O and Irwin Toy (Canada), a few tournaments, and professionals using Frisbee show tours to perform at universities, fairs and sporting events. Disc sports such as freestyle, double disc court, guts, disc ultimate and disc golf became this sport's first events. Two of these, the team sport of disc ultimate and disc golf, are very popular worldwide and are now played semi-professionally. The World Flying Disc Federation, the Professional Disc Golf Association, and the Freestyle Players Association are the official rules and sanctioning organizations for flying disc sports worldwide. Disc ultimate is a team sport played with a flying disc. The object of the game is to score points by passing the disc to members of your own team, on a rectangular field 120 yards (110 m) by 40 yards (37 m), until you have successfully completed a pass to a team member in the opposing team's end zone. There are currently over five million people who play some form of organized ultimate in the US. Ultimate has started to be played semi-professionally in two newly formed leagues, the American Ultimate Disc League (AUDL) and Major League Ultimate (MLU).
The game of guts was invented by the Healy Brothers in the 1950s and developed at the International Frisbee Tournament (IFT) in Marquette, Michigan. The game of ultimate, the most widely played disc game, began in the late 1960s with Joel Silver and Jared Kass. In the 1970s it developed as an organized sport with the creation of the Ultimate Players Association by Dan Roddick, Tom Kennedy and Irv Kalb. Double disc court was invented and introduced in the early 1970s by Jim Palmeri. In 1974, freestyle competition was created and introduced by Ken Westerfield and Discraft's Jim Kenner. In 1976, the game of disc golf was standardized with targets called "pole holes", invented and developed by Wham-O's Ed Headrick. Sports teams commonly exist at the high school level; students who participate, commonly referred to as student athletes, do so during their course of study. Occasionally, sporting success in high school may lead to a professional career in the field. The benefit of sports in high school is debated; some believe that they promote discipline and teamwork, while others find that they can cause injury. One study on the relationship between high school athletic and academic success finds that, for the most part, higher participation and success rates in sports are positively related to school-wide student success on academic outcomes such as standardized test scores and educational attainment. The National Center for Education Statistics reports that student athletes have a 20% higher chance of completing a college degree, and are more likely to be employed and in better health than non-athletes. However, a survey of high school athletes in 2006 showed that high school athletes are more likely to cheat in the classroom than non-athletes, especially boys participating in football, baseball, and basketball and girls participating in softball and basketball. The survey does not indicate to what extent cheating contributes to the greater academic outcomes of high school athletes. In the world of middle school and high school sports, several fees have risen over the last few years, making sports more expensive. The term "pay-to-play" means that students and their parents must pay a flat fee to participate, a fee that often leaves out the costs of uniforms, transportation, and other team fees. This affects low-income families (those who earn less than $60,000 per year) and their ability to participate in sports. The average cost is $381 per child per sport (Pay-to-Play Sports). Physical and mental health can improve with the right amount of physical fitness incorporated into everyday life. It allows a child to have a healthy developing body and a BMI within the normal range. Physical activity has been shown to improve mood and decrease both stress and anxiety. Studies have shown that the more physical activity one participates in as a child, the happier and more stable that person will be as an adult. Thus, the more students who participate in school sports, the more students who will find themselves balanced and successful adults later in life. Works Cited: "Pay-to-play Sports Keeping Lower-income Kids out of the Game." N.p., n.d. Web. 10 Apr. 2016. Golf still has amateur championships, most notably the U.S. Amateur Championship, British Amateur Championship, U.S.
Women's Amateur, British Ladies Amateur, Walker Cup, Eisenhower Trophy, Curtis Cup and Espirito Santo Trophy. However, amateur golfers are far less well known than players on professional golf tours such as the PGA Tour and European Tour. Still, a few amateurs are invited to compete in open events, such as the U.S. Open and British Open, or non-open events, such as the Masters Tournament. In motorsports, there are various forms of amateur drivers. When they compete at professional events, they are often referred to as "pay drivers". They have been a presence in Formula One for many years; drivers such as Felipe Nasr, Esteban Gutiérrez and Rio Haryanto bring sponsorship to the tune of $30 million for a seat, even in backmarker teams. In sports car racing, drivers are often seeded into certain categories, including Amateur and Pro-Am classes. The vast majority of these "gentleman drivers", however, tend to participate at club level, often racing historic or classic cars, which are aimed primarily at amateurs. In Ireland, the Gaelic Athletic Association, or GAA, protects the amateur status of the country's national sports, including Gaelic football, hurling and camogie. Major tennis championships prohibited professionals until 1968, but the subsequent admission of professionals virtually eliminated amateurs from public visibility. Paying players was considered disreputable in baseball until 1869.
what is the meaning of a red herring
Red herring - Wikipedia A red herring is something that misleads or distracts from a relevant or important issue. It may be either a logical fallacy or a literary device that leads readers or audiences towards a false conclusion. A red herring might be used intentionally, as in mystery fiction or as part of a rhetorical strategy (e.g. in politics), or it could be used inadvertently during argumentation. The origin of the expression is unknown. Conventional wisdom has long supposed it to derive from the use of a kipper (a strong-smelling smoked fish) to train hounds to follow a scent, or to divert them from the correct route when hunting; however, modern linguistic research suggests that the term was probably invented in 1807 by the English polemicist William Cobbett, referring to one occasion on which he had supposedly used a kipper to divert hounds from chasing a hare, and was never an actual practice of hunters. The phrase was later borrowed to provide a formal name for the logical fallacy and literary device. As an informal fallacy, the red herring falls into a broad class of relevance fallacies. Unlike the straw man, which is premised on a distortion of the other party's position, the red herring is a seemingly plausible, though ultimately irrelevant, diversionary tactic. According to the Oxford English Dictionary, a red herring may be intentional or unintentional; it does not necessarily imply a conscious intent to mislead. The expression is mainly used to assert that an argument is not relevant to the issue being discussed. For example, "I think we should make the academic requirements stricter for students. I recommend you support this because we are in a budget crisis, and we do not want our salaries affected." The second sentence, though used to support the first, does not address that topic. In fiction and non-fiction a red herring may be intentionally used by the writer to plant a false clue that leads readers or audiences towards a false conclusion. For example, the character of Bishop Aringarosa in Dan Brown's The Da Vinci Code is presented for most of the novel as if he were at the centre of the church's conspiracies, but is later revealed to have been innocently duped by the true antagonist of the story. The character's name is a loose Italian translation of "red herring" (aringa rosa; rosa actually means pink, and is very close to rossa, red). A red herring is often used in legal studies and exam problems to mislead and distract students from reaching a correct conclusion about a legal issue, allegedly as a device that tests students' comprehension of underlying law and their ability to properly discern material factual circumstances. In a literal sense, there is no such fish as a "red herring"; the term refers to a particularly strong kipper, a fish (typically a herring) that has been strongly cured in brine and/or heavily smoked. This process makes the fish particularly pungent-smelling and, with strong enough brine, turns its flesh reddish. In its literal sense as a strongly cured kipper, the term can be dated to the mid-13th century, in the poem The Treatise by Walter of Bibbesworth: "He eteþ no ffyssh But heryng red." Prior to 2008, the figurative sense of "red herring" was thought to originate from a supposed technique for training young scent hounds. There are variations of the story, but according to one version, the pungent red herring would be dragged along a trail until a puppy learned to follow the scent.
Later, when the dog was being trained to follow the faint odour of a fox or a badger, the trainer would drag a red herring (whose strong scent confuses the animal) perpendicular to the animal's trail to confuse the dog. The dog eventually learned to follow the original scent rather than the stronger scent. A variation of this story is given, without mention of its use in training, in The Macmillan Book of Proverbs, Maxims, and Famous Phrases (1976), with the earliest use cited being from W.F. Butler's Life of Napier, published in 1849. Brewer's Dictionary of Phrase and Fable (1981) gives the full phrase as "drawing a red herring across the path", an idiom meaning "to divert attention from the main question by some side issue"; here, once again, a "dried, smoked and salted" herring, when "drawn across a fox's path, destroys the scent and sets the hounds at fault". Another variation of the dog story is given by Robert Hendrickson (1994), who says escaping convicts used the pungent fish to throw off hounds in pursuit. According to a pair of articles by Professor Gerald Cohen and Robert Scott Ross published in Comments on Etymology (2008), supported by etymologist Michael Quinion and accepted by the Oxford English Dictionary, the idiom did not originate from a hunting practice. Ross researched the origin of the story and found that the earliest reference to using herrings for training animals was in a tract on horsemanship published in 1697 by Gerland Langbaine. Langbaine recommended a method of training horses (not hounds) by dragging the carcass of a cat or fox so that the horse would be accustomed to following the chaos of a hunting party. He said that if a dead animal was not available, a red herring would do as a substitute. This recommendation was misunderstood by Nicholas Cox, who, in the notes of another book published around the same time, said it should be used to train hounds (not horses). Either way, the herring was not used to distract the hounds or horses from a trail, but rather to guide them along it. The earliest reference to using herring for distracting hounds is an article published on 14 February 1807 by the radical journalist William Cobbett in his polemical periodical Political Register. According to Cohen and Ross, and accepted by the OED, this is the origin of the figurative meaning of red herring. In the piece, William Cobbett critiques the English press, which had mistakenly reported Napoleon's defeat. Cobbett recounted that he had once used a red herring to deflect hounds in pursuit of a hare, adding "It was a mere transitory effect of the political red-herring; for, on the Saturday, the scent became as cold as a stone." Quinion concludes: "This story, and (Cobbett's) extended repetition of it in 1833, was enough to get the figurative sense of red herring into the minds of his readers, unfortunately also with the false idea that it came from some real practice of huntsmen." Although Cobbett popularized the figurative usage, he was not the first to consider red herring for scenting hounds in a literal sense; an earlier reference occurs in the pamphlet Nashe's Lenten Stuffe, published in 1599 by the Elizabethan writer Thomas Nashe, in which he says "Next, to draw on hounds to a scent, to a red herring skin there is nothing comparable." The Oxford English Dictionary makes no connection between Nashe's quote and the figurative meaning of red herring as a distraction from the intended target, only the literal sense of a hunting practice to draw dogs towards a scent.
The use of herring to distract pursuing scent hounds was tested on Episode 148 of the series MythBusters. Although the hound used in the test stopped to eat the fish and temporarily lost the fugitive's scent, it eventually backtracked and located the target, resulting in the myth being classified by the show as "Busted".
where were you when i needed you stevie wonder
Superwoman (Where Were You When I Needed You) - Wikipedia "Superwoman (Where Were You When I Needed You)" is a 1972 soul track by Stevie Wonder. It was the second track on Wonder's Music of My Mind album, and was also released as the album's first single. In essence a two-part song, it gains coherence from telling a single story of the singer's relationship with "Mary". The first part covers her desire to leave behind her old life and become a movie star. The second part covers the narrator's wondering why she hadn't come back as soon as he had hoped. The second part of the song is also a reworking of "Never Dreamed You'd Leave in Summer" from the 1971 album Where I'm Coming From. The song, both in its sound and its length, was a change of pace for Wonder, who was trying to establish his own identity outside of the Motown sound. Besides its floaty ambience, it featured the singer as a virtual one-man band. The song reached a peak of #33 on the Billboard pop charts. "Superwoman" chronicles the relationship Stevie had with his first wife, Syreeta Wright, a Motown singer and composer who entered the company as a secretary. The lyric "trying to boss the bull around" references the woman trying to control Stevie, who is a Taurus.
who sings can't get enough of you
Can't Get Enough of You Baby - Wikipedia "Can't Get Enough of You Baby" is a song written by Denny Randell and Sandy Linzer and recorded by various artists, first by The 4 Seasons featuring the "Sound" of Frankie Valli in January 1966. The protopunk band ? and the Mysterians recorded it in 1967 for their second album, Action. Their version reached No. 56 on the Billboard Hot 100 when it was released as a single. It was covered by Smash Mouth for the soundtrack to the 1998 film Can't Hardly Wait, and was also included as the lead single on their 1999 album Astro Lounge.
who sang don't leave me this way in 1986
Don't Leave Me This Way - Wikipedia "Don't Leave Me This Way" is a song written by Kenneth Gamble, Leon Huff and Cary Gilbert. First charting as a hit for Harold Melvin & the Blue Notes featuring Teddy Pendergrass, an act on Gamble & Huff's Philadelphia International label, in 1975, "Don't Leave Me This Way" was later a huge disco hit for Motown artist Thelma Houston in 1977. The song was also a major hit for the British group the Communards in 1986. The Blue Notes' original version of the song, featuring Teddy Pendergrass's lead vocal, was included as an album track on the group's successful album Wake Up Everybody, released in November 1975. Though not issued as a single in the United States at the time, the Blue Notes' recording reached number 3 on the US Billboard Disco Chart in the wake of Thelma Houston's version. The song proved to be the group's biggest hit in the UK, reaching number 5 on the UK singles chart when released there as a single in 1977. It became the title track of a budget LP issued on the CBS Embassy label in the UK in 1978. The track was finally issued as a 12-inch single in the US in 1979, coupled with "Bad Luck". "Don't Leave Me This Way" was covered by Motown in 1976. Originally assigned to Diana Ross and intended to be the follow-up to her hit "Love Hangover", it was reassigned to the up-and-coming Motown artist Thelma Houston instead. Following the release of her third album, Any Way You Like It, a Boston record pool unanimously reported positive audience response to "Don't Leave Me This Way" in discos, and the song was selected for release as a single. Houston's version became a massive international hit, topping the soul singles chart and, nine weeks later, the Billboard Hot 100 for one week in April 1977. The song peaked at number 13 in the UK. It also went to number one on the disco chart. Later in the year, it was featured on the soundtrack of the movie Looking for Mr. Goodbar. In 1978, "Don't Leave Me This Way" won the award for Best R&B Vocal Performance, Female at the 20th Annual Grammy Awards. Houston's version was revived in 1995 in several remixes, which reached number 19 on the US Billboard Dance Chart and number 35 in the UK. This version got Houston ranked number 86 on VH1's "100 Greatest One-hit Wonders", as well as the number 2 spot on its "100 Greatest Dance Songs" list. The 1994/1995 remixes are: R&B vs (4:00); Remix radio vs (4:00); 7" radio edit (4:00); Club remix vertigo (5:40); House club remix (5:40); Factory team remix (5:50); U.S. club edit (5:50); Serious rope club remix (7:10); Serious rope 7" remix (4:10); Jazz voice's classic club trax (6:10); Jazz voice's dub mix (7:35); Xs'2 house pump mix (7:30); Joe T. Vanelli dubby mix (8:40); Joe T. Vanelli light mix (5:20); Joe T. Vanelli Radio Cut (3:54); Joe T. Vanelli Extra Dubby (5:17); Junior sound factory mix (9:30); Tribe dub (acid vocal) (7:20); Junior's factory dub (9:30); Junior gospel dub (7:55); Junior's Tribe Prank Mix and Radio Edit (3:20). Throughout the 1980s and 1990s, Houston's version of the song became an unofficial theme song for the AIDS epidemic in gay male communities of the West. American artist Nayland Blake created a work for the American Foundation for AIDS Research about the epidemic that referenced the song and its significance in the community. An art exhibition at the National Gallery of Australia entitled "Don't Leave Me This Way: Art in the Age of AIDS" opened in 1994, containing various works about the epidemic.
A 246-page publication on the exhibition also followed. The song was covered by the Communards in a Hi-NRG version. This recording topped the UK charts for four weeks in September 1986, becoming the biggest-selling record of the year in the process. The featured guest vocalist was the female jazz singer Sarah Jane Morris. The song became a Top 40 hit on the US Billboard Hot 100 and topped the Billboard Dance chart. In 2015 the song was voted by the British public as the nation's 16th favourite 1980s number one in a poll for ITV. Several remixes were issued, notably the "Gotham City Mix", which was split across two sides of a 12" single and ran for a total of 22 minutes 55 seconds. It was one of the songs played at Burberry's February 2018 show, presented on 17 February at the Dimco Buildings in West London, marking Christopher Bailey's final outing for the brand. The album liner notes dedicate the song to the GLC. Jeanie Tracy released a cover version in 1985 on Megatone Records. A version of the song is featured in the stage musical Priscilla Queen of the Desert: The Musical during a funeral scene. Episode 6 of the 2004 BBC miniseries Blackpool featured the Communards version, accompanied on screen by the singing and dancing of the characters, as part of the story. Cher covered the song during her Las Vegas residency show Cher. The 2012 song "Lying Together" by French Kiwi Juice samples vocals from Houston's cover. The song appeared in the 2015 film The Martian, directed by Ridley Scott and starring Matt Damon. Bakermat covered the song in 2017 with the single "Baby".
when was hong kong given back to china
Transfer of sovereignty over Hong Kong - Wikipedia The transfer of sovereignty over Hong Kong from the United Kingdom to China, referred to as "the Handover" internationally or "the Return" in China, took place on 1 July 1997. The landmark event marked the end of British administration in Hong Kong and is often regarded as marking the end of the British Empire. Hong Kong's territory was acquired under three separate treaties: the Treaty of Nanking in 1842, the Convention of Peking in 1860, and the Convention for the Extension of Hong Kong Territory in 1898, which gave the UK control of Hong Kong Island, Kowloon (the area south of Boundary Street), and the New Territories (the area north of Boundary Street and south of the Sham Chun River, together with outlying islands), respectively. Although Hong Kong Island and Kowloon had been ceded to the United Kingdom in perpetuity, the New Territories were held under a 99-year lease. The finite nature of the 99-year lease did not hinder Hong Kong's development, as the New Territories were administered as part of Hong Kong. However, by 1997 it was impractical to separate the three territories and return only the New Territories. In addition, given the scarcity of land and natural resources on Hong Kong Island and in Kowloon, the New Territories were being developed with large-scale infrastructure and other projects whose break-even dates lay well past 30 June 1997. Thus, the status of the New Territories after the expiry of the 99-year lease became important for Hong Kong's economic development. When the People's Republic of China obtained its seat in the United Nations as a result of UN General Assembly Resolution 2758 in 1971, it began to act diplomatically on the sovereignty issues of Hong Kong and Macau. In March 1972, the Chinese UN representative, Huang Hua, wrote to the United Nations Decolonization Committee to state the position of the Chinese government. The same year, on 8 November, the United Nations General Assembly passed the resolution removing Hong Kong and Macau from the official list of colonies. In March 1979, the Governor of Hong Kong, Murray MacLehose, paid his first official visit to the People's Republic of China (PRC), taking the initiative to raise the question of Hong Kong's sovereignty with Deng Xiaoping. Unless the official position of the PRC government was clarified and established, arranging real-estate leases and loan agreements in Hong Kong over the remaining 18 years of the lease would become difficult. In response to concerns over land leases in the New Territories, MacLehose proposed that British administration of the whole of Hong Kong, as opposed to sovereignty, be allowed to continue after 1997. He also proposed that contracts include the phrase "for so long as the Crown administers the territory". In fact, as early as the mid-1970s, Hong Kong had faced additional risks in raising loans for large-scale infrastructure projects such as its Mass Transit Railway (MTR) system and a new airport. Caught unprepared, Deng asserted the necessity of Hong Kong's return to China, upon which Hong Kong would be given special status by the PRC government. MacLehose's visit to the PRC raised the curtain on the issue of Hong Kong's sovereignty: Britain was made aware of the PRC's aspiration to resume sovereignty over Hong Kong and began to make arrangements accordingly to safeguard its interests in the territory, as well as initiating a withdrawal plan in case of emergency.
Three years later, Deng received the former British Prime Minister Edward Heath, who had been dispatched as the special envoy of Prime Minister Margaret Thatcher to establish an understanding of the PRC's view on the question of Hong Kong; during their meeting, Deng outlined his plans to make the territory a special economic zone which would retain its capitalist system under Chinese sovereignty. In the same year, Edward Youde, who succeeded MacLehose as the 26th Governor of Hong Kong, led a delegation of five Executive Councillors to London, including Chung Sze-yuen, Lydia Dunn, and Roger Lobo. Chung presented their position on the sovereignty of Hong Kong to Thatcher, encouraging her to take into consideration the interests of the native Hong Kong population in her upcoming visit to China. In light of the increasing openness of the PRC government and economic reforms on the mainland, the British Prime Minister Margaret Thatcher sought the PRC's agreement to a continued British presence in the territory. However, the PRC took a contrary position: not only did the PRC wish for the New Territories, on lease until 1997, to be placed under the PRC's jurisdiction, it also refused to recognize the "unfair and unequal treaties" under which Hong Kong Island and Kowloon had been ceded to Britain in perpetuity. Consequently, the PRC recognized only the British administration in Hong Kong, not British sovereignty. In the wake of Governor MacLehose's visit, Britain and the PRC established initial diplomatic contact for further discussions of the Hong Kong question, paving the way for Thatcher's first visit to the PRC in September 1982. Margaret Thatcher, in discussion with Deng Xiaoping, reiterated the validity of an extension of the lease on Hong Kong territory, particularly in light of binding treaties: the Treaty of Nanking in 1842, the Convention of Peking in 1860, and the Convention for the Extension of Hong Kong Territory signed in 1898. In response, Deng Xiaoping cited clearly the lack of room for compromise on the question of sovereignty over Hong Kong; the PRC, as the successor of the Qing Dynasty and the Republic of China on the mainland, would recover the entirety of the New Territories, Kowloon and Hong Kong Island. China considered the treaties on Hong Kong unequal and ultimately refused to accept any outcome that would imply a permanent loss of sovereignty over Hong Kong's area, whatever wording the former treaties had. During the talks with Thatcher, China planned to invade and seize Hong Kong if the negotiations set off unrest in the colony. Thatcher later said that Deng had told her bluntly that China could easily take Hong Kong by force, stating that "I could walk in and take the whole lot this afternoon", to which she replied that "there is nothing I could do to stop you, but the eyes of the world would now know what China is like". After her visit with Deng in Beijing, Thatcher was received in Hong Kong as the first British Prime Minister to set foot on the territory while in office. At a press conference, Thatcher re-emphasised the validity of the three treaties, asserting the need for countries to respect treaties on universal terms: "There are three treaties in existence; we stick by our treaties unless we decide on something else. At the moment, we stick by our treaties."
At the same time, at the 5th session of the 5th National People's Congress, the constitution was amended to include a new Article 31, which stated that the country might establish Special Administrative Regions (SARs) when necessary. The new Article would hold tremendous significance in settling the question of Hong Kong, and later Macau, putting into social consciousness the concept of "one country, two systems", which would prove central to the arrangements for both territories. A few months after Thatcher's visit to Beijing, the PRC government had yet to open negotiations with the British government regarding the sovereignty of Hong Kong. Shortly before the initiation of sovereignty talks, Governor Youde declared his intention to represent the population of Hong Kong at the negotiations. This statement sparked a strong response from the PRC, prompting Deng Xiaoping to denounce talk of "the so-called 'three-legged stool'", which implied that Hong Kong was a party to talks on its future alongside Beijing and London. At the preliminary stage of the talks, the British government proposed an exchange of sovereignty for administration and the implementation of a British administration post-handover. The PRC government refused, contending that the notions of sovereignty and administration were inseparable, and that although it recognised Macau as a "Chinese territory under Portuguese administration", this was only temporary. In fact, during informal exchanges between 1979 and 1981, the PRC had proposed a "Macau solution" for Hong Kong, under which it would remain under British administration at China's discretion. However, this had previously been rejected following the 1967 leftist riots, with the then Governor, David Trench, claiming the leftists' aim was to leave the UK without effective control, or "to Macau us". The conflict that arose at this point stalled the negotiations. During his reception of former British Prime Minister Edward Heath, on Heath's sixth visit to the PRC, Deng Xiaoping commented quite clearly on the impossibility of exchanging sovereignty for administration, declaring an ultimatum: the British government must modify or give up its position, or the PRC would announce its resolution of the issue of Hong Kong's sovereignty unilaterally. In 1983, Typhoon Ellen ravaged Hong Kong, causing severe damage to both life and property. The Hong Kong dollar plummeted on Black Saturday, and the Financial Secretary, John Bremridge, publicly associated the economic uncertainty with the instability of the political climate. In response, the PRC government condemned Britain through the press for "playing the economic card" in order to achieve its ends: to intimidate the PRC into conceding to British demands. Governor Youde and nine members of the Hong Kong Executive Council travelled to London to discuss with Prime Minister Thatcher the crisis of confidence, the problem with morale among the people of Hong Kong arising from the breakdown of the Sino-British talks. The session concluded with Thatcher writing a letter addressed to the PRC Premier, Zhao Ziyang. In the letter, she expressed Britain's willingness to explore arrangements optimising the future prospects of Hong Kong while using the PRC's proposals as a foundation.
Furthermore, and perhaps most significantly, she expressed Britain's concession on its position of a continued British presence in the form of an administration post-handover. Two further rounds of negotiations were held in October and November. At the sixth round of talks, in November, Britain formally conceded its intention of either maintaining a British administration in Hong Kong or seeking some form of co-administration with the PRC, and showed its willingness to discuss the PRC's proposals on the 1997 issue; the obstacles to agreement were thus cleared. Simon Keswick, chairman of Jardine Matheson & Co., said the company was not pulling out of Hong Kong, but that a new holding company would be established in Bermuda instead. The PRC took this as yet another plot by the British. The Hong Kong government explained that it had been informed about the move only a few days before the announcement, and that it would not and could not stop the company from making a business decision. Just as the atmosphere of the talks was becoming cordial, members of the Legislative Council of Hong Kong grew impatient at the long-running secrecy over the progress of the Sino-British talks on the Hong Kong issue. A motion tabled by legislator Roger Lobo, declaring "This Council deems it essential that any proposals for the future of Hong Kong should be debated in this Council before agreement is reached", was passed unanimously. The PRC attacked the motion furiously, referring to it as "somebody's attempt to play the three-legged stool trick again". At length, the PRC and Britain initialled the Joint Declaration on the question of Hong Kong's future in Beijing. Zhou Nan, the then PRC Deputy Foreign Minister and leader of the negotiation team, and Sir Richard Evans, British Ambassador to Beijing and leader of the British team, signed on behalf of their respective governments. The Sino-British Joint Declaration was signed by the Prime Ministers of the People's Republic of China and the United Kingdom on 19 December 1984 in Beijing. The Declaration entered into force with the exchange of instruments of ratification on 27 May 1985, and was registered by the People's Republic of China and United Kingdom governments at the United Nations on 12 June 1985. In the Joint Declaration, the government of the People's Republic of China stated that it had decided to resume the exercise of sovereignty over Hong Kong (including Hong Kong Island, Kowloon, and the New Territories) with effect from 1 July 1997, and the United Kingdom government declared that it would restore Hong Kong to the PRC with effect from 1 July 1997. In the document, the PRC government also declared its basic policies regarding Hong Kong. In accordance with the "one country, two systems" principle agreed between the United Kingdom and the People's Republic of China, the socialist system of the People's Republic of China would not be practised in the Hong Kong Special Administrative Region (HKSAR), and Hong Kong's previous capitalist system and its way of life would remain unchanged for a period of 50 years. This would leave Hong Kong unchanged until 2047. The Joint Declaration provided that these basic policies be stipulated in the Hong Kong Basic Law. The signing ceremony for the Sino-British Joint Declaration took place at 18:00 on 19 December 1984 at the Western Main Chamber of the Great Hall of the People. The Hong Kong and Macao Affairs Office at first proposed a list of 60 to 80 Hong Kong people to attend the ceremony.
The number was finally extended to 101. The list included Hong Kong government officials, members of the Legislative and Executive Councils, the chairmen of the Hongkong and Shanghai Banking Corporation and Standard Chartered Bank, prominent businessmen such as Li Ka-shing, Pao Yue-kong and Fok Ying-tung, and also Martin Lee Chu-ming and Szeto Wah. The Basic Law was drafted by a Drafting Committee composed of members from both Hong Kong and mainland China. A Basic Law Consultative Committee, formed purely of Hong Kong people, was established in 1985 to canvass views in Hong Kong on the drafts. The first draft was published in April 1988, followed by a five-month public consultation exercise. The second draft was published in February 1989, and the subsequent consultation period ended in October 1989. The Basic Law was formally promulgated on 4 April 1990 by the NPC, together with the designs for the flag and emblem of the HKSAR. Some members of the Basic Law Drafting Committee were ousted by Beijing following the 4 June 1989 Tiananmen Square protests, after voicing views in support of the students. The Basic Law was said to be a mini-constitution drafted with the participation of Hong Kong people. The political system had been the most controversial issue in the drafting of the Basic Law. The special issue sub-group adopted the political model put forward by Louis Cha. This "mainstream" proposal was criticised for being too conservative. Under Articles 158 and 159 of the Basic Law, the powers of interpretation and amendment of the Basic Law are vested in the Standing Committee of the National People's Congress and the National People's Congress, respectively; Hong Kong's people have limited influence. After the Tiananmen Square protests of 1989, the Executive Councillors and the Legislative Councillors unexpectedly held an urgent meeting, at which they agreed unanimously that the British government should give the people of Hong Kong the right of abode in the United Kingdom. More than 10,000 Hong Kong residents rushed to Central to get an application form for residency in the United Kingdom. On the eve of the deadline, over 100,000 lined up overnight for a BN(O) application form. While mass migration had begun well before 1989, the event led to the peak migration year of 1992, with 66,000 leaving. Many citizens were pessimistic about the future of Hong Kong and the transfer of the region's sovereignty, and a tide of emigration, which was to last no less than five years, broke out. At its peak, citizenship of small countries such as Tonga was also in great demand. Singapore, which also had a predominantly Chinese population, was another popular destination, with the country's Commission (now Consulate-General) besieged by anxious Hong Kong residents. By September 1989, 6,000 applications for residency in Singapore had been approved by the Commission. Some consular staff were suspended or arrested for corruption in the granting of immigration visas. In April 1997, the acting immigration officer at the US Consulate-General, James DeBates, was suspended after his wife was arrested for smuggling Chinese migrants into the United States. The previous year, his predecessor, Jerry Stuchiner, had been arrested for smuggling forged Honduran passports into the territory, and was later sentenced to 40 months in prison.
Canada (Vancouver and Toronto), the United Kingdom (London, Glasgow, and Manchester), Australia (Sydney and Melbourne), and the United States (San Francisco and New York) were, by and large, the most popular destinations. The United Kingdom devised the British Nationality Selection Scheme, granting 50,000 families British citizenship under the British Nationality Act (Hong Kong) 1990. Vancouver was among the most popular destinations, earning the nickname "Hongcouver", while Richmond, a suburb of Vancouver, was nicknamed "Little Hong Kong". Other popular settlements formed in Auckland, New Zealand, and Dublin, Ireland. All in all, from the conclusion of the negotiations in 1984 until 1997, nearly 1 million people emigrated; as a consequence, Hong Kong suffered a serious loss of capital. Chris Patten became the last Governor of Hong Kong, which was regarded as a turning point in Hong Kong's history. Unlike his predecessors, Patten was not a diplomat but a career politician and former Member of Parliament. He introduced democratic reforms that pushed PRC-British relations to a standstill and affected the negotiations for a smooth handover. Patten introduced a package of electoral reforms in the Legislative Council. These reforms proposed to enlarge the electorate, making voting in the Legislative Council more democratic. This represented a significant change, because Hong Kong citizens would have the power to make decisions regarding their future. The handover ceremony was held at the new wing of the Hong Kong Convention and Exhibition Centre in Wan Chai on the night of 30 June 1997. The principal British guest was Prince Charles, who read a farewell speech on behalf of the Queen. The newly elected Prime Minister, Tony Blair, the Foreign Secretary, Robin Cook, the departing Governor, Chris Patten, and General Sir Charles Guthrie, Chief of the Defence Staff, also attended. Representing the People's Republic of China were the President, Jiang Zemin, the Premier, Li Peng, and the first Chief Executive, Tung Chee-hwa. The event was broadcast around the world. After the Tiananmen Square protests of 1989, the Hong Kong government proposed a grand "Rose Garden Project" to restore faith and solidarity among the residents. As the construction of the new Hong Kong International Airport would extend well beyond the handover, Governor Wilson met PRC Premier Li Peng in Beijing to ease the mind of the PRC government. The communist press nonetheless published stories that the project was an evil plan to bleed Hong Kong dry before the handover, leaving the territory in serious debt. After three years of negotiations, Britain and the PRC finally reached an agreement over the construction of the new airport and signed a Memorandum of Understanding. By removing hills and reclaiming land, the new airport was constructed in only a few years. The Walled City was originally a single fort built in the mid-19th century on the site of an earlier 17th-century watch post on the Kowloon Peninsula of Hong Kong. After the ceding of Hong Kong Island to Britain in 1842 (Treaty of Nanking), the Manchu Qing dynasty authorities of China felt it necessary to establish a military and administrative post to rule the area and to check further British influence there. The 1898 Convention, which handed additional parts of Hong Kong (the New Territories) to Britain for 99 years, excluded the Walled City, which then had a population of roughly 700.
It stated that China could continue to keep troops there, so long as they did not interfere with Britain's temporary rule. Britain quickly went back on this unofficial part of the agreement, attacking Kowloon Walled City in 1899, only to find it deserted. The British did nothing further with it, or with the outpost, leaving the question of Kowloon Walled City's ownership unresolved. The outpost consisted of a yamen, as well as buildings which grew into low-lying, densely packed neighbourhoods from the 1890s to the 1940s. The enclave remained part of Chinese territory despite the turbulent events of the early 20th century that saw the fall of the Qing government and the establishment of the Republic of China and, later, a Communist Chinese government (the PRC). Squatters began to occupy the Walled City, resisting several attempts by Britain in 1948 to drive them out. The Walled City became a haven for criminals and drug addicts, as the Hong Kong Police had no right to enter it and China refused to administer it. The 1949 founding of the People's Republic of China added thousands of refugees to the population, many from Guangdong; by this time, Britain had had enough, and simply adopted a "hands-off" policy. A murder that occurred in Kowloon Walled City in 1959 set off a small diplomatic crisis, as the two nations each tried to get the other to accept responsibility for a vast tract of land now virtually ruled by anti-Manchurian Triads. After the Joint Declaration in 1984, the PRC allowed British authorities to demolish the City and resettle its inhabitants. The mutual decision to tear down the Walled City was made in 1987. The government spent up to HK$3 billion to resettle the residents and shops. Some residents were not satisfied with the compensation, and some obstructed the demolition in every way possible. Ultimately, everything was settled, and the Walled City became a park. Rennie's Mill got its name from a Canadian businessman named Alfred Herbert Rennie, who established a flour mill at Junk Bay. The business failed, and Rennie hanged himself there in 1908. The incident gave rise to the Chinese name for the site, Tiu Keng Leng (吊頸嶺), meaning "Hanging (neck) Ridge". The name was later formally changed to the similar-sounding Tiu King Leng (調景嶺) because the original was regarded as inauspicious. In the 1950s the (British) Government of Hong Kong settled a considerable number of refugees from China, former Nationalist soldiers and other Kuomintang supporters, at Rennie's Mill following the Chinese Civil War. For many years the area was a Kuomintang enclave known as "Little Taiwan", with the flag of the Republic of China flying, its own school system, and practically off-limits to the Royal Hong Kong Police Force. In 1996 the Hong Kong government finally forcibly evicted Rennie's Mill's residents, ostensibly to make room for new town developments as part of the Tseung Kwan O New Town, but widely understood to be a move to please the Communist Chinese government before Hong Kong reverted to Communist Chinese rule in 1997. Before the eviction, Rennie's Mill could be reached by the winding, hilly and narrow Po Lam Road South. At that time, Rennie's Mill's only means of public transport were routes 90 and 290 of the Kowloon Motor Bus, which were operated by minibuses, and water transport.
The Republic of China on Taiwan promulgated the Laws and Regulations Regarding Hong Kong & Macao Affairs on 2 April 1997 by Presidential Order, and on 19 June 1997 the Executive Yuan ordered the provisions pertaining to Hong Kong to take effect on 1 July 1997. The United States–Hong Kong Policy Act, more commonly known as the Hong Kong Policy Act (P.L. 102-383, 106 Stat. 1448), is a 1992 act enacted by the United States Congress. It allows the United States to continue to treat Hong Kong separately from China in matters concerning trade, export and economic controls after the handover. The United States was represented at the Hong Kong handover ceremony by then Secretary of State Madeleine Albright; however, she partially boycotted the ceremony in protest at China's dissolution of the democratically elected Hong Kong legislature. Scholars have begun to study the complexities of the transfer as shown in popular media such as films, television, and video and online games. For example, the Hong Kong director Fruit Chan made a sci-fi thriller, The Midnight After (2014), that stressed the sense of loss and alienation represented by survivors in an apocalyptic Hong Kong; Chan infused a political agenda into the film by playing on Hong Kongers' collective anxiety towards communist China. Yiman Wang has argued that America has viewed China through the prism of films from Shanghai and Hong Kong, with a recent emphasis on futuristic disaster films set in a Hong Kong where the transfer has gone awry.
guns and roses sweet child of mine video
Sweet Child O' Mine - wikipedia "Sweet Child o' Mine" is a song by American rock band Guns N' Roses, appearing on their debut album, Appetite for Destruction. Released in August 1988 as the album's third single, the song topped the Billboard Hot 100 chart, becoming the band's only number 1 US single. Billboard ranked it the number 5 song of 1988. Re-released in 1989, it reached number 6 on the UK Singles Chart. Guitarist Slash said in 1990, "(The song) turned into a huge hit and now it makes me sick. I mean, I like it, but I hate what it represents." Slash has been quoted as having an initial disdain for the song due to its roots as simply a "string skipping" exercise and a joke at the time. During a jam session at the band's house on the Sunset Strip, drummer Steven Adler and Slash were warming up and Slash began to play a "circus" melody while making faces at Adler. Rhythm guitarist Izzy Stradlin asked Slash to play it again. Stradlin came up with some chords, Duff McKagan created a bassline, and Adler planned a beat. In his autobiography, Slash said that "within an hour my guitar exercise had become something else". Lead singer Axl Rose was listening to the musicians upstairs in his room and was inspired to write lyrics, which he completed by the following afternoon. He based them on his girlfriend Erin Everly, and declared that Lynyrd Skynyrd served as an inspiration "to make sure that we'd got that heartfelt feeling". At the next composing session, in Burbank, the band added a bridge and a guitar solo. When the band recorded demos with producer Spencer Proffer, he suggested adding a breakdown at the song's end. The musicians agreed, but were not sure what to do. Listening to the demo in a loop, Rose started saying to himself, "Where do we go? Where do we go now?", and Proffer suggested that he sing that. The "Sweet Child o' Mine" video depicts the band rehearsing in the Huntington Ballroom at Huntington Beach, surrounded by crew members. All of the band members' girlfriends at the time were shown in the clip: Rose's girlfriend Erin Everly, whose father is Don Everly of The Everly Brothers; McKagan's girlfriend Mandy Brix, from the all-female rock band the Lame Flames; Stradlin's girlfriend Angela Nicoletti; Adler's girlfriend Cheryl Swiderski; and Slash's girlfriend Sally McLaughlin. Stradlin's dog was also featured. The video was successful on MTV, and helped launch the song to success on mainstream radio. To make "Sweet Child o' Mine" more marketable to MTV and radio stations, the song was cut from 5:56 to 4:59 for the video and radio edit, with much of Slash's solo removed. This drew the ire of the band, including Rose, who commented on it in a 1989 interview with Rolling Stone: "I hate the radio edit of 'Sweet Child o' Mine.' Radio stations said, 'Well, your vocals aren't cut.' My favorite part of the song is Slash's slow solo; it's the heaviest part for me. There's no reason for it to be missing except to create more space for commercials, so the radio-station owners can get more advertising dollars. When you get the chopped version of 'Paradise City' or half of 'Sweet Child' and 'Patience' cut, you're getting screwed." The video uses the same edits as the radio version. A 7-inch vinyl format and a cassette single were released. The album version of the song was included on the US single release, while the UK single was the "edit/remix" version. The 12-inch vinyl format also contained the longer LP version.
The B-side of the single is a non-album live version of "It's So Easy". In an interview on Eddie Trunk's New York radio show in May 2006, Rose stated that his original concept for the video focused on the theme of drug trafficking. According to Rose, the video was to depict an Asian woman carrying a baby into a foreign land, only to discover at the end that the child was dead and filled with heroin. This concept was rejected by Geffen Records. There is also an alternative video for "Sweet Child o' Mine", shot in the same location but with different shots and filmed in black and white. "Sweet Child o' Mine" placed number 37 on Guitar World's list of the "100 Greatest Guitar Solos". It also came in at number 3 on Blender's 500 Greatest Songs Since You Were Born, and at number 198 on Rolling Stone's The 500 Greatest Songs of All Time. In March 2005, Q magazine placed it at number 6 in its list of the 100 Greatest Guitar Tracks. In a 2004 Total Guitar magazine poll, the introduction's famous riff was voted the number 1 riff of all time by the readers of the magazine. It was also included in Rolling Stone's 40 Greatest Songs that Changed the World. It placed number 7 on VH1's "100 Greatest Songs of the '80s", and number 210 on the RIAA Songs of the Century list. The song is currently ranked as the 104th greatest song of all time, as well as the best song of 1987, by Acclaimed Music. The song had sold 2,609,000 digital copies in the United States as of March 2012. In 2015, the web page of the Australian music TV channel MAX published an article by music writer Nathan Jolly that noted similarities between "Sweet Child o' Mine" and the song "Unpublished Critics" by the Australian band Australian Crawl, from 1981. The article included both songs, inviting readers to compare the two. It also cited a reader's comment on an earlier article that had originally drawn attention to the similarities between the songs; as of May 2015, this comment no longer appeared on the earlier article. The story went viral quickly, encouraging several comments on both the MAX article and the suggestion that "Unpublished Critics" had influenced "Sweet Child o' Mine", including one from Duff McKagan, the bass player with Guns N' Roses when "Sweet Child o' Mine" was written and recorded. McKagan found the similarities between the songs "stunning," but said he had not previously heard "Unpublished Critics." The song was covered by Sheryl Crow on the soundtrack to Big Daddy, and released as a bonus track on her third studio album, The Globe Sessions. This version earned her a Grammy Award for Best Female Rock Vocal Performance. The recording was produced by Rick Rubin and Crow. A music video for Crow's version was also released, directed by Stéphane Sednaoui. Crow performed the song live at Woodstock '99. Ultimate Classic Rock profiled the song as part of a series on "Terrible Classic Rock Covers", and Rolling Stone readers named it the 4th worst cover song of all time. "Sweet Child O' Mine" has also been used in several films.
u.s. district court for the district of rhode island
United States District Court for the District of Rhode Island - wikipedia The United States District Court for the District of Rhode Island (in case citations, D.R.I.) is the federal district court whose jurisdiction is the state of Rhode Island. The District Court was created in 1790, the year Rhode Island ratified the Constitution, and the federal courthouse was built in 1908. Appeals from the District of Rhode Island are taken to the United States Court of Appeals for the First Circuit (except for patent claims and claims against the U.S. government under the Tucker Act, which are appealed to the Federal Circuit). The United States Attorney for the District of Rhode Island represents the United States in civil and criminal litigation in the court. Stephen G. Dambruch has been the Interim United States Attorney since January 5, 2018. The United States District Court for the District of Rhode Island was established on June 23, 1790, by 1 Stat. 128. Congress authorized one judgeship for the court and assigned the district to the Eastern Circuit. On February 13, 1801, the outgoing lame-duck Federalist-controlled Congress passed the controversial Judiciary Act of 1801, which reassigned the District of Rhode Island to the First Circuit. The incoming Congress repealed the Judiciary Act of 1801, but in the Judiciary Act of 1802 Congress again assigned the District of Rhode Island to the First Circuit. A second seat on the court was created on March 18, 1966, by 80 Stat. 75. A third seat was added on July 10, 1984, by 98 Stat. 333. Chief judges have administrative responsibilities with respect to their district court. Unlike the Supreme Court, where one justice is specifically nominated to be chief, the office of chief judge rotates among the district court judges. To be chief, a judge must have been in active service on the court for at least one year, be under the age of 65, and not have previously served as chief judge. A vacancy is filled by the judge highest in seniority among the group of qualified judges. The chief judge serves for a term of seven years or until age 70, whichever occurs first. The age restrictions are waived if no members of the court would otherwise be qualified for the position. When the office was created in 1948, the chief judge was the longest-serving judge who had not elected to retire on what has since 1958 been known as senior status, or declined to serve as chief judge. After August 6, 1959, judges could not become or remain chief after turning 70 years old. The current rules have been in operation since October 1, 1982.
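The chief-judge eligibility rules above amount to a small selection procedure, sketched below as a minimal illustration. The Judge record and the select_chief_judge helper are hypothetical names invented for this example, and the waiver branch is a simplified reading of the rule that age restrictions are lifted only when nobody would otherwise qualify; this is an illustrative model, not an official implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Judge:
    name: str
    age: int
    years_active: float    # years in active service on this court
    seniority_rank: int    # 1 = most senior judge on the bench
    served_as_chief: bool  # has previously held the office

def select_chief_judge(judges: List[Judge]) -> Optional[Judge]:
    """Model the rotation rule described above: at least one year of
    active service, under 65, no prior term as chief; the most senior
    qualified judge fills the vacancy."""
    def qualified(j: Judge, waive_age: bool = False) -> bool:
        return (j.years_active >= 1
                and not j.served_as_chief
                and (waive_age or j.age < 65))

    pool = [j for j in judges if qualified(j)]
    if not pool:
        # the age restriction is waived if nobody would otherwise qualify
        pool = [j for j in judges if qualified(j, waive_age=True)]
    return min(pool, key=lambda j: j.seniority_rank) if pool else None

# Hypothetical bench for illustration:
bench = [
    Judge("A", age=68, years_active=20, seniority_rank=1, served_as_chief=True),
    Judge("B", age=59, years_active=12, seniority_rank=2, served_as_chief=False),
    Judge("C", age=50, years_active=4, seniority_rank=3, served_as_chief=False),
]
print(select_chief_judge(bench).name)  # -> "B", the most senior qualified judge
```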
where is the lung located in human body
Lung - Wikipedia The lungs are the primary organs of the respiratory system in humans and many other animals, including a few fish and some snails. In mammals and most other vertebrates, two lungs are located near the backbone on either side of the heart. Their function in the respiratory system is to extract oxygen from the atmosphere and transfer it into the bloodstream, and to release carbon dioxide from the bloodstream into the atmosphere, in a process of gas exchange. Respiration is driven by different muscular systems in different species: mammals, reptiles and birds each use their own sets of muscles to support and drive breathing. In early tetrapods, air was driven into the lungs by the pharyngeal muscles via buccal pumping, a mechanism still seen in amphibians. In humans, the main muscle of respiration that drives breathing is the diaphragm. The lungs also provide airflow that makes vocal sounds, including human speech, possible. Humans have two lungs, a right lung and a left lung. They are situated within the thoracic cavity of the chest. The right lung is bigger than the left, which shares space in the chest with the heart. The lungs together weigh approximately 1.3 kilograms (2.9 lb), and the right is the heavier. The lungs are part of the lower respiratory tract, which begins at the trachea and branches into the bronchi and bronchioles, and which receives air breathed in via the conducting zone. The conducting zone ends at the terminal bronchioles. These divide into the respiratory bronchioles of the respiratory zone, which divide into alveolar ducts that give rise to the microscopic alveoli, where gas exchange takes place. Together, the lungs contain approximately 2,400 kilometres (1,500 mi) of airways and 300 to 500 million alveoli. Each lung is enclosed within a pleural sac which allows the inner and outer walls to slide over each other while breathing takes place, without much friction. This sac also divides each lung into sections called lobes. The right lung has three lobes and the left has two. The lobes are further divided into bronchopulmonary segments and lobules. The lungs have a unique blood supply, receiving deoxygenated blood from the heart in the pulmonary circulation for the purposes of receiving oxygen and releasing carbon dioxide, and a separate supply of oxygenated blood to the tissue of the lungs in the bronchial circulation. The tissue of the lungs can be affected by a number of diseases, including pneumonia and lung cancer. Chronic obstructive pulmonary disease, which includes chronic bronchitis and what was previously termed emphysema, can be related to smoking or exposure to harmful substances such as coal dust, asbestos fibres and crystalline silica dust. Diseases such as bronchitis can also affect the respiratory tract. Medical terms related to the lung often begin with pulmo-, from the Latin pulmonarius (of the lungs), as in pulmonology, or with pneumo- (from the Greek πνεύμων, "lung"), as in pneumonia. In embryonic development, the lungs begin to develop as an outpouching of the foregut, a tube which goes on to form the upper part of the digestive system. When the lungs are formed the fetus is held in the fluid-filled amniotic sac, and so they do not function to breathe. Blood is also diverted from the lungs through the ductus arteriosus. At birth, however, air begins to pass through the lungs, and the diversionary duct closes, so that the lungs can begin to respire. The lungs only fully develop in early childhood.
The lungs are located in the chest on either side of the heart in the rib cage. They are conical in shape, with a narrow rounded apex at the top and a broad concave base that rests on the convex surface of the diaphragm. The apex of the lung extends into the root of the neck, reaching shortly above the level of the sternal end of the first rib. The lungs stretch from close to the backbone in the rib cage to the front of the chest, and downwards from the lower part of the trachea to the diaphragm. The left lung shares space with the heart, and has an indentation in its border, called the cardiac notch of the left lung, to accommodate this. The front and outer sides of the lungs face the ribs, which make light indentations on their surfaces. The medial surfaces of the lungs face towards the centre of the chest, and lie against the heart, the great vessels, and the carina, where the trachea divides into the two main bronchi. The cardiac impression is an indentation formed on the surfaces of the lungs where they rest against the heart. Both lungs have a central recession called the hilum at the root of the lung, where the blood vessels and airways pass into the lungs; there are also bronchopulmonary lymph nodes on the hilum. The lungs are surrounded by the pulmonary pleurae. The pleurae are two serous membranes; the outer parietal pleura lines the inner wall of the rib cage, and the inner visceral pleura directly lines the surface of the lungs. Between the pleurae is a potential space called the pleural cavity, containing a thin layer of lubricating pleural fluid. Each lung is divided into lobes by infoldings of the pleura known as fissures. The fissures are double folds of pleura that section the lungs and help in their expansion. The main or primary bronchi enter the lungs at the hilum and initially branch into secondary bronchi, also known as lobar bronchi, that supply air to each lobe of the lung. The lobar bronchi branch into tertiary bronchi, also known as segmental bronchi, and these supply air to the further divisions of the lobes known as bronchopulmonary segments. Each bronchopulmonary segment has its own (segmental) bronchus and arterial supply. The segmental anatomy is useful clinically for localising disease processes in the lungs. A segment is a discrete unit that can be surgically removed without seriously affecting surrounding tissue. The right lung has both more lobes and more segments than the left. It is divided into three lobes, an upper, a middle, and a lower, by two fissures, one oblique and one horizontal. The upper, horizontal fissure separates the upper from the middle lobe. It begins in the lower oblique fissure near the posterior border of the lung and, running horizontally forward, cuts the anterior border on a level with the sternal end of the fourth costal cartilage; on the mediastinal surface it may be traced backward to the hilum. The lower, oblique fissure separates the lower from the middle and upper lobes, and is closely aligned with the oblique fissure in the left lung. The mediastinal surface of the right lung is indented by a number of nearby structures. The heart sits in an impression called the cardiac impression. Above the hilum of the lung is an arched groove for the azygos vein, and above this is a wide groove for the superior vena cava and right brachiocephalic vein; behind this, and close to the top of the lung, is a groove for the brachiocephalic artery.
There is a groove for the esophagus behind the hilum and the pulmonary ligament, and near the lower part of the esophageal groove is a deeper groove for the inferior vena cava before it enters the heart. The left lung is divided into two lobes, an upper and a lower, by the oblique fissure, which extends from the costal to the mediastinal surface of the lung both above and below the hilum. The left lung, unlike the right, does not have a middle lobe, though it does have a homologous feature, a projection of the upper lobe termed the lingula, whose name means "little tongue". The lingula on the left serves as an anatomic parallel to the right middle lobe, with both areas being predisposed to similar infections and anatomic complications. There are two bronchopulmonary segments of the lingula: superior and inferior. The mediastinal surface of the left lung has a large cardiac impression where the heart sits. This is deeper and larger than that on the right lung, as at this level the heart projects to the left. On the same surface, immediately above the hilum, is a well-marked curved groove for the aortic arch, and a groove below it for the descending aorta. The left subclavian artery, a branch off the aortic arch, sits in a groove from the arch to near the apex of the lung. A shallower groove in front of the artery and near the edge of the lung lodges the left brachiocephalic vein. The esophagus may sit in a wider, shallow impression at the base of the lung. The lungs are part of the lower respiratory tract, and accommodate the bronchial airways where they branch from the trachea. The lungs include the bronchial airways that terminate in alveoli, the lung tissue in between, and the veins, arteries, nerves and lymphatic vessels. The trachea and bronchi have plexuses of lymph capillaries in their mucosa and submucosa. The smaller bronchi have a single layer of lymph capillaries, and these are absent in the alveoli. All of the lower respiratory tract, including the trachea, bronchi, and bronchioles, is lined with respiratory epithelium. This is a ciliated epithelium interspersed with goblet cells, which produce mucus, and club cells, with actions similar to macrophages. Incomplete rings of cartilage in the trachea and smaller plates of cartilage in the bronchi keep these airways open. Bronchioles are too narrow to support cartilage, and their walls are of smooth muscle; this is largely absent in the narrower respiratory bronchioles, which consist mainly of epithelium. The respiratory tract ends in lobules. Each lobule consists of a respiratory bronchiole, which branches into alveolar ducts and alveolar sacs, which in turn divide into alveoli. The epithelial cells throughout the respiratory tract secrete epithelial lining fluid (ELF), the composition of which is tightly regulated and determines how well mucociliary clearance works. Alveoli consist of two types of alveolar cell and an alveolar macrophage. The two types of cell are known as type I and type II alveolar cells (also known as pneumocytes). Types I and II make up the walls and the alveolar septa. Type I cells provide 95% of the surface area of each alveolus and are flat ("squamous"); type II cells generally cluster in the corners of the alveoli and have a cuboidal shape. Despite this difference in surface coverage, the two cell types occur in a roughly equal numerical ratio, of between 1:1 and 6:4. Type I cells are squamous epithelial cells that make up the alveolar wall structure. They have extremely thin walls that enable easy gas exchange. These type I cells also make up the alveolar septa, which separate each alveolus.
The septa consist of an epithelial lining and associated basement membranes. Type I cells are not able to divide, and consequently rely on differentiation from type II cells. Type II cells are larger; they line the alveoli and produce and secrete epithelial lining fluid and lung surfactant. Type II cells are able to divide and differentiate into type I cells. The alveolar macrophages have an important immunological role: they remove substances which deposit in the alveoli, including loose red blood cells that have been forced out of blood vessels. The lung is surrounded by a serous membrane of visceral pleura, which has an underlying layer of loose connective tissue attached to the substance of the lung. The lower respiratory tract is part of the respiratory system, and consists of the trachea and the structures below it, including the lungs. The trachea receives air from the pharynx and travels down to a point (the carina) where it splits into a right and a left bronchus. These supply air to the right and left lungs, splitting progressively into the secondary and tertiary bronchi for the lobes of the lungs, and into smaller and smaller bronchioles until they become the respiratory bronchioles. These in turn supply air through alveolar ducts into the alveoli, where the exchange of gases takes place. Oxygen breathed in diffuses through the walls of the alveoli into the enveloping capillaries and into the circulation, and carbon dioxide diffuses from the blood into the lungs to be breathed out. Estimates of the total surface area of the lungs vary from 50 to 75 square metres (540 to 810 sq ft), roughly the same area as one side of a tennis court. The bronchi in the conducting zone are reinforced with hyaline cartilage in order to hold open the airways. The bronchioles have no cartilage and are surrounded instead by smooth muscle. Air is warmed to 37 °C (99 °F), humidified and cleansed by the conducting zone; particles from the air are removed by the cilia on the respiratory epithelium lining the passageways. The lungs have a dual blood supply, provided by a bronchial and a pulmonary circulation. The bronchial circulation supplies oxygenated blood to the airways of the lungs through the bronchial arteries that leave the aorta. There are usually three arteries, two to the left lung and one to the right, and they branch alongside the bronchi and bronchioles. The pulmonary circulation carries deoxygenated blood from the heart to the lungs and returns the oxygenated blood to the heart to supply the rest of the body. The blood volume of the lungs is about 450 millilitres on average, about 9 per cent of the total blood volume of the circulatory system. This quantity can easily fluctuate between one half and twice the normal volume. The lungs are supplied by nerves of the autonomic nervous system. Input from the parasympathetic nervous system occurs via the vagus nerve; when stimulated by acetylcholine, this causes constriction of the smooth muscle lining the bronchi and bronchioles, and increases the secretions from glands. The lungs also have a sympathetic tone from norepinephrine acting on the beta-2 receptors in the respiratory tract, which causes bronchodilation. The action of breathing takes place because of nerve signals sent by the respiratory centres in the brainstem, along the phrenic nerve, to the diaphragm. The human lungs arise in development from the laryngotracheal groove, and develop to maturity over several weeks in the foetus and for several years following birth.
The larynx, trachea, bronchi and lungs that make up the respiratory tract begin to form during the fourth week of embryogenesis from the lung bud, which appears ventrally to the caudal portion of the foregut. The respiratory tract has a branching structure like that of a tree. In the embryo this structure is developed in the process of branching morphogenesis, and is generated by the repeated splitting of the tip of the branch. In the development of the lungs (as in some other organs) the epithelium forms branching tubes. The lung has a left-right symmetry, and each bud, known as a bronchial bud, grows out as a tubular epithelium that becomes a bronchus. Each bronchus branches into bronchioles. The branching is a result of the tip of each tube bifurcating, and the branching process forms the bronchi, bronchioles, and ultimately the alveoli. The four genes most associated with branching morphogenesis in the lung are the intercellular signalling protein sonic hedgehog (SHH), the fibroblast growth factors FGF10 and FGFR2b, and the bone morphogenetic protein BMP4. FGF10 is seen to have the most prominent role: it is a paracrine signalling molecule needed for epithelial branching, and SHH inhibits FGF10. The development of the alveoli is influenced by a different mechanism, whereby continued bifurcation is stopped and the distal tips become dilated to form the alveoli. At the end of the fourth week the lung bud divides into two, the right and left primary bronchial buds, on each side of the trachea. During the fifth week the right bud branches into three secondary bronchial buds and the left branches into two secondary bronchial buds. These give rise to the lobes of the lungs, three on the right and two on the left. Over the following week, the secondary buds branch into tertiary buds, about ten on each side. From the sixth week to the sixteenth week, the major elements of the lungs appear, except the alveoli. From week 16 to week 26, the bronchi enlarge and lung tissue becomes highly vascularised. Bronchioles and alveolar ducts also develop. By week 26 the terminal bronchioles have formed, each branching into two respiratory bronchioles. During the period from the 26th week until birth, the important blood-air barrier is established. The specialised type I alveolar cells, where gas exchange will take place, together with the type II alveolar cells that secrete pulmonary surfactant, appear. The surfactant reduces the surface tension at the air-alveolar surface, which allows expansion of the alveolar sacs. The alveolar sacs contain the primitive alveoli that form at the end of the alveolar ducts, and their appearance around the seventh month marks the point at which limited respiration would be possible, and at which a premature baby could survive. At birth, the baby's lungs are filled with fluid secreted by the lungs and are not inflated. After birth the infant's central nervous system reacts to the sudden change in temperature and environment. This triggers the first breath, within about 10 seconds of delivery. Before birth, the lungs are filled with fetal lung fluid; after the first breath, the fluid is quickly absorbed into the body or exhaled. The resistance in the lung's blood vessels decreases, giving an increased surface area for gas exchange, and the lungs begin to breathe spontaneously. This accompanies other changes which result in an increased amount of blood entering the lung tissues. At birth the lungs are very undeveloped, with only around one sixth of the alveoli of the adult lung present.
The alveoli continue to form into early adulthood, and their ability to form when necessary is seen in the regeneration of the lung. Alveolar septa initially have a double capillary network, instead of the single network of the developed lung; only after the maturation of the capillary network can the lung enter a normal phase of growth. Following the early growth in the number of alveoli there is another stage in which the alveoli are enlarged. The lungs are not capable of expanding themselves, and will only do so when there is an increase in the volume of the thoracic cavity. This is achieved by the muscles of respiration, through the contraction of the diaphragm and of the intercostal muscles, which pull the rib cage upwards. During breathing out, at rest, the muscles of inhalation relax, returning the chest and abdomen to a resting position determined by their anatomical elasticity. At this point the lungs contain the functional residual capacity (FRC) of air, which, in the adult human, has a volume of about 2.5 to 3.0 litres. During heavy breathing, as for instance during exercise, a large number of accessory muscles in the neck and abdomen are recruited, and exhalation, instead of being passive, is caused by the rib cage being actively pulled downwards at the front and sides by the abdominal muscles. This decreases the size of the rib cage and pushes the abdominal organs up against the diaphragm, which bulges into the thorax; the lung volume at the end of such an exhalation is then less than the resting FRC. However, the lungs cannot be emptied completely: in an adult human there is always at least one litre of residual air left in the lungs after maximum exhalation. The FRC always remains in the alveoli after a normal out-breath. With each breath only about 350 ml (less than 15%) of this alveolar air is expelled into the ambient air, with about 150 ml remaining behind in the airways as "dead space" ventilation, as it is the first air to re-enter the alveoli on inhalation. The expelled air is immediately replaced with the same volume of fresh, but moistened, atmospheric air. The 350 ml of inhaled fresh air is therefore highly diluted by the roughly 3 litres of functional residual capacity air (the air that remains in the lungs after a normal exhalation), so the composition of the alveolar air changes very little under normal circumstances: the alveolar partial pressure of oxygen remains very close to 14 kPa (105 mmHg), and that of carbon dioxide varies minimally around 5.3 kPa (40 mmHg) throughout the respiratory cycle of inhalation and exhalation. The major function of the lungs is gas exchange between the lungs and the blood. The alveolar and pulmonary capillary gases equilibrate across the blood-air barrier, a very thin diffusion membrane, on average only about 2 μm thick, consisting of the walls of the pulmonary alveoli (the alveolar epithelial cells and their basement membranes) and the endothelial cells of the pulmonary capillaries. This membrane is folded into about 300 million small air sacs called alveoli (each between 75 and 300 μm in diameter) branching off from the bronchioles in the lungs, thus providing an extremely large surface area (estimates vary between 70 and 145 m²) for gas exchange to occur.
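The dilution arithmetic above is easy to verify. The short sketch below simply restates the representative figures quoted in this section (a 500 ml tidal breath, 150 ml of dead space, and roughly 3 litres of functional residual capacity); these are typical resting values used for illustration, not measurements.

```python
# Illustrative resting-adult figures taken from the text above.
tidal_volume_ml = 500   # air moved per breath
dead_space_ml = 150     # remains in the conducting airways each breath
frc_ml = 3000           # functional residual capacity after a normal exhalation

# Fresh air actually reaching the alveoli per breath:
alveolar_ventilation_ml = tidal_volume_ml - dead_space_ml
# Fraction of the alveolar gas reservoir replaced per breath:
turnover_fraction = alveolar_ventilation_ml / frc_ml

print(f"Fresh air per breath: {alveolar_ventilation_ml} ml")
print(f"Alveolar gas replaced per breath: {turnover_fraction:.1%}")
# -> 350 ml and roughly 12% of the 3-litre reservoir, which is why the
#    alveolar partial pressures quoted above barely move within one cycle.
```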
Since the blood gases in the alveolar capillaries equilibrate with those in the alveolar air, the arterial blood that is spread evenly throughout the body by the left ventricle has the same partial pressures of oxygen and carbon dioxide as persist in the alveoli. The arterial partial pressures of oxygen and carbon dioxide are homeostatically controlled. A rise in the arterial partial pressure of carbon dioxide and, to a lesser extent, a fall in the arterial partial pressure of oxygen will reflexly cause deeper and faster breathing until the blood gas tensions return to normal. The converse happens when the carbon dioxide tension falls or, again to a lesser extent, the oxygen tension rises: the rate and depth of breathing are reduced until blood gas normality is restored. It is only because the composition of the roughly 3 litres of alveolar air is so accurately maintained that, with each breath, some carbon dioxide is discharged into the atmosphere and some oxygen is taken up from the outside air. If more carbon dioxide than usual has been lost through a short period of hyperventilation, breathing will be slowed down or halted until the alveolar partial pressure of carbon dioxide has returned to 5.3 kPa (40 mmHg); the opposite occurs after breath-holding. The partial pressures of oxygen and carbon dioxide in the arterial blood are measured by the peripheral chemoreceptors (the aortic and carotid bodies) and by the central chemoreceptors of the medulla oblongata of the brainstem. The peripheral chemoreceptors are more sensitive to the arterial partial pressure of oxygen than to that of carbon dioxide, while the central chemoreceptor is particularly sensitive to the pH of the cerebrospinal fluid, which is directly influenced by the partial pressure of carbon dioxide in the arterial blood. Information from the peripheral chemoreceptors is relayed to a series of interconnected nuclei which comprise the respiratory centres in the medulla oblongata and the pons of the brainstem. This information determines the average rate of ventilation of the alveoli of the lungs, keeping the partial pressures of oxygen and carbon dioxide in the arterial blood constant; the respiratory centre does so via motor neurons which activate the muscles of respiration, in particular the diaphragm. Exercise also increases the respiratory rate, partly in response to the movement of the limbs detected by proprioceptors in the muscles and joints, an increase in body temperature, the release of adrenaline (epinephrine) from the adrenal glands, and motor impulses originating from the cerebral cortex. The lungs possess several characteristics which protect against infection. The respiratory tract is lined by epithelia with hair-like projections called cilia that beat rhythmically and carry mucus. This mucociliary clearance is an important defence system against air-borne infection. Dust particles and bacteria in the inhaled air are caught in the mucosal surface of the respiratory passages and are moved up towards the pharynx by the rhythmic upward beating action of the cilia. The lining of the lung also secretes immunoglobulin A, which protects against respiratory infections; goblet cells secrete mucus which also contains several antimicrobial compounds such as defensins, antiproteases, and antioxidants.
In addition, the lining of the lung contains macrophages, immune cells which engulf and destroy debris and microbes that enter the lung in a process known as phagocytosis, and dendritic cells, which present antigens to activate components of the adaptive immune system such as T cells and B cells. The size of the respiratory tract and the flow of air also protect the lungs from larger particles: smaller particles deposit in the mouth and behind the mouth in the oropharynx, and larger particles are trapped in nasal hair after inhalation. In addition to their function in respiration, the lungs have a number of other functions. They are involved in maintaining homeostasis, helping in the regulation of blood pressure as part of the renin-angiotensin system: the inner lining of the blood vessels secretes angiotensin-converting enzyme (ACE), an enzyme that catalyses the conversion of angiotensin I to angiotensin II. The lungs are involved in the blood's acid-base homeostasis by expelling carbon dioxide during breathing. The lungs also serve a protective role. Several blood-borne substances, such as a few types of prostaglandins, leukotrienes, serotonin and bradykinin, are excreted through the lungs, and drugs and other substances can be absorbed, modified or excreted in the lungs. The lungs filter small blood clots out of the veins and prevent them from entering the arteries and causing strokes. The lungs also play a pivotal role in speech by providing air and airflow for the creation of vocal sounds, and they provide the airflow that enables the expression of many emotions, such as sighing, yawning, sobbing and laughing in humans, and the vocal sounds of other animals. About 20,000 protein-coding genes are expressed in human cells, and almost 75% of these genes are expressed in the normal lung. A little fewer than 200 of these genes are more specifically expressed in the lung, with fewer than 20 genes being highly lung-specific. The corresponding specific proteins are expressed within different cellular compartments, such as the pneumocytes in the alveoli and the ciliated and mucus-secreting goblet cells in the respiratory mucosa. The proteins with the highest lung-specific expression are different surfactant proteins, such as SFTPA1, SFTPB and SFTPC, and napsin, expressed in type II pneumocytes. Other proteins with elevated expression in the lung are the dynein protein DNAH5 in ciliated cells, and the secreted SCGB1A1 protein in the mucus-secreting goblet cells of the airway mucosa. The lungs can be affected by a variety of diseases. Pulmonology is the medical speciality that deals with diseases involving the respiratory tract, and cardiothoracic surgery is the surgical field that deals with surgery of the lungs. Inflammatory conditions of the lung tissue include pneumonia; of the respiratory tract, bronchitis and bronchiolitis; and of the pleurae surrounding the lungs, pleurisy. Inflammation is usually caused by infections due to bacteria or viruses. When the lung tissue is inflamed due to other causes it is called pneumonitis. One major cause of bacterial pneumonia is tuberculosis. Chronic infections often occur in those with immunodeficiency, and can include a fungal infection by Aspergillus fumigatus that can lead to an aspergilloma forming in the lung. A pulmonary embolism is a blood clot that becomes lodged in the pulmonary arteries. The majority of emboli arise because of deep vein thrombosis in the legs.
Pulmonary emboli may be investigated using a ventilation/perfusion scan, a CT scan of the arteries of the lung, or blood tests such as the D-dimer. Pulmonary hypertension describes an increased pressure at the beginning of the pulmonary artery, and has a large number of differing causes. Other, rarer conditions may also affect the blood supply of the lung, such as granulomatosis with polyangiitis, which causes inflammation of the small blood vessels of the lungs and kidneys. A lung contusion is a bruise caused by chest trauma. It results in hemorrhage into the alveoli, causing a build-up of fluid which can impair breathing; this can be either mild or severe. The function of the lungs can also be affected by compression from fluid in the pleural cavity (a pleural effusion), or by other substances such as air (pneumothorax), blood (hemothorax), or rarer causes. These may be investigated using a chest X-ray or a CT scan, and may require the insertion of a surgical drain until the underlying cause is identified and treated. Asthma, chronic bronchitis, bronchiectasis and chronic obstructive pulmonary disease (COPD) are all obstructive lung diseases, characterised by airway obstruction. This limits the amount of air that is able to enter the alveoli, because of constriction of the bronchial tree due to inflammation. Obstructive lung diseases are often identified because of symptoms and diagnosed with pulmonary function tests such as spirometry. Many obstructive lung diseases are managed by avoiding triggers (such as dust mites or smoking), with symptom control such as bronchodilators, and with suppression of inflammation (such as through corticosteroids) in severe cases. One common cause of COPD and emphysema is smoking, and common causes of bronchiectasis include severe infections and cystic fibrosis. The definitive cause of asthma is not yet known. Some types of chronic lung disease are classified as restrictive lung disease, because of a restriction in the amount of lung tissue involved in respiration. These include pulmonary fibrosis, which can occur when the lung is inflamed for a long period of time; fibrosis in the lung replaces functioning lung tissue with fibrous connective tissue. This can be due to a large variety of occupational diseases, such as coalworker's pneumoconiosis, to autoimmune diseases or, more rarely, to a reaction to medication. Lung cancer can either arise directly from lung tissue or as a result of metastasis from another part of the body. There are two main types of primary tumour, described as either small-cell or non-small-cell lung carcinomas. The major risk factor for cancer is smoking. Once a cancer is identified it is staged using scans such as a CT scan, and a sample of tissue (a biopsy) is taken. Cancers may be treated by surgically removing the tumour, by radiotherapy, by chemotherapy, or by combinations thereof, or with the aim of symptom control. Lung cancer screening is recommended in the United States for high-risk populations. Congenital disorders include cystic fibrosis, pulmonary hypoplasia (an incomplete development of the lungs), congenital diaphragmatic hernia, and infant respiratory distress syndrome caused by a deficiency in lung surfactant. An azygos lobe is a congenital anatomical variation which, though usually without effect, can cause problems in thoracoscopic procedures. A pneumothorax (collapsed lung) is an abnormal collection of air in the pleural space that causes an uncoupling of the lung from the chest wall.
The lung cannot expand against the air pressure inside the pleural space. A straightforward example is a traumatic pneumothorax, where air enters the pleural space from outside the body, as occurs with a puncture to the chest wall. Similarly, in a scuba diver who ascends while holding their breath with the lungs fully inflated, the expanding air can cause air sacs (alveoli) to burst and leak high - pressure air into the pleural space. Lung function testing is carried out by evaluating a person 's capacity to inhale and exhale in different circumstances. The volume of air inhaled and exhaled by a person at rest is the tidal volume (normally 500 - 750 mL); the inspiratory reserve volume and expiratory reserve volume are the additional amounts a person is able to forcibly inhale and exhale respectively. The summed total of a maximal inspiration followed by a maximal expiration is a person 's vital capacity. Not all air is expelled from the lungs even after a forced breath out; the remainder of the air is called the residual volume. Together these terms are referred to as lung volumes (a short worked example of how these volumes combine appears at the end of this article). Pulmonary plethysmographs are used to measure functional residual capacity. Functional residual capacity cannot be measured by tests that rely on breathing out, as a person is only able to breathe out a maximum of 80 % of their total functional capacity. The total lung capacity depends on the person 's age, height, weight, and sex, and normally ranges between 4 and 6 litres. Females tend to have a 20 -- 25 % lower capacity than males. Tall people tend to have a larger total lung capacity than shorter people. Smokers have a lower capacity than nonsmokers. Thinner persons tend to have a larger capacity, and capacity can be increased by physical training by as much as 40 %. Other lung function tests include spirometry, which measures the amount (volume) and flow of air that can be inhaled and exhaled. The maximum volume of breath that can be exhaled is called the vital capacity. In particular, spirometry measures how much a person is able to exhale in one second (the forced expiratory volume, FEV1) as a proportion of how much they are able to exhale in total (the forced vital capacity, FVC). This ratio, the FEV1 / FVC ratio, is important in distinguishing whether a lung disease is restrictive or obstructive. Another test is that of the lung 's diffusing capacity -- this is a measure of the transfer of gas from air to the blood in the lung capillaries. The lungs of birds are relatively small, but are connected to 8 or 9 air sacs that extend through much of the body, and are in turn connected to air spaces within the bones. On inhalation, air travels through the trachea of a bird into the air sacs. Air then travels continuously from the air sacs at the back, through the lungs, which are relatively fixed in size, to the air sacs at the front. From here, the air is exhaled. These fixed - size lungs are called "circulatory lungs '', as distinct from the "bellows - type lungs '' found in most other animals. The lungs of birds contain millions of tiny parallel passages called parabronchi. Small sacs called atria radiate from the walls of the tiny passages; these, like the alveoli in other lungs, are the site of gas exchange by simple diffusion. The blood flow around the parabronchi and their atria forms a cross-current process of gas exchange. The air sacs, which hold air, do not contribute much to gas exchange, despite being thin - walled, as they are poorly vascularised. The air sacs expand and contract due to changes in the volume in the thorax and abdomen. 
This volume change is caused by the movement of the sternum and ribs, and this movement is often synchronised with movement of the flight muscles. Parabronchi in which the air flow is unidirectional are called paleopulmonic parabronchi and are found in all birds. Some birds, however, have, in addition, a lung structure where the air flow in the parabronchi is bidirectional. These are termed neopulmonic parabronchi. The lungs of most reptiles have a single bronchus running down the centre, from which numerous branches reach out to individual pockets throughout the lungs. These pockets are similar to alveoli in mammals, but much larger and fewer in number. These give the lung a sponge - like texture. In tuataras, snakes, and some lizards, the lungs are simpler in structure, similar to those of typical amphibians. Snakes and limbless lizards typically possess only the right lung as a major respiratory organ; the left lung is greatly reduced, or even absent. Amphisbaenians, however, have the opposite arrangement, with a major left lung, and a reduced or absent right lung. Both crocodilians and monitor lizards have developed lungs similar to those of birds, providing a unidirectional airflow and even possessing air sacs. The now - extinct pterosaurs seemingly refined this type of lung even further, extending the airsacs into the wing membranes and, in the case of lonchodectids, Tupuxuara, and azhdarchoids, the hindlimbs. Reptilian lungs typically receive air via expansion and contraction of the ribs driven by axial muscles, and by buccal pumping. Crocodilians also rely on the hepatic piston method, in which the liver is pulled back by a muscle anchored to the pubic bone (part of the pelvis) called the diaphragmaticus, which in turn creates negative pressure in the crocodile 's thoracic cavity, allowing air to be moved into the lungs by Boyle 's law. Turtles, which are unable to move their ribs, instead use their forelimbs and pectoral girdle to force air in and out of the lungs. The lungs of most frogs and other amphibians are simple and balloon - like, with gas exchange limited to the outer surface of the lung. This is not very efficient, but amphibians have low metabolic demands and can also quickly dispose of carbon dioxide by diffusion across their skin in water, and supplement their oxygen supply by the same method. Amphibians employ a positive - pressure system to get air to their lungs, forcing air down into the lungs by buccal pumping. This is distinct from most higher vertebrates, which use a breathing system driven by negative pressure, in which the lungs are inflated by expanding the rib cage. In buccal pumping, the floor of the mouth is lowered, filling the mouth cavity with air. The throat muscles then press the throat against the underside of the skull, forcing the air into the lungs. Due to the possibility of respiration across the skin combined with small size, all known lungless tetrapods are amphibians. The majority of salamander species are lungless salamanders, which respire through their skin and the tissues lining their mouth. This necessarily restricts their size: all are small and rather thread - like in appearance, maximising skin surface relative to body volume. Other known lungless tetrapods are the Bornean flat - headed frog and Atretochoana eiselti, a caecilian. The lungs of amphibians typically have a few narrow internal walls (septa) of soft tissue around the outer walls, increasing the respiratory surface area and giving the lung a honeycomb appearance. 
In some salamanders even these are lacking, and the lung has a smooth wall. In caecilians, as in snakes, only the right lung attains any size or development. The lungs of lungfish are similar to those of amphibians, with few, if any, internal septa. In the Australian lungfish, there is only a single lung, albeit divided into two lobes. Other lungfish and Polypterus, however, have two lungs, which are located in the upper part of the body, with the connecting duct curving around and above the esophagus. The blood supply also twists around the esophagus, suggesting that the lungs originally evolved in the ventral part of the body, as in other vertebrates. Some invertebrates have lung - like structures that serve a similar respiratory purpose as, but are not evolutionarily related to, vertebrate lungs. Some arachnids, such as spiders and scorpions, have structures called book lungs used for atmospheric gas exchange. Some species of spider have four pairs of book lungs, but most have two pairs. Scorpions have spiracles on their body for the entrance of air to the book lungs. The coconut crab is terrestrial and uses structures called branchiostegal lungs to breathe air. They cannot swim and would drown in water, yet they possess a rudimentary set of gills. They can breathe on land and hold their breath underwater. The branchiostegal lungs are seen as an adaptive stage in the transition from water - living to land - living, comparable to the transition from fish to amphibian. Pulmonates are mostly land snails and slugs that have developed a simple lung from the mantle cavity. An externally located opening called the pneumostome allows air to be taken into the mantle cavity lung. The lungs of today 's terrestrial vertebrates and the gas bladders of today 's fish are believed to have evolved from simple sacs, as outpocketings of the esophagus, that allowed early fish to gulp air under oxygen - poor conditions. These outpocketings first arose in the bony fish. In most of the ray - finned fish the sacs evolved into closed - off gas bladders, while a number of carp, trout, herring, catfish, and eels have retained the physostome condition, with the sac remaining open to the esophagus. In more basal bony fish, such as the gar, bichir, bowfin and the lobe - finned fish, the bladders have evolved to primarily function as lungs. The lobe - finned fish gave rise to the land - based tetrapods. Thus, the lungs of vertebrates are homologous to the gas bladders of fish (but not to their gills).
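The lung - volume relationships and the FEV1 / FVC ratio described above reduce to simple arithmetic. The following is a minimal Python sketch, not part of the source article: the function names, the example volumes, and the 0.70 cut - off for the FEV1 / FVC ratio are illustrative assumptions, not clinical guidance.

```python
# Minimal sketch (illustrative assumptions, not clinical guidance) of the
# lung-volume arithmetic and the FEV1/FVC ratio described in the text.

def vital_capacity(tidal_ml, insp_reserve_ml, exp_reserve_ml):
    """Vital capacity = tidal volume + inspiratory and expiratory reserves."""
    return tidal_ml + insp_reserve_ml + exp_reserve_ml

def total_lung_capacity(vital_ml, residual_ml):
    """Total lung capacity adds the residual volume, which tests that rely
    on breathing out cannot measure (hence plethysmography)."""
    return vital_ml + residual_ml

def classify_spirometry(fev1_ml, fvc_ml, cutoff=0.70):
    """Obstruction lowers FEV1 relative to FVC; a ratio below an assumed
    0.70 cut-off suggests an obstructive rather than restrictive pattern."""
    ratio = fev1_ml / fvc_ml
    return ratio, "obstructive pattern" if ratio < cutoff else "non-obstructive"

# Example using the text's resting tidal volume (~500 mL) with assumed reserves:
vc = vital_capacity(500, 3000, 1100)     # 4600 mL
tlc = total_lung_capacity(vc, 1200)      # 5800 mL, within the 4-6 litre range
print(vc, tlc)
print(classify_spirometry(fev1_ml=2100, fvc_ml=3500))  # (0.6, 'obstructive pattern')
```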
who won the 1860 presidential election and with what percentage of votes
United States presidential election, 1860 - wikipedia The United States presidential election of 1860 was the nineteenth quadrennial presidential election to select the President and Vice President of the United States. The election was held on Tuesday, November 6, 1860. In a four - way contest, the Republican Party ticket of Abraham Lincoln and Hannibal Hamlin emerged triumphant. The election of Lincoln served as the primary catalyst of the American Civil War. The United States had become increasingly divided during the 1850s over sectional disagreements, especially regarding the extension of slavery into the territories. Incumbent President James Buchanan, like his predecessor Franklin Pierce, was a northern Democrat with sympathies for the South. During the mid-to - late 1850s, the anti-slavery Republican Party became a major political force in the wake of the Kansas -- Nebraska Act and the Supreme Court 's decision in the 1857 case of Dred Scott v. Sandford. By 1860, the Republican Party had replaced the defunct Whig Party as the major opposition to the Democrats. A group of former Whigs and Know Nothings formed the Constitutional Union Party, which sought to avoid secession by pushing aside the issue of slavery. The 1860 Republican National Convention nominated Lincoln, a moderate former Congressman from Illinois, as its standard - bearer. The Republican Party platform promised not to interfere with slavery in the states, but opposed the further extension of slavery into the territories. The first 1860 Democratic National Convention adjourned without agreeing on a nominee, but a second convention nominated Senator Stephen A. Douglas of Illinois for president. Douglas 's support for the concept of popular sovereignty, which called for each individual territory to decide on the status of slavery, alienated many Southern Democrats. The Southern Democrats, with the support of President Buchanan, held their own convention and nominated Vice President John C. Breckinridge of Kentucky for president. The 1860 Constitutional Union Convention nominated a ticket led by former Senator John Bell of Tennessee. Despite minimal support in the South, Lincoln won a plurality of the popular vote and a majority of the electoral vote. The divisions among the Republicans ' opponents were not in themselves decisive in ensuring the Republican capture of the White House, as Lincoln received absolute majorities in states that combined for a majority of the electoral votes. Lincoln 's main opponent in the North was Douglas, who finished second in several states but only won the slave state of Missouri and three electors from the free state of New Jersey. Bell won three Southern states, while Breckinridge swept the remainder of the South. The election of Lincoln led to the secession of several states in the South, and the Civil War would begin with the Battle of Fort Sumter. The election was the first of six consecutive victories for the Republican Party. The 1860 presidential election conventions were unusually tumultuous, due in particular to a split in the Democratic Party that led to rival conventions. Northern Democratic candidates: Senator Stephen A. Douglas from Illinois Former Treasury Secretary James Guthrie Senator Robert M.T. Hunter from Virginia Senator Joseph Lane from Oregon Former Senator Daniel S. 
Dickinson from New York Senator Andrew Johnson from Tennessee At the Democratic National Convention held in Institute Hall in Charleston, South Carolina, in April 1860, 51 Southern Democrats walked out over a platform dispute. The extreme pro-slavery "Fire - Eater '' William Lowndes Yancey and the Alabama delegation first left the hall, followed by the delegates of Florida, Georgia, Louisiana, Mississippi, South Carolina, Texas, three of the four delegates from Arkansas, and one of the three delegates from Delaware. Six candidates were nominated: Stephen A. Douglas from Illinois, James Guthrie from Kentucky, Robert Mercer Taliaferro Hunter from Virginia, Joseph Lane from Oregon, Daniel S. Dickinson from New York, and Andrew Johnson from Tennessee. Three other candidates, Isaac Toucey from Connecticut, James Pearce from Maryland, and Jefferson Davis from Mississippi (the future president of the Confederate States), also received votes. Douglas, a moderate on the slavery issue who favored "popular sovereignty '', was ahead on the first ballot, but needed 56.5 more votes to secure the nomination. On the 57th ballot, Douglas was still ahead, but 51.5 votes short of the nomination. In desperation, the delegates agreed on May 3 to stop voting and adjourn the convention. The Democrats convened again at the Front Street Theater in Baltimore, Maryland, on June 18. This time, 110 Southern delegates (led by "Fire - Eaters '') walked out when the convention would not adopt a resolution supporting extending slavery into territories whose voters did not want it. Some considered Horatio Seymour a compromise candidate for the National Democratic nomination at the reconvening convention in Baltimore. Seymour wrote a letter to the editor of his local newspaper declaring unreservedly that he was not a candidate for either spot on the ticket. After two ballots, the remaining Democrats nominated Stephen A. Douglas from Illinois for president. Benjamin Fitzpatrick from Alabama was nominated for vice president, but he refused the nomination. The nomination ultimately went instead to Herschel Vespasian Johnson from Georgia. Southern Democratic candidates: Vice President John C. Breckinridge Former Senator Daniel S. Dickinson from New York Senator Robert M.T. Hunter from Virginia (declined to be nominated) Senator Joseph Lane from Oregon (declined to be nominated) Senator Jefferson Davis from Mississippi (declined to be nominated) The Charleston bolters reconvened in Richmond, Virginia, on June 11. When the Democrats reconvened in Baltimore, the bolters rejoined (except South Carolina and Florida, which stayed in Richmond). When the convention seated two replacement delegations on June 18, the rejoined delegations bolted again, now accompanied by nearly all other Southern delegates, as well as erstwhile convention chair Caleb Cushing, a New Englander and former member of Franklin Pierce 's cabinet. This larger group met immediately in Baltimore 's Institute Hall, with Cushing again presiding. They adopted the pro-slavery platform rejected at Charleston, and nominated Vice President John C. Breckinridge for President and Senator Joseph Lane from Oregon for Vice President. Yancey and some (less than half) of the bolters, almost entirely from the Lower South, met on June 28 in Richmond, along with the South Carolina and Florida delegations. This convention affirmed the nominations of Breckinridge and Lane. Besides the Democratic Parties in the southern states, the Breckinridge / Lane ticket was also supported by the Buchanan administration. 
Buchanan 's own continued prestige in his home state of Pennsylvania ensured that Breckinridge would be the principal Democratic candidate in that populous state. Breckinridge was the last sitting Vice President nominated for President until Richard Nixon in 1960. Republican candidates: Former Representative Abraham Lincoln from Illinois Senator William H. Seward from New York Senator Simon Cameron from Pennsylvania Governor Salmon P. Chase of Ohio Former Representative Edward Bates from Missouri Associate Justice John McLean Senator Benjamin Wade from Ohio Former Senator William L. Dayton from New Jersey The Republican National Convention met in mid-May 1860 after the Democrats had been forced to adjourn their convention in Charleston. With the Democrats in disarray and a sweep of the Northern states possible, the Republicans felt confident going into their convention in Chicago. William H. Seward from New York was considered the front - runner, followed by Abraham Lincoln from Illinois, Salmon P. Chase from Ohio, and Missouri 's Edward Bates. As the convention developed, however, it was revealed that Seward, Chase, and Bates had each alienated factions of the Republican Party. Delegates were concerned that Seward was too closely identified with the radical wing of the party, and his moves toward the center had alienated the radicals. Chase, a former Democrat, had alienated many of the former Whigs by his coalition with the Democrats in the late 1840s. He had also opposed tariffs demanded by Pennsylvania, and critically, had opposition from his own delegation from Ohio. Bates outlined his positions on the extension of slavery into the territories and equal constitutional rights for all citizens, positions that alienated his supporters in the border states and Southern conservatives. German Americans in the party opposed Bates because of his past association with the Know Nothings. Since it was essential to carry the West, and because Lincoln had a national reputation from his debates and speeches as the most articulate moderate, he won the party 's nomination for president on the third ballot on May 18, 1860. Senator Hannibal Hamlin from Maine was nominated for vice-president, defeating Cassius Clay from Kentucky. The party platform promised not to interfere with slavery in the states, but opposed slavery in the territories. The platform promised tariffs protecting industry and workers, a Homestead Act granting free farmland in the West to settlers, and the funding of a transcontinental railroad. There was no mention of Mormonism (which had been condemned in the Party 's 1856 platform), the Fugitive Slave Act, personal liberty laws, or the Dred Scott decision. While the Seward forces were disappointed at the nomination of a little - known western upstart, they rallied behind Lincoln. Abolitionists, however, were angry at the selection of a moderate and had little faith in Lincoln. Constitutional Union candidates: Former Senator John Bell of Tennessee Governor Sam Houston of Texas Senator John J. Crittenden from Kentucky Former Senator Edward Everett from Massachusetts Former Senator William A. Graham from North Carolina Former Senator William C. Rives from Virginia The Constitutional Union Party was formed by remnants of both the defunct Know Nothing and Whig Parties who were unwilling to join either the Republicans or the Democrats. The new party 's members hoped to stave off Southern secession by avoiding the slavery issue. 
They met in the Eastside District Courthouse of Baltimore and nominated John Bell from Tennessee for president over Governor Sam Houston of Texas on the second ballot. Edward Everett was nominated for vice-president at the convention on May 9, 1860, one week before Lincoln 's nomination. John Bell was a former Whig who had opposed the Kansas -- Nebraska Act and the Lecompton Constitution. Edward Everett had been president of Harvard University and Secretary of State in the Fillmore administration. The party platform advocated compromise to save the Union, with the slogan "The Union as it is, and the Constitution as it is. '' Liberty (Union) candidates: Former Representative Gerrit Smith from New York The Liberty Party as formed in 1860 was a splinter (or remnant) of the former Liberty Party of the 1840s, after most of its membership left to join the Free Soil Party in 1848 and nearly all of what remained joined the Republicans in 1854. The remaining party was also called the Radical Abolitionists. A convention of one hundred delegates was held in Convention Hall, Syracuse, New York, on August 29, 1860. Delegates were in attendance from New York, Pennsylvania, New Jersey, Michigan, Illinois, Ohio, Kentucky, and Massachusetts. Several of the delegates were women. Gerrit Smith, a prominent abolitionist and the 1848 presidential nominee of the original Liberty Party, had sent a letter in which he stated that his health had been so poor that he had not been able to be away from home since 1858. Nonetheless, he remained popular in the party because he had helped inspire some of John Brown 's supporters at the Raid on Harpers Ferry. In his letter, Smith donated $50 to pay for the printing of ballots in the various states. There was quite a spirited contest between the friends of Gerrit Smith and William Goodell in regard to the nomination for the presidency. In spite of his professed ill health, Gerrit Smith was nominated for president, and Samuel McFarland from Pennsylvania was nominated for vice president. In Ohio, a slate of presidential electors pledged to Smith ran under the name of the Union Party. People 's Party candidate: Governor Sam Houston of Texas The People 's Party was a loose association of the supporters of Governor Samuel Houston. On April 20, 1860, the party held what it termed a national convention to nominate Houston for president on the San Jacinto Battlefield in Texas. Houston 's supporters at the gathering did not nominate a vice-presidential candidate, since they expected later gatherings to carry out that function. Later mass meetings were held in northern cities, such as New York City on May 30, 1860, but they too failed to nominate a vice-presidential candidate. Houston, never enthusiastic about running for the Presidency, soon became convinced that he had no chance of winning and that his candidacy would only make it easier for the Republican candidate to win. He withdrew from the race on August 16 and urged the formation of a unified "Union '' ticket in opposition to Lincoln. In their campaigning, Bell and Douglas both claimed that disunion would not necessarily follow a Lincoln election. Nonetheless, loyal army officers in Virginia, Kansas and South Carolina warned Lincoln of military preparations to the contrary. 
Secessionists threw their support behind Breckinridge in an attempt either to force the anti-Republican candidates to coordinate their electoral votes or to throw the election into the House of Representatives, where the selection of the president would be made by the representatives elected in 1858, before the Republican majorities in both House and Senate achieved in 1860 were seated in the new 37th Congress. Mexican War hero Winfield Scott suggested to Lincoln that he assume the powers of a commander - in - chief before inauguration. However, historian Bruce Chadwick observes that Lincoln and his advisors ignored the widespread alarms and threats of secession as mere election trickery. Indeed, voting in the South was not as monolithic as the Electoral College map would make it seem. Economically, culturally, and politically, the South was made up of three regions. In the states of the "Upper '' South, later known as the "Border States '' (Delaware, Maryland, Kentucky, and Missouri, along with the Kansas territories), unionist popular votes were scattered among Lincoln, Douglas, and Bell, combining to form a majority in all four. In the "Middle '' South states, there was a unionist majority divided between Douglas and Bell in Virginia and Tennessee; in North Carolina and Arkansas, the unionist (Bell and Douglas) vote approached a majority. Texas was the only Middle South state that Breckinridge carried convincingly. In three of the six "Deep '' South states, unionists (Bell and Douglas) won divided majorities (Georgia and Louisiana) or neared one (Alabama). Breckinridge convincingly carried only three of the six states of the Deep South (South Carolina, Florida, and Mississippi). These three Deep South states were all among the four Southern states with the lowest white populations; together, they held only nine percent of Southern whites. Among the slave states, the three states with the highest voter turnouts produced the most one - sided results. Texas, with five percent of the total wartime South 's population, voted 75 percent for Breckinridge. Kentucky and Missouri, with one - fourth the total population, voted 73 percent for the pro-union candidates Bell, Douglas and Lincoln. In comparison, the six states of the Deep South, making up one - fourth the Confederate voting population, split 57 percent for Breckinridge versus 43 percent for the two pro-union candidates. The four states that were admitted to the Confederacy after Fort Sumter held almost half its population, and voted a narrow combined majority of 53 percent for the pro-union candidates. In the eleven states that would later declare their secession from the Union and be controlled by Confederate armies, ballots for Lincoln were cast only in Virginia, where he received 1,929 votes (1.15 percent of the total). Unsurprisingly, the vast majority of the votes Lincoln received were cast in border counties of what would soon become West Virginia - the future state accounted for 1,832 of Lincoln 's 1,929 votes. Lincoln received no votes at all in 121 of the state 's then - 145 counties (including 31 of the 50 that would form West Virginia), received a single vote in three counties, and received ten or fewer votes in nine of the 24 counties where he polled votes. 
Lincoln 's best results, by far, were in the four counties that comprised the state 's northern panhandle, a region which had long felt alienated from Richmond, was economically and culturally linked to its neighbors Ohio and Pennsylvania, and would become the key driver in the successful effort to form a separate state. Hancock County (Virginia 's northernmost at the time) returned Lincoln 's best result - he polled over 40 % of the vote there and finished in second place (Lincoln polled only eight votes fewer than Breckinridge). Of the 97 votes cast for Lincoln in the state 's post-1863 boundaries, 93 were polled in four counties (all along the Potomac River) and four were tallied in the coastal city of Portsmouth. Some key differences between modern elections and those of the mid-nineteenth century are that at the time, there was no secret ballot anywhere in the United States; that candidates were responsible for printing and distributing their own ballots (a service typically performed by supportive newspaper publishers); and that in order to distribute valid ballots for a presidential election in a state, candidates needed citizens eligible to vote in that state who would pledge to vote for the candidate in the Electoral College. This meant that even if a voter had access to a ballot for Lincoln, casting one in his favor in a strongly pro-slavery county would incur (at minimum) social ostracism (casting a vote for Breckinridge in a strongly abolitionist county, of course, carried the same risk). In ten southern slave states, no citizen would publicly pledge to vote for Abraham Lincoln. In most of Virginia, no publisher would print ballots for Lincoln 's pledged electors. In the four slave states that did not secede (Missouri, Kentucky, Maryland, and Delaware), Lincoln came in fourth in every state except Delaware (where he finished third). Within the fifteen slave states, Lincoln won only two counties out of 996, Missouri 's St. Louis and Gasconade Counties. In the 1856 election, the Republican candidate for president had received no votes at all in twelve of the fourteen slave states with a popular vote (these being the same states as in the 1860 election, plus Missouri and Virginia). The election was held on Tuesday, November 6, 1860, and was noteworthy for exaggerated sectionalism in a country that was soon to dissolve into civil war. Voter turnout was 81.2 %, the highest in American history up to that time, and the second - highest overall (exceeded only in the election of 1876). All six Presidents elected since Andrew Jackson won re-election in 1832 had been one - term presidents, the last four with a popular vote under 51 percent. Lincoln won the Electoral College with less than 40 percent of the popular vote nationwide by carrying states above the Mason -- Dixon line and north of the Ohio River, plus the states of California and Oregon in the Far West. Unlike every preceding president - elect, Lincoln did not carry even one slave state, and indeed he was not on the ballot in the southern states of North and South Carolina, Georgia, Florida, Alabama, Tennessee, Mississippi, Arkansas, Louisiana and Texas. He was the first President - elect not to be on the ballot in every state, a feat which has since been equalled three times but never to the same extent. Lincoln won the second - lowest share of the popular vote among all winning presidential candidates in U.S. history. 
The Republican victory resulted from the concentration of votes in the free states, which together controlled a majority of the presidential electors. Lincoln 's strategy, developed in collaboration with Republican Party chairman Thurlow Weed, was deliberately focused on expanding on the states Frémont had won four years earlier. New York was critical with 35 Electoral College votes, 11.5 percent of the total; with Pennsylvania (27) and Ohio (23), a candidate could collect more than half (85) of the votes needed. The Wide Awakes, a young Republican men 's organization, massively expanded registered voter lists. Although Lincoln was not even on the ballot in most southern states, population increases in the free states had far exceeded those in the slave states for many years before the election of 1860; hence, the free states dominated the Electoral College. The split in the Democratic Party is sometimes held responsible for Lincoln 's victory; however, despite the fact that Lincoln won the election with less than forty percent of the popular vote, much of the anti-Republican vote was "wasted '' in Southern states where Lincoln was not even on the ballot. At most, a single opponent nationwide would only have deprived Lincoln of California, Oregon, and four New Jersey electors, whose combined total of eleven electoral votes would have made no difference to the result; every other state won by the Republicans was won by a clear majority of the vote. In the three states of New York, Rhode Island, and New Jersey, where anti-Lincoln votes did combine into fusion tickets, Lincoln still won two and split New Jersey. If the opposition had formed fusion tickets in every state, Lincoln still would have received 169 electoral votes, 17 more than the 152 required to win the Electoral College (a worked check of this arithmetic appears below). Like Lincoln, Breckinridge and Bell won no electoral votes outside of their respective sections. While Bell retired to his family business, quietly supporting his state 's secession, Breckinridge served as a Confederate general. He finished second in the Electoral College with 72 votes, carrying eleven of fifteen slave states (including South Carolina, whose electors were chosen by the state legislature, not popular vote). Breckinridge stood a distant third in the national popular vote at eighteen percent, but accrued 50 -- 75 percent in the first seven states that would become the Confederate States of America. He took nine of the eleven states that eventually joined, plus the border slave states of Delaware and Maryland, losing only Virginia and Tennessee. Breckinridge received very little support in the free states, showing some strength only in California, Oregon, Pennsylvania and Connecticut. Bell carried three slave states (Tennessee, Kentucky, and Virginia) and lost Maryland by only 722 votes. Nevertheless, he finished a remarkable second in all slave states won by Breckinridge and Douglas. He won 45 to 47 percent in Maryland, Tennessee and North Carolina and canvassed respectably with 36 to 40 percent in Missouri, Arkansas, Louisiana, Georgia, and Florida. Bell himself hoped to take over the former support of the extinct Whig Party in the free states, but the majority of this support went to Lincoln. Thus, except for running mate Everett 's home state of Massachusetts, and California, Bell received even less support in the free states than did Breckinridge, and consequently came in last in the national popular vote at 12 percent. 
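The fusion - ticket counterfactual above can be checked directly. The short Python sketch below is not part of the source article; it uses only the figures given in the text (the 152 electors needed, the eleven at - risk votes in California, Oregon and New Jersey, and the resulting 169), together with Lincoln 's actual total of 180 electoral votes, which is consistent with those figures.

```python
# Worked check (illustrative, not from the source) of the fusion-ticket
# counterfactual: even if a single united opponent had taken every
# plausibly contestable elector, Lincoln would still have won.

ELECTORS_TO_WIN = 152   # majority required, per the text

# Electors the text identifies as the most a united opposition could take:
at_risk = {"California": 4, "Oregon": 3, "New Jersey (4 of 7 electors)": 4}

lincoln_actual = 180    # actual 1860 total; equals the text's 169 + 11 at-risk votes
counterfactual = lincoln_actual - sum(at_risk.values())

print(counterfactual)                    # 169
print(counterfactual - ELECTORS_TO_WIN)  # 17 votes to spare
assert counterfactual >= ELECTORS_TO_WIN # fusion tickets could not have flipped the result
```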
Douglas was the only candidate who won electoral votes in both slave and free states (free New Jersey and slave Missouri). His support was the most widespread geographically; he finished second behind Lincoln in the popular vote with 29.5 percent, but last in the Electoral College. Douglas attained a 28 -- 47 percent share in the states of the Mid-Atlantic, Midwest and Trans - Mississippi West, but slipped to 19 -- 39 percent in New England. Outside his regional section, Douglas took 15 -- 17 percent of the popular vote total in the slave states of Kentucky, Alabama and Louisiana, then 10 percent or less in the nine remaining slave states. Douglas, in his "Norfolk Doctrine '', reiterated in North Carolina, promised to keep the Union together by coercion if states proceeded to secede. The popular vote for Lincoln and Douglas combined was seventy percent of the turnout. The 1860 Republican ticket was the first successful national ticket that did not feature a Southerner, and the election marked the end of Southern political dominance in the United States. Between 1789 and 1860, Southerners had been President for two - thirds of the era, and had held the offices of Speaker of the House and President pro tem of the Senate during much of that time. Moreover, since 1791, Southerners had comprised a majority of the Supreme Court. Source (Popular Vote): Leip, David. "1860 Presidential Election Results ''. Dave Leip 's Atlas of U.S. Presidential Elections. Retrieved July 27, 2005. Source (Electoral Vote): "Electoral College Box Scores 1789 -- 1996 ''. National Archives and Records Administration. Retrieved July 31, 2005. The popular vote figures exclude South Carolina, where the Electors were chosen by the state legislature rather than by popular vote. (County - level result maps and cartograms for each ticket; source: Walter Dean Burnham, Presidential Ballots, 1836 -- 1892 (Johns Hopkins University Press, 1955), pp. 247 -- 57.) The election of Abraham Lincoln in 1860 was an immediate cause of the secession of the first 7 Southern states (SC, MS, FL, AL, GA, LA, TX), which formed the Confederacy in February 1861. The statehood of Kansas as a free state and Lincoln 's military resistance to the Confederacy led to the secession of 4 more states (VA, NC, TN, AR) after May 1861. Lincoln had been the nominee of the Republican Party on a platform opposed to the expansion of slavery; he refused to acknowledge the right to secession, and he would not yield federal property within Southern states. Numerous historians have explored the reasons so many white Southerners adopted secessionism in 1860, after 30 years of disputes between Northern and Southern states over protective tariffs, Federal spending, and the refusal of some Northern states to allow slaves to travel with slaveholder families. 
Tariffs had been levied on Southern imports to protect Northern industries; taxes were charged on Southern cotton but not on Northern wool; Federal expenditures on navigation lighthouses favored the North by three to one, despite the South 's longer coastline; and a faked slave uprising in Virginia angered many Southerners. Bertram Wyatt - Brown argues that secessionists desired independence as necessary for their honor. They could no longer tolerate Northern state attitudes that regarded slave ownership as a great sin, and Northern politicians who insisted on stopping the spread of slavery. Avery Craven argues that secessionists believed Lincoln 's election meant long - term doom for their vast social system, in which thousands of Southerners worked with over two million slaves who lived in private households and made up nearly half the population of many Southern states in 1860. This situation could not be solved by the democratic process, and it placed "the great masses of men, North and South, helpless before the drift into war ''.
who wrote the song living on a prayer
Livin ' on a Prayer - wikipedia "Livin ' on a Prayer '' is Bon Jovi 's second chart - topping song from their third album Slippery When Wet. Written by Jon Bon Jovi, Richie Sambora, and Desmond Child, the single, released in late 1986, was well received at both rock and pop radio, and its music video was given heavy rotation at MTV, giving the band their first No. 1 on the Billboard Mainstream Rock chart and their second consecutive No. 1 Billboard Hot 100 hit. The song is the band 's signature track, topping fan - voted lists and re-charting around the world decades after its release. The original 45 - RPM single release sold 800,000 copies in the United States, and in 2013 the song was certified Triple Platinum for over 3 million digital downloads. The official music video has over 440 million views on YouTube as of May 2018. Jon Bon Jovi did not like the original recording of this song, which can be found as a hidden track on 100,000,000 Bon Jovi Fans Can't Be Wrong. Lead guitarist Richie Sambora, however, convinced him the song was good, and they reworked it with a new bass guitar line (recorded, uncredited, by Hugh McDonald), different drum fills and the use of a talk box, to include it on their upcoming album Slippery When Wet. The song spent two weeks at number one on the Mainstream Rock Tracks chart, from January 31 to February 14, 1987, and four weeks at number one on the Billboard Hot 100, from February 14 to March 14. It also hit number four on the UK singles chart. The album version of the song, timed around 4:10, fades out at the end. However, the music video game Guitar Hero World Tour features the song 's original studio ending, where the band revisit the intro riff and end with a talk box solo; this version ends at 4:53. The original ending is also playable on the similar video game Rock Band 2, though edited in this case (thereby eliminating the talk box solo at the end). The version included on the 2005 DualDisc edition of Slippery When Wet has an extended version of the original ending, with a different talk box solo playing over the riff (possibly taken from an outtake of the song); this version, which fades out at the end like the standard version of the song, ends at 5:06. After the September 11, 2001 attacks -- in which New Jersey was the second - hardest hit state after New York, suffering hundreds of casualties among both WTC workers and first responders -- the band performed an acoustic version of this song for New York. Bon Jovi performed a similar version as part of the special America: A Tribute to Heroes. In 2006, online voters rated "Livin ' on a Prayer '' No. 1 on VH1 's list of The 100 Greatest Songs of the ' 80s. More recently, in New Zealand, "Livin ' on a Prayer '' was No. 1 on the C4 music channel 's show U Choose 40, on the 80 's Icons list. It was also No. 1 on the "Sing - a-long Classics List ''. After Bon Jovi performed in New Zealand on January 28, 2008, while on their Lost Highway Tour, the song re-entered the official New Zealand RIANZ singles chart at number 24, over twenty years after the initial release. Australian music TV channel MAX placed this song at No. 18 on their 2008 countdown "Rock Songs: Top 100 ''. In 2009, the song returned to the charts in the UK, notably hitting the number - one spot on the UK Rock Chart. In 2010, the song was chosen in an online vote on the Grammy.com website, over the group 's more recent hits "Always '' and "It 's My Life '', to be played live by the band on the 52nd Grammy Awards telecast. 
In Billboard 's Hot 100 50th - anniversary rankings, "Livin ' on a Prayer '' was named number 46 among the all - time rock songs. After being released for download, the song had sold 3.4 million digital copies in the US as of November 2014. The song, including its original ending, is also playable on the music video games Guitar Hero World Tour and Rock Band 2. The song was re-worked and made available to download on November 9, 2010, for use in the Rock Band 3 music gaming platform, to take advantage of PRO mode, which allows the use of a real guitar or bass guitar and standard MIDI - compatible electronic drum kits and keyboards, in addition to up to three - part harmony or backup vocals. In November 2013, the song made its return to the Billboard Hot 100 at number 25, due to a viral video. In 2017, ShortList 's Dave Fawbert listed the song as containing "one of the greatest key changes in music history ''. "It deals with the way that two kids -- Tommy and Gina -- face life 's struggles, '' noted Bon Jovi, "and how their love and ambitions get them through the hard times. It 's working class and it 's real... I wanted to incorporate the movie element, and tell a story about people I knew. So instead of doing what I did on ' Runaway ', where the girl didn't have a name, I gave them names, which gave them an identity... Tommy and Gina aren't two specific people; they represent a lifestyle. '' Bon Jovi explained that he "wrote that song during the Reagan era (1980 -- 1988) and the trickle - down economics are really inspirational to writing songs ''. Tommy and Gina are also referred to in Bon Jovi 's 2000 song "It 's My Life ''. The music video was filmed on September 17, 1986, at the Grand Olympic Auditorium in Los Angeles, California, and was directed by Wayne Isham. It starts with a silhouette of the band walking down a hall, and features shots of the band rehearsing, then playing in front of a crowd. The first half of the video, featuring the rehearsal footage, is in black and white, and the second half, performing to the arena audience, is in color. In the beginning of the video, Jon has a harness attached by professional stunt coordinators and stunt spotters, and later in the video he soars over the crowd via overhead wires. Bon Jovi have themselves reworked the song several times, including an acoustic live version that served as a precursor to the MTV Unplugged series and a re-recorded version of the song, "Prayer ' 94 '', which appeared on U.S. versions of their Cross Road hits collection.
who started world war 2 and why did it start
World War II - wikipedia World War II (often abbreviated to WWII or WW2), also known as the Second World War, was a global war that lasted from 1939 to 1945, although conflicts reflecting the ideological clash between what would become the Allied and Axis blocs began earlier. The vast majority of the world 's countries -- including all of the great powers -- eventually formed two opposing military alliances: the Allies and the Axis. It was the most global war in history; it directly involved more than 100 million people from over 30 countries. In a state of total war, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, blurring the distinction between civilian and military resources. World War II was the deadliest conflict in human history, marked by 50 to 85 million fatalities, most of whom were civilians in the Soviet Union and China. It included massacres, the genocide of the Holocaust, strategic bombing, premeditated death from starvation and disease, and the only use of nuclear weapons in war. The Empire of Japan aimed to dominate Asia and the Pacific and was already at war with the Republic of China in 1937, but the world war is generally said to have begun on 1 September 1939, the day of the invasion of Poland by Nazi Germany and the subsequent declarations of war on Germany by France and the United Kingdom. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. Under the Molotov -- Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and annexed territories of their European neighbours, Poland, Finland, Romania and the Baltic states. The war continued primarily between the European Axis powers and the coalition of the United Kingdom and the British Commonwealth, with campaigns including the North Africa and East Africa campaigns, the aerial Battle of Britain, the Blitz bombing campaign, and the Balkan Campaign, as well as the long - running Battle of the Atlantic. On 22 June 1941, the European Axis powers launched an invasion of the Soviet Union, opening the largest land theatre of war in history, which trapped the Axis, most crucially the German Wehrmacht, in a war of attrition. In December 1941, Japan attacked the United States and European colonies in the Pacific Ocean, and quickly conquered much of the Western Pacific. The Japanese conquests were perceived by many in Asia as liberation from Western dominance; as such, several armies from the conquered territories aided the Japanese. The Axis advance halted in 1942 when Japan lost the critical Battle of Midway, and Germany and Italy were defeated in North Africa and then, decisively, at Stalingrad in the Soviet Union. In 1943, with a series of German defeats on the Eastern Front, the Allied invasions of Sicily and Italy, which brought about Italian surrender, and Allied victories in the Pacific, the Axis lost the initiative and was forced into strategic retreat on all fronts. In 1944, the Western Allies invaded German - occupied France, while the Soviet Union regained all of its territorial losses and invaded Germany and its allies. During 1944 and 1945 the Japanese suffered major reverses in mainland Asia in South Central China and Burma, while the Allies crippled the Japanese Navy and captured key Western Pacific islands. 
The war in Europe concluded with an invasion of Germany by the Western Allies and the Soviet Union, culminating in the capture of Berlin by Soviet troops, the suicide of Adolf Hitler and the subsequent German unconditional surrender on 8 May 1945. Following the Potsdam Declaration by the Allies on 26 July 1945 and the refusal of Japan to surrender under its terms, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki on 6 and 9 August respectively. With an invasion of the Japanese archipelago imminent, the possibility of additional atomic bombings, and the Soviet invasion of Manchuria, Japan formally surrendered on 2 September 1945. Thus ended the war in Asia, cementing the total victory of the Allies. World War II changed the political alignment and social structure of the world. The United Nations (UN) was established to foster international co-operation and prevent future conflicts. The victorious great powers -- China, France, the Soviet Union, the United Kingdom, and the United States -- became the permanent members of the United Nations Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the Cold War, which lasted for the next 46 years. Meanwhile, the influence of European great powers waned, while the decolonisation of Africa and Asia began. Most countries whose industries had been damaged moved towards economic recovery. Political integration, especially in Europe, emerged as an effort to end pre-war enmities and to create a common identity. The start of the war in Europe is generally held to be 1 September 1939, beginning with the German invasion of Poland; the United Kingdom and France declared war on Germany two days later. The dates for the beginning of war in the Pacific include the start of the Second Sino - Japanese War on 7 July 1937, or even the Japanese invasion of Manchuria on 19 September 1931. Others follow the British historian A.J.P. Taylor, who held that the Sino - Japanese War and the war in Europe and its colonies occurred simultaneously, and that the two wars merged in 1941. This article uses the conventional dating. Other starting dates sometimes used for World War II include the Italian invasion of Abyssinia on 3 October 1935. The British historian Antony Beevor views the beginning of World War II as the Battles of Khalkhin Gol, fought between Japan and the forces of Mongolia and the Soviet Union from May to September 1939. The exact date of the war 's end is also not universally agreed upon. It was generally accepted at the time that the war ended with the armistice of 14 August 1945 (V-J Day), rather than with the formal surrender of Japan on 2 September 1945, which officially ended the war in Asia. A peace treaty with Japan was signed in 1951. A treaty regarding Germany 's future allowed the reunification of East and West Germany to take place in 1990 and resolved most post-World War II issues. A formal peace treaty between Japan and the USSR has never been signed. World War I had radically altered the political map of Europe, with the defeat of the Central Powers -- including Austria - Hungary, Germany, Bulgaria and the Ottoman Empire -- and the 1917 Bolshevik seizure of power in Russia, which eventually led to the founding of the Soviet Union. Meanwhile, the victorious Allies of World War I, such as France, Belgium, Italy, Romania and Greece, gained territory, and new nation - states were created out of the collapse of Austria - Hungary and the Ottoman and Russian Empires. 
To prevent a future world war, the League of Nations was created during the 1919 Paris Peace Conference. The organisation 's primary goals were to prevent armed conflict through collective security, military and naval disarmament, and settling international disputes through peaceful negotiations and arbitration. Despite strong pacifist sentiment after World War I, its aftermath still caused irredentist and revanchist nationalism in several European states. These sentiments were especially marked in Germany because of the significant territorial, colonial, and financial losses incurred by the Treaty of Versailles. Under the treaty, Germany lost around 13 percent of its home territory and all of its overseas possessions, while German annexation of other states was prohibited, reparations were imposed, and limits were placed on the size and capability of the country 's armed forces. The German Empire was dissolved in the German Revolution of 1918 -- 1919, and a democratic government, later known as the Weimar Republic, was created. The interwar period saw strife between supporters of the new republic and hardline opponents on both the right and left. Italy, as an Entente ally, had made some post-war territorial gains; however, Italian nationalists were angered that the promises made by Britain and France to secure Italian entrance into the war were not fulfilled in the peace settlement. From 1922 to 1925, the Fascist movement led by Benito Mussolini seized power in Italy with a nationalist, totalitarian, and class collaborationist agenda that abolished representative democracy, repressed socialist, left - wing and liberal forces, and pursued an aggressive expansionist foreign policy aimed at making Italy a world power, promising the creation of a "New Roman Empire ''. Adolf Hitler, after an unsuccessful attempt to overthrow the German government in 1923, eventually became the Chancellor of Germany in 1933. He abolished democracy, espousing a radical, racially motivated revision of the world order, and soon began a massive rearmament campaign. Meanwhile, France, to secure its alliance, allowed Italy a free hand in Ethiopia, which Italy desired as a colonial possession. The situation was aggravated in early 1935 when the Territory of the Saar Basin was legally reunited with Germany and Hitler repudiated the Treaty of Versailles, accelerated his rearmament programme, and introduced conscription. To contain Germany, the United Kingdom, France and Italy formed the Stresa Front in April 1935; however, that June, the United Kingdom made an independent naval agreement with Germany, easing prior restrictions. The Soviet Union, concerned by Germany 's goals of capturing vast areas of Eastern Europe, drafted a treaty of mutual assistance with France. Before taking effect though, the Franco - Soviet pact was required to go through the bureaucracy of the League of Nations, which rendered it essentially toothless. The United States, concerned with events in Europe and Asia, passed the Neutrality Act in August of the same year. Hitler defied the Versailles and Locarno treaties by remilitarising the Rhineland in March 1936, encountering little opposition. In October 1936, Germany and Italy formed the Rome -- Berlin Axis. A month later, Germany and Japan signed the Anti-Comintern Pact, which Italy would join in the following year. 
The Kuomintang (KMT) party in China launched a unification campaign against regional warlords and nominally unified China in the mid-1920s, but was soon embroiled in a civil war against its former Chinese Communist Party allies and new regional warlords. In 1931, an increasingly militaristic Empire of Japan, which had long sought influence in China as the first step of what its government saw as the country 's right to rule Asia, used the Mukden Incident as a pretext to launch an invasion of Manchuria and establish the puppet state of Manchukuo. Too weak to resist Japan, China appealed to the League of Nations for help. Japan withdrew from the League of Nations after being condemned for its incursion into Manchuria. The two nations then fought several battles, in Shanghai, Rehe and Hebei, until the Tanggu Truce was signed in 1933. Thereafter, Chinese volunteer forces continued the resistance to Japanese aggression in Manchuria, and Chahar and Suiyuan. After the 1936 Xi'an Incident, the Kuomintang and communist forces agreed on a ceasefire to present a united front to oppose Japan. The Second Italo -- Ethiopian War was a brief colonial war that began in October 1935 and ended in May 1936. The war began with the invasion of the Ethiopian Empire (also known as Abyssinia) by the armed forces of the Kingdom of Italy (Regno d'Italia), which was launched from Italian Somaliland and Eritrea. The war resulted in the military occupation of Ethiopia and its annexation into the newly created colony of Italian East Africa (Africa Orientale Italiana, or AOI); in addition it exposed the weakness of the League of Nations as a force to preserve peace. Both Italy and Ethiopia were member nations, but the League did nothing when the former clearly violated the League 's Article X. Germany was the only major European nation to openly support the invasion. Italy subsequently dropped its objections to Germany 's goal of absorbing Austria. When civil war broke out in Spain, Hitler and Mussolini lent military support to the Nationalist rebels, led by General Francisco Franco. The Soviet Union supported the existing government, the Spanish Republic. Over 30,000 foreign volunteers, known as the International Brigades, also fought against the Nationalists. Both Germany and the USSR used this proxy war as an opportunity to test in combat their most advanced weapons and tactics. The Nationalists won the civil war in April 1939; Franco, now dictator, remained officially neutral during World War II but generally favoured the Axis. His greatest collaboration with Germany was the sending of volunteers to fight on the Eastern Front. In July 1937, Japan captured the former Chinese imperial capital of Peking after instigating the Marco Polo Bridge Incident, which culminated in the Japanese campaign to invade all of China. The Soviets quickly signed a non-aggression pact with China to lend materiel support, effectively ending China 's prior co-operation with Germany. From September to November, the Japanese attacked Taiyuan, as well as engaging the Kuomintang Army around Xinkou and Communist forces in Pingxingguan. Generalissimo Chiang Kai - shek deployed his best army to defend Shanghai, but, after three months of fighting, Shanghai fell. The Japanese continued to push the Chinese forces back, capturing the capital Nanking in December 1937. After the fall of Nanking, tens of thousands if not hundreds of thousands of Chinese civilians and disarmed combatants were murdered by the Japanese. 
In March 1938, Nationalist Chinese forces won their first major victory at Taierzhuang, but then the city of Xuzhou was taken by the Japanese in May. In June 1938, Chinese forces stalled the Japanese advance by flooding the Yellow River; this manoeuvre bought time for the Chinese to prepare their defences at Wuhan, but the city was taken by October. Japanese military victories did not bring about the collapse of Chinese resistance that Japan had hoped to achieve; instead the Chinese government relocated inland to Chongqing and continued the war. In the mid-to - late 1930s, Japanese forces in Manchukuo had sporadic border clashes with the Soviet Union and the Mongolian People 's Republic. The Japanese doctrine of Hokushin - ron, which emphasised Japan 's expansion northward, was favoured by the Imperial Army during this time. With the Japanese defeat at Khalkhin Gol in 1939, the ongoing Second Sino - Japanese War and its ally Nazi Germany pursuing neutrality with the Soviets, this policy would prove difficult to maintain. Japan and the Soviet Union eventually signed a Neutrality Pact in April 1941, and Japan adopted the doctrine of Nanshin - ron, promoted by the Navy, which took its focus southward, eventually leading to its war with the United States and the Western Allies. In Europe, Germany and Italy were becoming more aggressive. In March 1938, Germany annexed Austria, again provoking little response from other European powers. Encouraged, Hitler began pressing German claims on the Sudetenland, an area of Czechoslovakia with a predominantly ethnic German population; and soon Britain and France followed the counsel of British Prime Minister Neville Chamberlain and conceded this territory to Germany in the Munich Agreement, which was made against the wishes of the Czechoslovak government, in exchange for a promise of no further territorial demands. Soon afterwards, Germany and Italy forced Czechoslovakia to cede additional territory to Hungary, and Poland annexed Czechoslovakia 's Zaolzie region. Although all of Germany 's stated demands had been satisfied by the agreement, privately Hitler was furious that British interference had prevented him from seizing all of Czechoslovakia in one operation. In subsequent speeches Hitler attacked British and Jewish "war - mongers '' and in January 1939 secretly ordered a major build - up of the German navy to challenge British naval supremacy. In March 1939, Germany invaded the remainder of Czechoslovakia and subsequently split it into the German Protectorate of Bohemia and Moravia and a pro-German client state, the Slovak Republic. Hitler also delivered the 20 March 1939 ultimatum to Lithuania, forcing the concession of the Klaipėda Region. Greatly alarmed, and with Hitler making further demands on the Free City of Danzig, Britain and France guaranteed their support for Polish independence; when Italy conquered Albania in April 1939, the same guarantee was extended to Romania and Greece. Shortly after the Franco - British pledge to Poland, Germany and Italy formalised their own alliance with the Pact of Steel. Hitler accused Britain and Poland of trying to "encircle '' Germany and renounced the Anglo - German Naval Agreement and the German -- Polish Non-Aggression Pact. The situation reached a general crisis in late August as German troops continued to mobilise against the Polish border. On 23 August, when tripartite negotiations about a military alliance between France, Britain and the USSR stalled, the Soviet Union signed a non-aggression pact with Germany.
This pact had a secret protocol that defined German and Soviet "spheres of influence '' (western Poland and Lithuania for Germany; eastern Poland, Finland, Estonia, Latvia and Bessarabia for the USSR), and raised the question of continuing Polish independence. The pact neutralised the possibility of Soviet opposition to a campaign against Poland and assured that Germany would not have to face the prospect of a two - front war, as it had in World War I. Immediately after that, Hitler ordered the attack to proceed on 26 August, but upon hearing that Britain had concluded a formal mutual assistance pact with Poland, and that Italy would maintain neutrality, he decided to delay it. In response to British requests for direct negotiations to avoid war, Germany made demands on Poland, which only served as a pretext to worsen relations. On 29 August, Hitler demanded that a Polish plenipotentiary immediately travel to Berlin to negotiate the handover of Danzig, and to allow a plebiscite in the Polish Corridor in which the German minority would vote on secession. The Poles refused to comply with the German demands, and on the night of 30 -- 31 August, in a violent meeting with the British ambassador Neville Henderson, Ribbentrop declared that Germany considered its claims rejected. On 1 September 1939, Germany invaded Poland after having staged several false flag border incidents as a pretext to initiate the attack. The Battle of Westerplatte is often described as the first battle of the war. Britain responded with an ultimatum to Germany to cease military operations, and on 3 September, after the ultimatum was ignored, France, Britain, Australia, and New Zealand declared war on Germany. This alliance was joined by South Africa (6 September) and Canada (10 September). The alliance provided no direct military support to Poland, outside of a cautious French probe into the Saarland. The Western Allies also began a naval blockade of Germany, which aimed to damage the country 's economy and war effort. Germany responded by ordering U-boat warfare against Allied merchant and warships, which would later escalate into the Battle of the Atlantic. On 8 September, German troops reached the suburbs of Warsaw. The Polish counter-offensive to the west halted the German advance for several days, but it was outflanked and encircled by the Wehrmacht. Remnants of the Polish army broke through to besieged Warsaw. On 17 September 1939, after signing a cease - fire with Japan, the Soviets invaded Eastern Poland under the pretext that the Polish state had ostensibly ceased to exist. On 27 September, the Warsaw garrison surrendered to the Germans, and the last large operational unit of the Polish Army surrendered on 6 October. Despite the military defeat, the Polish government never surrendered. A significant part of Polish military personnel evacuated to Romania and the Baltic countries; many of them would fight against the Axis in other theatres of the war. The Polish government in exile also established an Underground State and a resistance movement; in particular the Polish partisan Home Army would grow to become one of the war 's largest resistance movements. Germany annexed the western and occupied the central part of Poland, and the USSR annexed its eastern part; small shares of Polish territory were transferred to Lithuania and Slovakia. On 6 October, Hitler made a public peace overture to Britain and France, but said that the future of Poland was to be determined exclusively by Germany and the Soviet Union.
The proposal was rejected, and Hitler ordered an immediate offensive against France, which would be postponed until the spring of 1940 due to bad weather. The Soviet Union forced the Baltic countries -- Estonia, Latvia and Lithuania, the states that were in the Soviet "sphere of influence '' -- to sign "mutual assistance pacts '' that stipulated stationing Soviet troops in these countries. Soon after, significant Soviet military contingents were moved there. Finland refused to sign a similar pact and to cede part of its territory to the USSR, which prompted a Soviet invasion in November 1939, and the USSR was expelled from the League of Nations. Despite overwhelming numerical superiority, Soviet military success was modest, and the Finno - Soviet war ended in March 1940 with minimal Finnish concessions. In June 1940, the Soviet Union forcibly annexed Estonia, Latvia and Lithuania, and the disputed Romanian regions of Bessarabia, Northern Bukovina and Hertza. Meanwhile, Nazi - Soviet political rapprochement and economic co-operation gradually stalled, and both states began preparations for war. In April 1940, Germany invaded Denmark and Norway to protect shipments of iron ore from Sweden, which the Allies were attempting to cut off. Denmark capitulated after a few hours, and Norway was conquered within two months despite Allied support. British discontent over the Norwegian campaign led to the appointment of Winston Churchill as Prime Minister on 10 May 1940. On the same day, Germany launched an offensive against France. To circumvent the strong Maginot Line fortifications on the Franco - German border, Germany directed its attack at the neutral nations of Belgium, the Netherlands, and Luxembourg. The Germans carried out a flanking manoeuvre through the Ardennes region, which was mistakenly perceived by the Allies as an impenetrable natural barrier against armoured vehicles. By successfully implementing new blitzkrieg tactics, the Wehrmacht rapidly advanced to the Channel and cut off the Allied forces in Belgium, trapping the bulk of the Allied armies in a cauldron on the Franco - Belgian border near Lille. Britain was able to evacuate a significant number of Allied troops from the continent by early June, although they abandoned almost all of their equipment. On 10 June, Italy invaded France, declaring war on both France and the United Kingdom. The Germans turned south against the weakened French army, and Paris fell to them on 14 June. Eight days later France signed an armistice with Germany; it was divided into German and Italian occupation zones, and an unoccupied rump state under the Vichy Regime, which, though officially neutral, was generally aligned with Germany. France kept its fleet, which Britain attacked on 3 July in an attempt to prevent its seizure by Germany. The Battle of Britain began in early July with Luftwaffe attacks on shipping and harbours. Britain rejected Hitler 's ultimatum, and the German air superiority campaign started in August but failed to defeat RAF Fighter Command. Due to this, the proposed German invasion of Britain was postponed indefinitely on 17 September. The German strategic bombing offensive intensified with night attacks on London and other cities in the Blitz, but failed to significantly disrupt the British war effort and largely ended in May 1941. Using newly captured French ports, the German Navy enjoyed success against an over-extended Royal Navy, using U-boats against British shipping in the Atlantic.
The British Home Fleet scored a significant victory on 27 May 1941 by sinking the German battleship Bismarck. In November 1939, the United States, which was taking measures to assist China and the Western Allies, amended the Neutrality Act to allow "cash and carry '' purchases by the Allies. In 1940, following the German capture of Paris, the size of the United States Navy was significantly increased. In September the United States further agreed to a trade of American destroyers for British bases. Still, a large majority of the American public continued to oppose any direct military intervention in the conflict well into 1941. In December 1940 Roosevelt accused Hitler of planning world conquest and ruled out any negotiations as useless, calling for the US to become an "arsenal of democracy '' and promoting Lend - Lease programmes of aid to support the British war effort. The US started strategic planning to prepare for a full - scale offensive against Germany. At the end of September 1940, the Tripartite Pact formally united Japan, Italy and Germany as the Axis Powers. The Tripartite Pact stipulated that any country, with the exception of the Soviet Union, which attacked any Axis Power would be forced to go to war against all three. The Axis expanded in November 1940 when Hungary, Slovakia and Romania joined. Romania and Hungary would make major contributions to the Axis war against the USSR; in Romania 's case partially to recapture territory ceded to the USSR. In early June 1940 the Italian Regia Aeronautica attacked Malta, and a siege of this British possession started. In late summer -- early autumn, Italy conquered British Somaliland and made an incursion into British - held Egypt. In October Italy attacked Greece, but the attack was repulsed with heavy Italian casualties, and the campaign soon settled into a stalemate with minor territorial changes. Germany started preparation for an invasion of the Balkans to assist Italy, to prevent the British from gaining a foothold in the Balkans, which would be a potential threat for Romanian oil fields, and to strike against the British dominance of the Mediterranean. In December 1940 British Commonwealth forces began counter-offensives against Italian forces in Egypt and Italian East Africa. The offensives were highly successful; by early February 1941 Italy had lost control of eastern Libya, and large numbers of Italian troops had been taken prisoner. The Italian Navy also suffered significant defeats, with the Royal Navy putting three Italian battleships out of commission by a carrier attack at Taranto and neutralising several more warships at the Battle of Cape Matapan. Italian defeats prompted Germany to deploy an expeditionary force to North Africa, and at the end of March 1941 Rommel 's Afrika Korps launched an offensive which drove back the Commonwealth forces. In under a month, Axis forces advanced to western Egypt and besieged the port of Tobruk. By late March 1941 Bulgaria and Yugoslavia signed the Tripartite Pact. However, the Yugoslav government was overthrown two days later by pro-British nationalists. Germany responded with simultaneous invasions of both Yugoslavia and Greece, commencing on 6 April 1941; both nations were forced to surrender within the month.
Although the Axis victory was swift, bitter, large - scale partisan warfare subsequently broke out against the Axis occupation of Yugoslavia, which continued until the end of the war. In the Middle East, in May, Commonwealth forces quashed an uprising in Iraq which had been supported by German aircraft from bases within Vichy - controlled Syria. In June -- July they invaded and occupied the French possessions of Syria and Lebanon, with the assistance of the Free French. With the situation in Europe and Asia relatively stable, Germany, Japan, and the Soviet Union made preparations. With the Soviets wary of mounting tensions with Germany and the Japanese planning to take advantage of the European War by seizing resource - rich European possessions in Southeast Asia, the two powers signed the Soviet -- Japanese Neutrality Pact in April 1941. By contrast, the Germans were steadily making preparations for an attack on the Soviet Union, massing forces on the Soviet border. Hitler believed that Britain 's refusal to end the war was based on the hope that the United States and the Soviet Union would enter the war against Germany sooner or later. He therefore decided to try to strengthen Germany 's relations with the Soviets, or failing that, to attack and eliminate them as a factor. In November 1940, negotiations took place to determine if the Soviet Union would join the Tripartite Pact. The Soviets showed some interest, but asked for concessions from Finland, Bulgaria, Turkey, and Japan that Germany considered unacceptable. On 18 December 1940, Hitler issued the directive to prepare for an invasion of the Soviet Union. On 22 June 1941, Germany, supported by Italy and Romania, invaded the Soviet Union in Operation Barbarossa, with Germany accusing the Soviets of plotting against them. They were joined shortly by Finland and Hungary. The primary targets of this surprise offensive were the Baltic region, Moscow and Ukraine, with the ultimate goal of ending the 1941 campaign near the Arkhangelsk - Astrakhan line, from the Caspian to the White Seas. Hitler 's objectives were to eliminate the Soviet Union as a military power, exterminate Communism, generate Lebensraum ("living space '') by dispossessing the native population and guarantee access to the strategic resources needed to defeat Germany 's remaining rivals. Although the Red Army was preparing for strategic counter-offensives before the war, Barbarossa forced the Soviet supreme command to adopt a strategic defence. During the summer, the Axis made significant gains into Soviet territory, inflicting immense losses in both personnel and materiel. By the middle of August, however, the German Army High Command decided to suspend the offensive of a considerably depleted Army Group Centre, and to divert the 2nd Panzer Group to reinforce troops advancing towards central Ukraine and Leningrad. The Kiev offensive was overwhelmingly successful, resulting in the encirclement and elimination of four Soviet armies, and made possible further advance into Crimea and industrially developed Eastern Ukraine (the First Battle of Kharkov). The diversion of three quarters of the Axis troops and the majority of their air forces from France and the central Mediterranean to the Eastern Front prompted Britain to reconsider its grand strategy. In July, the UK and the Soviet Union formed a military alliance against Germany. The British and Soviets invaded neutral Iran to secure the Persian Corridor and Iran 's oil fields.
In August, the United Kingdom and the United States jointly issued the Atlantic Charter. By October Axis operational objectives in Ukraine and the Baltic region were achieved, with only the sieges of Leningrad and Sevastopol continuing. A major offensive against Moscow was renewed; after two months of fierce battles in increasingly harsh weather the German army almost reached the outer suburbs of Moscow, where the exhausted troops were forced to suspend their offensive. Large territorial gains were made by Axis forces, but their campaign had failed to achieve its main objectives: two key cities remained in Soviet hands, the Soviet capability to resist was not broken, and the Soviet Union retained a considerable part of its military potential. The blitzkrieg phase of the war in Europe had ended. By early December, freshly mobilised reserves allowed the Soviets to achieve numerical parity with Axis troops. This, as well as intelligence data which established that a minimal number of Soviet troops in the East would be sufficient to deter any attack by the Japanese Kwantung Army, allowed the Soviets to begin a massive counter-offensive that started on 5 December all along the front and pushed German troops 100 -- 250 kilometres (62 -- 155 mi) west. In 1939, the United States had renounced its trade treaty with Japan; and, beginning with an aviation gasoline ban in July 1940, Japan became subject to increasing economic pressure. During this time, Japan launched its first attack against Changsha, a strategically important Chinese city, but was repulsed by late September. Despite several offensives by both sides, the war between China and Japan was stalemated by 1940. To increase pressure on China by blocking supply routes, and to better position Japanese forces in the event of a war with the Western powers, Japan invaded and occupied northern Indochina. Afterwards, the United States embargoed iron, steel and mechanical parts against Japan. Other sanctions soon followed. Chinese nationalist forces launched a large - scale counter-offensive in early 1940. In August, Chinese communists launched an offensive in Central China; in retaliation, Japan instituted harsh measures in occupied areas to reduce human and material resources for the communists. Continued antipathy between Chinese communist and nationalist forces culminated in armed clashes in January 1941, effectively ending their co-operation. In March, the Japanese 11th army attacked the headquarters of the Chinese 19th army but was repulsed during the Battle of Shanggao. In September, Japan attempted to take the city of Changsha again and clashed with Chinese nationalist forces. German successes in Europe encouraged Japan to increase pressure on European governments in Southeast Asia. The Dutch government agreed to provide Japan some oil supplies from the Dutch East Indies, but negotiations for additional access to their resources ended in failure in June 1941. In July 1941 Japan sent troops to southern Indochina, thus threatening British and Dutch possessions in the Far East. The United States, United Kingdom and other Western governments reacted to this move with a freeze on Japanese assets and a total oil embargo. At the same time, Japan was planning an invasion of the Soviet Far East, intending to capitalise on the German invasion in the west, but abandoned the operation after the sanctions.
Since early 1941 the United States and Japan had been engaged in negotiations in an attempt to improve their strained relations and end the war in China. During these negotiations Japan advanced a number of proposals which were dismissed by the Americans as inadequate. At the same time the US, Britain, and the Netherlands engaged in secret discussions for the joint defence of their territories, in the event of a Japanese attack against any of them. Roosevelt reinforced the Philippines (an American protectorate scheduled for independence in 1946) and warned Japan that the US would react to Japanese attacks against any "neighboring countries ''. Frustrated at the lack of progress and feeling the pinch of the American - British - Dutch sanctions, Japan prepared for war. On 20 November a new government under Hideki Tojo presented an interim proposal as its final offer. It called for the end of American aid to China and for the supply of oil and other resources to Japan. In exchange Japan promised not to launch any attacks in Southeast Asia and to withdraw its forces from southern Indochina. The American counter-proposal of 26 November required that Japan evacuate all of China without conditions and conclude non-aggression pacts with all Pacific powers. That meant Japan was essentially forced to choose between abandoning its ambitions in China, or seizing the natural resources it needed in the Dutch East Indies by force; the Japanese military did not consider the former an option, and many officers considered the oil embargo an unspoken declaration of war. Japan planned to rapidly seize European colonies in Asia to create a large defensive perimeter stretching into the Central Pacific; the Japanese would then be free to exploit the resources of Southeast Asia while exhausting the over-stretched Allies by fighting a defensive war. To prevent American intervention while securing the perimeter, it was further planned to neutralise the United States Pacific Fleet and the American military presence in the Philippines from the outset. On 7 December 1941 (8 December in Asian time zones), Japan attacked British and American holdings with near - simultaneous offensives against Southeast Asia and the Central Pacific. These included an attack on the American fleet at Pearl Harbor, attacks on the Philippines, landings in Thailand and Malaya, and the battle of Hong Kong. These attacks led the United States, United Kingdom, China, Australia and several other states to formally declare war on Japan, whereas the Soviet Union, being heavily involved in large - scale hostilities with European Axis countries, maintained its neutrality agreement with Japan. Germany, followed by the other Axis states, declared war on the United States in solidarity with Japan, citing as justification the American attacks on German war vessels that had been ordered by Roosevelt. On 1 January 1942, the Allied Big Four -- the Soviet Union, China, Britain and the United States -- and 22 smaller or exiled governments issued the Declaration by United Nations, thereby affirming the Atlantic Charter, and agreeing not to sign a separate peace with the Axis powers. During 1942, Allied officials debated the appropriate grand strategy to pursue. All agreed that defeating Germany was the primary objective. The Americans favoured a straightforward, large - scale attack on Germany through France. The Soviets were also demanding a second front.
The British, on the other hand, argued that military operations should target peripheral areas to wear out German strength, increase demoralisation, and bolster resistance forces. Germany itself would be subject to a heavy bombing campaign. An offensive against Germany would then be launched primarily by Allied armour without using large - scale armies. Eventually, the British persuaded the Americans that a landing in France was infeasible in 1942 and they should instead focus on driving the Axis out of North Africa. At the Casablanca Conference in early 1943, the Allies reiterated the statements issued in the 1942 Declaration by the United Nations, and demanded the unconditional surrender of their enemies. The British and Americans agreed to continue to press the initiative in the Mediterranean by invading Sicily to fully secure the Mediterranean supply routes. Although the British argued for further operations in the Balkans to bring Turkey into the war, in May 1943, the Americans extracted a British commitment to limit Allied operations in the Mediterranean to an invasion of the Italian mainland and to invade France in 1944. By the end of April 1942, Japan and its ally Thailand had almost fully conquered Burma, Malaya, the Dutch East Indies, Singapore, and Rabaul, inflicting severe losses on Allied troops and taking a large number of prisoners. Despite stubborn resistance by Filipino and US forces, the Philippine Commonwealth was eventually captured in May 1942, forcing its government into exile. On 16 April, in Burma, 7,000 British soldiers were encircled by the Japanese 33rd Division during the Battle of Yenangyaung and rescued by the Chinese 38th Division. Japanese forces also achieved naval victories in the South China Sea, Java Sea and Indian Ocean, and bombed the Allied naval base at Darwin, Australia. In January 1942, the only Allied success against Japan was a Chinese victory at Changsha. These easy victories over unprepared US and European opponents left Japan overconfident, as well as overextended. In early May 1942, Japan initiated operations to capture Port Moresby by amphibious assault and thus sever communications and supply lines between the United States and Australia. The planned invasion was thwarted when an Allied task force, centred on two American fleet carriers, fought Japanese naval forces to a draw in the Battle of the Coral Sea. Japan 's next plan, motivated by the earlier Doolittle Raid, was to seize Midway Atoll and lure American carriers into battle to be eliminated; as a diversion, Japan would also send forces to occupy the Aleutian Islands in Alaska. In mid-May, Japan started the Zhejiang - Jiangxi Campaign in China, with the goal of inflicting retribution on the Chinese who had aided the surviving American airmen in the Doolittle Raid, by destroying air bases and fighting the Chinese 23rd and 32nd Army Groups. In early June, Japan put its operations into action, but the Americans, having broken Japanese naval codes in late May, were fully aware of the Japanese plans and order of battle, and used this knowledge to achieve a decisive victory at Midway over the Imperial Japanese Navy. With its capacity for aggressive action greatly diminished as a result of the Midway battle, Japan chose to focus on a belated attempt to capture Port Moresby by an overland campaign in the Territory of Papua.
The Americans planned a counter-attack against Japanese positions in the southern Solomon Islands, primarily Guadalcanal, as a first step towards capturing Rabaul, the main Japanese base in the Southwest Pacific. Both plans started in July, but by mid-September, the Battle for Guadalcanal took priority for the Japanese, and troops in New Guinea were ordered to withdraw from the Port Moresby area to the northern part of the island, where they faced Australian and United States troops in the Battle of Buna - Gona. Guadalcanal soon became a focal point for both sides, with heavy commitments of troops and ships. By the start of 1943, the Japanese were defeated on the island and withdrew their troops. In Burma, Commonwealth forces mounted two operations. The first, an offensive into the Arakan region in late 1942, went disastrously, forcing a retreat back to India by May 1943. The second was the insertion of irregular forces behind Japanese front - lines in February, which, by the end of April, had achieved mixed results. Despite considerable losses, in early 1942 Germany and its allies stopped a major Soviet offensive in central and southern Russia, keeping most territorial gains they had achieved during the previous year. In May the Germans defeated Soviet offensives in the Kerch Peninsula and at Kharkov, and then launched their main summer offensive against southern Russia in June 1942, to seize the oil fields of the Caucasus and occupy the Kuban steppe, while maintaining positions on the northern and central areas of the front. The Germans split Army Group South into two groups: Army Group A advanced to the lower Don River and struck south - east to the Caucasus, while Army Group B headed towards the Volga River. The Soviets decided to make their stand at Stalingrad on the Volga. By mid-November, the Germans had nearly taken Stalingrad in bitter street fighting when the Soviets began their second winter counter-offensive, starting with an encirclement of German forces at Stalingrad and an assault on the Rzhev salient near Moscow, though the latter failed disastrously. By early February 1943, the German Army had taken tremendous losses; German troops at Stalingrad had been forced to surrender, and the front - line had been pushed back beyond its position before the summer offensive. In mid-February, after the Soviet push had tapered off, the Germans launched another attack on Kharkov, creating a salient in their front line around the Soviet city of Kursk. Exploiting poor American naval command decisions, the German navy ravaged Allied shipping off the American Atlantic coast. By November 1941, Commonwealth forces had launched a counter-offensive, Operation Crusader, in North Africa, and reclaimed all the gains the Germans and Italians had made. In North Africa, the Germans launched an offensive in January, pushing the British back to positions at the Gazala Line by early February, followed by a temporary lull in combat which Germany used to prepare for their upcoming offensives. Concerns that the Japanese might use bases in Vichy - held Madagascar caused the British to invade the island in early May 1942. An Axis offensive in Libya forced an Allied retreat deep inside Egypt until Axis forces were stopped at El Alamein.
On the Continent, raids by Allied commandos on strategic targets, culminating in the disastrous Dieppe Raid, demonstrated the Western Allies ' inability to launch an invasion of continental Europe without much better preparation, equipment, and operational security. In August 1942, the Allies succeeded in repelling a second attack against El Alamein and, at a high cost, managed to deliver desperately needed supplies to besieged Malta. A few months later, the Allies commenced an attack of their own in Egypt, dislodging the Axis forces and beginning a drive west across Libya. This attack was followed up shortly after by Anglo - American landings in French North Africa, which resulted in the region joining the Allies. Hitler responded to the French colony 's defection by ordering the occupation of Vichy France; although Vichy forces did not resist this violation of the armistice, they managed to scuttle their fleet to prevent its capture by German forces. The Axis forces in Africa withdrew into Tunisia, which was conquered by the Allies in May 1943. In June 1943 the British and Americans began a strategic bombing campaign against Germany with a goal to disrupt the war economy, reduce morale, and "de-house '' the civilian population. The firebombing of Hamburg was among the first attacks in this campaign; it led to significant casualties and inflicted considerable losses on the infrastructure of this important industrial centre. After the Guadalcanal Campaign, the Allies initiated several operations against Japan in the Pacific. In May 1943, Canadian and US forces were sent to eliminate Japanese forces from the Aleutians. Soon after, the US, with support from Australian and New Zealand forces, began major operations to isolate Rabaul by capturing surrounding islands, and to breach the Japanese Central Pacific perimeter at the Gilbert and Marshall Islands. By the end of March 1944, the Allies had completed both of these objectives, and had also neutralised the major Japanese base at Truk in the Caroline Islands. In April, the Allies launched an operation to retake Western New Guinea. In the Soviet Union, both the Germans and the Soviets spent the spring and early summer of 1943 preparing for large offensives in central Russia. On 4 July 1943, Germany attacked Soviet forces around the Kursk Bulge. Within a week, German forces had exhausted themselves against the Soviets ' deeply echeloned and well - constructed defences and, for the first time in the war, Hitler cancelled the operation before it had achieved tactical or operational success. This decision was partially affected by the Western Allies ' invasion of Sicily launched on 9 July, which, combined with previous Italian failures, resulted in the ousting and arrest of Mussolini later that month. On 12 July 1943, the Soviets launched their own counter-offensives, thereby dispelling any chance of German victory or even stalemate in the east. The Soviet victory at Kursk marked the end of German superiority, giving the Soviet Union the initiative on the Eastern Front. The Germans tried to stabilise their eastern front along the hastily fortified Panther -- Wotan line, but the Soviets broke through it at Smolensk and by the Lower Dnieper Offensives. On 3 September 1943, the Western Allies invaded the Italian mainland, following Italy 's armistice with the Allies.
Germany, with the help of Italian fascists, responded by disarming Italian forces, which in many places were left without superior orders, seizing military control of Italian areas, and creating a series of defensive lines. German special forces then rescued Mussolini, who soon established a new client state in German - occupied Italy named the Italian Social Republic, precipitating an Italian civil war. The Western Allies fought through several lines until reaching the main German defensive line in mid-November. German operations in the Atlantic also suffered. By May 1943, as Allied counter-measures became increasingly effective, the resulting sizeable German submarine losses forced a temporary halt of the German Atlantic naval campaign. In November 1943, Franklin D. Roosevelt and Winston Churchill met with Chiang Kai - shek in Cairo and then with Joseph Stalin in Tehran. The former conference determined the post-war return of Japanese territory and the military planning for the Burma Campaign, while the latter included agreement that the Western Allies would invade Europe in 1944 and that the Soviet Union would declare war on Japan within three months of Germany 's defeat. From November 1943, during the seven - week Battle of Changde, the Chinese forced Japan to fight a costly war of attrition, while awaiting Allied relief. In January 1944, the Allies launched a series of attacks in Italy against the line at Monte Cassino and tried to outflank it with landings at Anzio. By the end of January, a major Soviet offensive expelled German forces from the Leningrad region, ending the longest and most lethal siege in history. The following Soviet offensive was halted on the pre-war Estonian border by the German Army Group North, aided by Estonians hoping to re-establish national independence. This delay slowed subsequent Soviet operations in the Baltic Sea region. By late May 1944, the Soviets had liberated Crimea, largely expelled Axis forces from Ukraine, and made incursions into Romania, which were repulsed by the Axis troops. The Allied offensives in Italy had succeeded and, at the expense of allowing several German divisions to retreat, Rome was captured on 4 June. The Allies had mixed success in mainland Asia. In March 1944, the Japanese launched the first of two invasions, an operation against British positions in Assam, India, and soon besieged Commonwealth positions at Imphal and Kohima. In May 1944, British forces mounted a counter-offensive that drove Japanese troops back to Burma, and Chinese forces that had invaded northern Burma in late 1943 besieged Japanese troops in Myitkyina. The second Japanese invasion of China aimed to destroy China 's main fighting forces, secure railways between Japanese - held territory and capture Allied airfields. By June, the Japanese had conquered the province of Henan and begun a new attack on Changsha in the Hunan province. On 6 June 1944 (known as D - Day), after three years of Soviet pressure, the Western Allies invaded northern France. After reassigning several Allied divisions from Italy, they also attacked southern France. These landings were successful, and led to the defeat of the German Army units in France. Paris was liberated by the local resistance, assisted by the Free French Forces, both led by General Charles de Gaulle, on 25 August, and the Western Allies continued to push back German forces in western Europe during the latter part of the year.
An attempt to advance into northern Germany, spearheaded by a major airborne operation in the Netherlands, failed. After that, the Western Allies slowly pushed into Germany, but failed to cross the Rur river in a large offensive. In Italy, the Allied advance also slowed, due to the last major German defensive line. On 22 June, the Soviets launched a strategic offensive in Belarus ("Operation Bagration '') that destroyed the German Army Group Centre almost completely. Soon after that, another Soviet strategic offensive forced German troops from Western Ukraine and Eastern Poland. The Soviet advance prompted resistance forces in Poland to initiate several uprisings against the German occupation. However, the largest of these, in Warsaw, where German soldiers massacred 200,000 civilians, and a national uprising in Slovakia did not receive Soviet support and were subsequently suppressed by the Germans. The Red Army 's strategic offensive in eastern Romania cut off and destroyed the considerable German troops there and triggered a successful coup d'état in Romania and in Bulgaria, followed by those countries ' shift to the Allied side. In September 1944, Soviet troops advanced into Yugoslavia and forced the rapid withdrawal of German Army Groups E and F in Greece, Albania and Yugoslavia to rescue them from being cut off. By this point, the Communist - led Partisans under Marshal Josip Broz Tito, who had led an increasingly successful guerrilla campaign against the occupation since 1941, controlled much of the territory of Yugoslavia and engaged in delaying efforts against German forces further south. In northern Serbia, the Red Army, with limited support from Bulgarian forces, assisted the Partisans in a joint liberation of the capital city of Belgrade on 20 October. A few days later, the Soviets launched a massive assault against German - occupied Hungary that lasted until the fall of Budapest in February 1945. Unlike the impressive Soviet victories in the Balkans, bitter Finnish resistance to the Soviet offensive in the Karelian Isthmus denied the Soviets occupation of Finland and led to a Soviet - Finnish armistice on relatively mild conditions, although Finland was forced to fight its former allies. By the start of July 1944, Commonwealth forces in Southeast Asia had repelled the Japanese sieges in Assam, pushing the Japanese back to the Chindwin River, while the Chinese captured Myitkyina. In September 1944, Chinese forces captured Mount Song and reopened the Burma Road. In China, the Japanese had more successes, having finally captured Changsha in mid-June and the city of Hengyang by early August. Soon after, they invaded the province of Guangxi, winning major engagements against Chinese forces at Guilin and Liuzhou by the end of November and successfully linking up their forces in China and Indochina by mid-December. In the Pacific, US forces continued to press back the Japanese perimeter. In mid-June 1944, they began their offensive against the Mariana and Palau islands, and decisively defeated Japanese forces in the Battle of the Philippine Sea. These defeats led to the resignation of the Japanese Prime Minister, Hideki Tojo, and provided the United States with air bases from which to launch intensive heavy bomber attacks on the Japanese home islands. In late October, American forces invaded the Philippine island of Leyte; soon after, Allied naval forces scored another large victory in the Battle of Leyte Gulf, one of the largest naval battles in history.
On 16 December 1944, Germany made a last attempt on the Western Front by using most of its remaining reserves to launch a massive counter-offensive in the Ardennes and along the French -- German border, to split the Western Allies, encircle large portions of Western Allied troops and capture their primary supply port at Antwerp in order to prompt a political settlement. By January, the offensive had been repulsed with no strategic objectives fulfilled. In Italy, the Western Allies remained stalemated at the German defensive line. In mid-January 1945, the Soviets and Poles attacked in Poland, pushing from the Vistula to the Oder river in Germany, and overran East Prussia. On 4 February, Soviet, British and US leaders met for the Yalta Conference. They agreed on the occupation of post-war Germany, and on when the Soviet Union would join the war against Japan. In February, the Soviets entered Silesia and Pomerania, while the Western Allies entered western Germany and closed to the Rhine river. By March, the Western Allies crossed the Rhine north and south of the Ruhr, encircling the German Army Group B, while the Soviets advanced to Vienna. In early April, the Western Allies finally pushed forward in Italy and swept across western Germany, capturing Hamburg and Nuremberg, while Soviet and Polish forces stormed Berlin in late April. American and Soviet forces met at the Elbe river on 25 April. On 30 April 1945, the Reichstag was captured, signalling the military defeat of Nazi Germany. Several changes in leadership occurred during this period. On 12 April, President Roosevelt died and was succeeded by Harry S. Truman. Benito Mussolini was killed by Italian partisans on 28 April. Two days later, Hitler committed suicide, and was succeeded by Grand Admiral Karl Dönitz. German forces surrendered in Italy on 29 April. Total and unconditional surrender was signed on 7 May, to be effective by the end of 8 May. German Army Group Centre resisted in Prague until 11 May. In the Pacific theatre, American forces accompanied by the forces of the Philippine Commonwealth advanced in the Philippines, clearing Leyte by the end of April 1945. They landed on Luzon in January 1945 and recaptured Manila in March, following a battle which reduced the city to ruins. Fighting continued on Luzon, Mindanao, and other islands of the Philippines until the end of the war. Meanwhile, the United States Army Air Forces (USAAF) were destroying strategic and populated cities and towns in Japan in an effort to destroy Japanese war industry and civilian morale. On the night of 9 -- 10 March, USAAF B - 29 bombers struck Tokyo with thousands of incendiary bombs, which killed 100,000 civilians and destroyed 16 square miles (41 km²) within a few hours. Over the next five months, the USAAF firebombed a total of 67 Japanese cities, killing 393,000 civilians and destroying 65 % of built - up areas. In May 1945, Australian troops landed in Borneo, over-running the oilfields there. British, American, and Chinese forces defeated the Japanese in northern Burma in March, and the British pushed on to reach Rangoon by 3 May. Chinese forces started to counterattack in the Battle of West Hunan, which took place between 6 April and 7 June 1945. American naval and amphibious forces also moved towards Japan, taking Iwo Jima by March, and Okinawa by the end of June. At the same time, American submarines cut off Japanese imports, drastically reducing Japan 's ability to supply its overseas forces. On 17 July, Allied leaders met in Potsdam, Germany.
They confirmed earlier agreements about Germany, and reiterated the demand for the unconditional surrender of all Japanese forces by Japan, specifically stating that "the alternative for Japan is prompt and utter destruction ''. During this conference, the United Kingdom held its general election, and Clement Attlee replaced Churchill as Prime Minister. The Allies called for unconditional Japanese surrender in the Potsdam Declaration of 26 July, but the Japanese government rejected the call. In early August, the USAAF dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki. Between the two bombings, the Soviets, pursuant to the Yalta agreement, invaded Japanese - held Manchuria and quickly defeated the Kwantung Army, which was the largest Japanese fighting force. The Red Army also captured Sakhalin Island and the Kuril Islands. On 15 August 1945, Japan surrendered, with the surrender documents finally signed at Tokyo Bay on the deck of the American battleship USS Missouri on 2 September 1945, ending the war. The Allies established occupation administrations in Austria and Germany. The former became a neutral state, non-aligned with any political bloc. The latter was divided into western and eastern occupation zones controlled by the Western Allies and the USSR, respectively. A denazification programme in Germany led to the prosecution of Nazi war criminals and the removal of ex-Nazis from power, although this policy moved towards amnesty and re-integration of ex-Nazis into West German society. Germany lost a quarter of its pre-war (1937) territory. Among the eastern territories, Silesia, Neumark and most of Pomerania were taken over by Poland, and East Prussia was divided between Poland and the USSR, followed by the expulsion of the 9 million Germans from these provinces, as well as the expulsion of 3 million Germans from the Sudetenland in Czechoslovakia to Germany. By the 1950s, every fifth West German was a refugee from the east. The Soviet Union also took over the Polish provinces east of the Curzon line, from which 2 million Poles were expelled; north - east Romania, parts of eastern Finland, and the three Baltic states were also incorporated into the USSR. In an effort to maintain world peace, the Allies formed the United Nations, which officially came into existence on 24 October 1945, and adopted the Universal Declaration of Human Rights in 1948 as a common standard for all member nations. The great powers that were the victors of the war -- France, China, Britain, the Soviet Union and the United States -- became the permanent members of the UN 's Security Council. The five permanent members remain so to the present, although there have been two seat changes, between the Republic of China and the People 's Republic of China in 1971, and between the Soviet Union and its successor state, the Russian Federation, following the dissolution of the Soviet Union in 1991. The alliance between the Western Allies and the Soviet Union had begun to deteriorate even before the war was over. Germany had been de facto divided, and two independent states, the Federal Republic of Germany and the German Democratic Republic, were created within the borders of the Western Allied and Soviet occupation zones, respectively. The rest of Europe was also divided into Western and Soviet spheres of influence. Most eastern and central European countries fell into the Soviet sphere, which led to the establishment of Communist - led regimes, with full or partial support of the Soviet occupation authorities.
As a result, East Germany, Poland, Hungary, Romania, Czechoslovakia, and Albania became Soviet satellite states. Communist Yugoslavia conducted a fully independent policy, causing tension with the USSR. Post-war division of the world was formalised by two international military alliances, the United States - led NATO and the Soviet - led Warsaw Pact; the long period of political tensions and military competition between them, the Cold War, would be accompanied by an unprecedented arms race and proxy wars. In Asia, the United States led the occupation of Japan and administered Japan 's former islands in the Western Pacific, while the Soviets annexed Sakhalin and the Kuril Islands. Korea, formerly under Japanese rule, was divided and occupied by the Soviet Union in the North and the US in the South between 1945 and 1948. Separate republics emerged on both sides of the 38th parallel in 1948, each claiming to be the legitimate government for all of Korea, which led ultimately to the Korean War. In China, nationalist and communist forces resumed the civil war in June 1946. Communist forces were victorious and established the People 's Republic of China on the mainland, while nationalist forces retreated to Taiwan in 1949. In the Middle East, the Arab rejection of the United Nations Partition Plan for Palestine and the creation of Israel marked the escalation of the Arab -- Israeli conflict. While European powers attempted to retain some or all of their colonial empires, their losses of prestige and resources during the war rendered this unsuccessful, leading to decolonisation. The global economy suffered heavily from the war, although participating nations were affected differently. The US emerged much richer than any other nation; it had a baby boom, and by 1950 its gross domestic product per person was much higher than that of any of the other powers, and it dominated the world economy. The UK and US pursued a policy of industrial disarmament in Western Germany in the years 1945 -- 1948. Because of international trade interdependencies, this led to European economic stagnation and delayed European recovery for several years. Recovery began with the mid-1948 currency reform in Western Germany, and was sped up by the liberalisation of European economic policy that the Marshall Plan (1948 -- 1951) both directly and indirectly caused. The post-1948 West German recovery has been called the German economic miracle. Italy also experienced an economic boom, and the French economy rebounded. By contrast, the United Kingdom was in a state of economic ruin, and although it received a quarter of the total Marshall Plan assistance, more than any other European country, it continued in relative economic decline for decades. The Soviet Union, despite enormous human and material losses, also experienced a rapid increase in production in the immediate post-war era. Japan experienced remarkably rapid economic growth, becoming one of the most powerful economies in the world by the 1980s. China returned to its pre-war industrial production by 1952. Estimates for the total number of casualties in the war vary, because many deaths went unrecorded. Most suggest that some 60 million people died in the war, including about 20 million military personnel and 40 million civilians. Many of the civilians died because of deliberate genocide, massacres, mass - bombings, disease, and starvation. The Soviet Union lost around 27 million people during the war, including 8.7 million military and 19 million civilian deaths.
A quarter of the people in the Soviet Union were wounded or killed. Germany sustained 5.3 million military losses, mostly on the Eastern Front and during the final battles in Germany. Of the total number of deaths in World War II, approximately 85 per cent -- mostly Soviet and Chinese -- were on the Allied side. Many of these deaths were caused by war crimes committed by German and Japanese forces in occupied territories. An estimated 11 to 17 million civilians died as a direct or as an indirect result of Nazi racist policies, including the Holocaust of around 6 million Jews, half of whom were Polish citizens, along with at least 1.9 million ethnic Poles. Millions of other Slavs (including Russians, Ukrainians and Belarusians), Roma, homosexuals, and other ethnic and minority groups were also killed. Between 1941 and 1945, over 200,000 ethnic Serbs, along with Roma and Jews, were persecuted and murdered by the Axis - aligned Croatian Ustaše in Yugoslavia. Also, over 100,000 Poles were massacred by the Ukrainian Insurgent Army in the Volhynia massacres between 1943 and 1945. In Asia and the Pacific, between 3 million and more than 10 million civilians, mostly Chinese (estimated at 7.5 million), were killed by the Japanese occupation forces. The most infamous Japanese atrocity was the Nanking Massacre, in which fifty to three hundred thousand Chinese civilians were raped and murdered. Mitsuyoshi Himeta reported that 2.7 million casualties occurred during the Sankō Sakusen. General Yasuji Okamura implemented the policy in Hebei and Shandong. Axis forces employed biological and chemical weapons. The Imperial Japanese Army used a variety of such weapons during its invasion and occupation of China (see Unit 731) and in early conflicts against the Soviets. Both the Germans and the Japanese tested such weapons against civilians, and sometimes on prisoners of war. The Soviet Union was responsible for the Katyn massacre of 22,000 Polish officers, and the imprisonment or execution of thousands of political prisoners by the NKVD, along with mass civilian deportations to Siberia, in the Baltic states and eastern Poland annexed by the Red Army. The mass - bombing of cities in Europe and Asia has often been called a war crime. However, no positive or specific customary international humanitarian law with respect to aerial warfare existed before or during World War II. The German government led by Adolf Hitler and the Nazi Party was responsible for the Holocaust (the killing of approximately 6 million Jews), as well as for the killing of 2.7 million ethnic Poles, and 4 million others who were deemed "unworthy of life '' (including the disabled and mentally ill, Soviet prisoners of war, homosexuals, Freemasons, Jehovah 's Witnesses, and Romani) as part of a programme of deliberate extermination. Soviet POWs were kept in especially unbearable conditions, and, although their extermination was not an official goal, 3.6 million of 5.7 million Soviet POWs died in Nazi camps during the war. In addition to concentration camps, death camps were created in Nazi Germany to exterminate people on an industrial scale. Nazi Germany extensively used forced labourers; about 12 million Europeans from German - occupied countries were used as a slave work force in German agriculture and the war economy.
The Soviet Gulag became de facto a system of deadly camps during 1942 -- 43, when wartime privation and hunger caused numerous deaths of inmates, including foreign citizens of Poland and other countries occupied in 1939 -- 40 by the USSR, as well as Axis POWs. By the end of the war, most Soviet POWs liberated from Nazi camps and many repatriated civilians were detained in special filtration camps, where they were subjected to NKVD checks, and a significant part of them were sent to the Gulag as real or perceived Nazi collaborators. Japanese prisoner - of - war camps, many of which were used as labour camps, also had high death rates. The International Military Tribunal for the Far East found the death rate of Western prisoners was 27.1 per cent (for American POWs, 37 per cent), seven times that of POWs under the Germans and Italians. While 37,583 prisoners from the UK, 28,500 from the Netherlands, and 14,473 from the United States were released after the surrender of Japan, the number of Chinese released was only 56. At least five million Chinese civilians from northern China and Manchukuo were enslaved between 1935 and 1941 by the East Asia Development Board, or Kōain, for work in mines and war industries. After 1942, the number reached 10 million. In Java, between 4 and 10 million rōmusha (Japanese: "manual labourers '') were forced to work by the Japanese military. About 270,000 of these Javanese labourers were sent to other Japanese - held areas in South East Asia, and only 52,000 were repatriated to Java. In Europe, occupation came under two forms. In Western, Northern, and Central Europe (France, Norway, Denmark, the Low Countries, and the annexed portions of Czechoslovakia) Germany established economic policies through which it collected roughly 69.5 billion Reichsmarks (27.8 billion US dollars) by the end of the war; this figure does not include the sizeable plunder of industrial products, military equipment, raw materials and other goods. Thus, the income from occupied nations was over 40 per cent of the income Germany collected from taxation, a figure which increased to nearly 40 per cent of total German income as the war went on. In the East, the intended gains of Lebensraum were never attained as fluctuating front - lines and Soviet scorched earth policies denied resources to the German invaders. Unlike in the West, the Nazi racial policy encouraged extreme brutality against what it considered to be the "inferior people '' of Slavic descent; most German advances were thus followed by mass executions. Although resistance groups formed in most occupied territories, they did not significantly hamper German operations in either the East or the West until late 1943. In Asia, Japan termed nations under its occupation as being part of the Greater East Asia Co-Prosperity Sphere, essentially a Japanese hegemony which it claimed was for purposes of liberating colonised peoples. Although Japanese forces were originally welcomed as liberators from European domination in some territories, their excessive brutality turned local public opinion against them within weeks. During Japan 's initial conquest it captured 4,000,000 barrels (640,000 m³) of oil (~ 5.5 × 10⁵ tonnes) left behind by retreating Allied forces, and by 1943 was able to get production in the Dutch East Indies up to 50 million barrels (~ 6.8 × 10⁶ t), 76 per cent of its 1940 output rate. In Europe, before the outbreak of the war, the Allies had significant advantages in both population and economics.
In 1938, the Western Allies (United Kingdom, France, Poland and the British Dominions) had a 30 per cent larger population and a 30 per cent higher gross domestic product than the European Axis powers (Germany and Italy); if colonies are included, the Allies had more than a 5:1 advantage in population and nearly a 2:1 advantage in GDP. In Asia at the same time, China had roughly six times the population of Japan but only an 89 per cent higher GDP; this is reduced to three times the population and only a 38 per cent higher GDP if Japanese colonies are included. The United States provided about two-thirds of all the ordnance used by the Allies in terms of warships, transports, warplanes, artillery, tanks, trucks, and ammunition.

Though the Allies' economic and population advantages were largely mitigated during the initial rapid blitzkrieg attacks of Germany and Japan, they became the decisive factor by 1942, after the United States and the Soviet Union joined the Allies, as the war largely settled into one of attrition. While the Allies' ability to out-produce the Axis is often attributed to their greater access to natural resources, other factors, such as Germany's and Japan's reluctance to employ women in the labour force, Allied strategic bombing, and Germany's late shift to a war economy, contributed significantly. Additionally, neither Germany nor Japan planned to fight a protracted war, and neither was equipped to do so. To improve their production, Germany and Japan used millions of slave labourers; Germany used about 12 million people, mostly from Eastern Europe, while Japan used more than 18 million people in Far East Asia.

Aircraft were used for reconnaissance, as fighters, as bombers, and for ground support, and each role was advanced considerably. Innovations included airlift (the capability to quickly move limited high-priority supplies, equipment, and personnel) and strategic bombing (the bombing of enemy industrial and population centres to destroy the enemy's ability to wage war). Anti-aircraft weaponry also advanced, including defences such as radar and surface-to-air artillery. The use of the jet aircraft was pioneered and, though its late introduction meant it had little impact, it led to jets becoming standard in air forces worldwide.

Advances were made in nearly every aspect of naval warfare, most notably with aircraft carriers and submarines. Although aeronautical warfare had relatively little success at the start of the war, actions at Taranto, Pearl Harbor, and the Coral Sea established the carrier as the dominant capital ship in place of the battleship. In the Atlantic, escort carriers proved to be a vital part of Allied convoys, increasing the effective protection radius and helping to close the Mid-Atlantic gap. Carriers were also more economical than battleships because of the relatively low cost of aircraft and because they did not need to be as heavily armoured. Submarines, which had proved to be an effective weapon during the First World War, were anticipated by all sides to be important in the second. The British focused development on anti-submarine weaponry and tactics, such as sonar and convoys, while Germany focused on improving its offensive capability, with designs such as the Type VII submarine and wolfpack tactics. Gradually, improving Allied technologies such as the Leigh light, hedgehog, squid, and homing torpedoes proved decisive.

Land warfare changed from the static front lines of World War I to increased mobility and combined arms.
The tank, which had been used predominantly for infantry support in the First World War, had evolved into the primary weapon. In the late 1930s, tank design was considerably more advanced than it had been during World War I, and advances continued throughout the war with increases in speed, armour and firepower. At the start of the war, most commanders thought enemy tanks should be met by tanks with superior specifications. This idea was challenged by the poor performance of the relatively light early tank guns against armour and by the German doctrine of avoiding tank-versus-tank combat. This, along with Germany's use of combined arms, was among the key elements of their highly successful blitzkrieg tactics across Poland and France. Many means of destroying tanks were used, including indirect artillery, anti-tank guns (both towed and self-propelled), mines, short-ranged infantry anti-tank weapons, and other tanks.

Even with large-scale mechanisation, infantry remained the backbone of all forces, and throughout the war most infantry were equipped similarly to their World War I counterparts. The portable machine gun spread, a notable example being the German MG34, as did various submachine guns, which were suited to close combat in urban and jungle settings. The assault rifle, a late-war development incorporating many features of the rifle and submachine gun, became the standard postwar infantry weapon for most armed forces.

Most major belligerents attempted to solve the problems of complexity and security involved in using large codebooks for cryptography by designing ciphering machines, the best known being the German Enigma machine. Development of SIGINT (signals intelligence) and cryptanalysis enabled the countering process of decryption. Notable examples were the Allied decryption of Japanese naval codes and British Ultra, a pioneering method for decoding Enigma that benefited from information given to Britain by the Polish Cipher Bureau, which had been decoding early versions of Enigma before the war. Another aspect of military intelligence was the use of deception, which the Allies used to great effect, such as in operations Mincemeat and Bodyguard.

Other technological and engineering feats achieved during, or as a result of, the war include the world's first programmable computers (Z3, Colossus, and ENIAC), guided missiles and modern rockets, the Manhattan Project's development of nuclear weapons, operations research, and the development of artificial harbours and oil pipelines under the English Channel.
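To make the rotor-cipher idea behind machines like Enigma concrete, below is a minimal, hypothetical Python sketch (nothing beyond the standard library is assumed). A rotor is a fixed substitution alphabet that steps one position after every letter, so the same plaintext letter encrypts differently at each position; the wiring string used here matches the historical Enigma rotor I, but the single-rotor design, with no reflector or plugboard, is purely illustrative rather than a model of the real machine.

```python
import string

ALPHABET = string.ascii_uppercase
# One fixed rotor wiring: the substitution alphabet of the historical
# Enigma rotor I. A real Enigma stacked three or more such rotors.
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"

def encrypt(text: str, start: int = 0) -> str:
    """Encrypt A-Z text; the rotor offset advances by one per letter."""
    out = []
    offset = start
    for ch in text.upper():
        if ch not in ALPHABET:
            continue  # drop spaces and punctuation, as period ciphers did
        # Shift the letter into the rotor's current frame, substitute,
        # then shift back out of the frame.
        sub = ROTOR[(ALPHABET.index(ch) + offset) % 26]
        out.append(ALPHABET[(ALPHABET.index(sub) - offset) % 26])
        offset = (offset + 1) % 26  # the rotor steps after every keypress
    return "".join(out)

def decrypt(cipher: str, start: int = 0) -> str:
    """Invert the substitution, stepping the rotor the same way."""
    out = []
    offset = start
    for ch in cipher:
        sub = ALPHABET[(ALPHABET.index(ch) + offset) % 26]
        out.append(ALPHABET[(ROTOR.index(sub) - offset) % 26])
        offset = (offset + 1) % 26
    return "".join(out)

if __name__ == "__main__":
    message = "ATTACKATDAWN"
    secret = encrypt(message, start=3)
    print(secret, decrypt(secret, start=3))  # round-trips to the message
```

Single-rotor ciphers of this sort were already breakable with paper-and-pencil methods; Enigma's strength, and the difficulty the Polish Cipher Bureau and the Ultra programme overcame, came from chaining several rotors whose combined period and unknown daily settings made the effective substitution far harder to reconstruct.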
when was the first roller skating rink opened
Roller skating - wikipedia Roller skating is traveling on surfaces with roller skates. It is a form of recreational activity as well as a sport, and can also be a form of transportation. Skates generally come in three basic varieties: quad roller skates, inline skates (or blades) and tri-skates, though some have experimented with a single-wheeled "quintessence skate" or other variations on the basic skate design.

In America, this hobby was most popular first between 1935 and the early 1960s, then in the 1970s, when polyurethane wheels were created and roller rinks oriented to disco music were all the rage, and then again in the 1990s, when inline outdoor roller skating took hold thanks to the improvements made to inline roller skates in 1981 by Scott Olson. During the late 1980s and early 1990s, the Rollerblade-branded skates became so successful that they inspired many other companies to create similar inline skates, and the inline design became more popular than the traditional quads. The Rollerblade skates became synonymous in the minds of many with "inline skates" and skating, so much so that many people came to call any form of skating "Rollerblading", thus making it a genericized trademark.

For much of the 1980s and into the 1990s, inline skate models typically sold for general public use employed a hard plastic boot, similar to ski boots. In or about 1995, "soft boot" designs were introduced to the market, primarily by the sporting goods firm K2 Inc., and promoted for use as fitness skates. Other companies quickly followed, and by the early 2000s the development of hard-shell skates and skeletons became primarily limited to the aggressive inline skating discipline and other specialized designs. The single-wheel "quintessence skate" was made in 1988 by Miyshael F. Gailson of Caples Lake Resort, California, for the purpose of cross-country skate skiing and telemark skiing training. Other experimental skate designs over the years have included two-wheeled (heel and toe) inline skate frames, but the vast majority of skates on the market today are either quad or standard inline designs.

Artistic roller skating is a sport which consists of a number of events. These are usually accomplished on quad skates, but inline skates may be used for some events. Various flights of events are organized by age and ability/experience. In the US, local competitions lead to nine regional competitions, which lead to the National Championships and World Championships.

A figure is a prescribed movement symmetrically composed of at least two circles, but not more than three circles, involving primary, or primary and secondary, movements, with or without turns. Figures are skated on circles which have been inscribed on the skating surface.

In competition, skaters can enter more than one event. In solo dance, competition starts at tiny tot and goes up to golden; for tests, it starts with bronze and goes up to gold. Skaters no longer have to take tests to skate in harder categories, though a couple of tests are required once they reach a certain event. In competition, these dances are set patterns, and the judges award marks for good edges, neatness, and how well turns are executed. In team dance, two people skate the set dances together; most people skate with a partner of similar ability and age. Skaters are judged by the accuracy of the steps they skate when performing a particular dance.
In addition to being judged on their edges and turns, skaters must carry themselves in an elegant manner while paying careful attention to the rhythm and timing of the music.

Freestyle roller dancing is a style of physical movement, usually done to music, that isn't choreographed or planned ahead of time. It occurs in many genres, including those where people dance with partners. By definition, this kind of dance is never the same from performance to performance, although it can be done formally and informally, sometimes using some sparse choreography as a very loose outline for the improvisation.

In team events, a team of skaters (usually counted in multiples of four) creates various patterns and movements to music. Often-used elements include skating in a line, skating in a box, "splicing" (subgroups skating towards each other such that they do not contact each other), and skating in a circle. The team is judged on its choreography and the ability to skate together precisely.

In singles and pairs events, a single skater or a pair of skaters presents routines to music. They are judged on skating ability and creativity. Jumps, spins and turns are expected in these events. For pairs or couples, slower music is sometimes played, usually two songs.

Speed skating originally started on traditional roller skates. The speed skating season began in fall and continued through spring, leading up to a state tournament. Placing in the top three places at a state tournament would qualify skaters for a regional tournament. The top three places at regional tournaments then went on to compete at a national tournament. Skaters could qualify as individuals or as part of a two-person or four-person (relay) team. Qualification at regional events could warrant an invitation to the Olympic Training Center in Colorado Springs, Colorado, for a one-week training session on its outdoor velodrome. Inline speed skating is a competitive non-contact sport played on inline skates. Variants include indoor, track and road racing, with many different grades of skaters, so the whole family can compete.

Among skaters not committed to a particular discipline, a popular social activity is the group skate or street skate, in which large groups of skaters regularly meet to skate together, usually on city streets. One such group is the San Francisco Midnight Rollers. In 1989, the small 15-20-person group that became the Midnight Rollers explored the closed double-decker Embarcadero Freeway after the Loma Prieta earthquake, until it was torn down. A new route was then created, settling on Friday nights at 9 pm from the San Francisco Ferry Building, circling 12 miles around the city and returning to the start at midnight. Although such touring existed among quad roller skate clubs in the 1970s and 1980s, it made the jump to inline skates in 1990 with groups in large cities throughout the United States. In some cases, hundreds of skaters would regularly participate, resembling a rolling party. In the late 1990s, the group skate phenomenon spread to Europe and East Asia. The weekly Friday night skate in Paris, France (called Pari Roller) is believed to be one of the largest repeating group skates in the world. At times, it has had as many as 35,000 skaters participating on the boulevards of Paris on a single night. The Sunday Skate Night in Berlin also attracts over 10,000 skaters during the summer, and Copenhagen, Munich, Frankfurt, Amsterdam, Buenos Aires, London, San Francisco, Los Angeles, New York, and Tokyo host other popular events.
Charity skates in Paris have attracted 50,000 participants (the yearly Paris-Versailles skate). The current official Guinness World Record holder is Nightskating Warszawa (Poland), with 4,013 participants on 19 June 2014; the group's unofficial record, from 25 April 2015, is 7,303 participants, and it drew over 38,000 skaters in total across 10 events in the 2015 season.

Aggressive inline skating is trick-based skating, in which the individual performs tricks using a slightly different skate from the normal design. The skate has a grind block between two wheels, and the various companies have designed the boots to take these extra strains. The wheels also have a large, flat contact surface for grip. Aggressive inline can take place either at a skate park or on the street. Tricks are predominantly grinds, but also include air tricks such as spins and flips.

Roller hockey is the overarching name for a roller sport that existed long before inline skates were invented. Roller hockey has been played on quad skates in many countries worldwide and so has many names. Roller hockey was a demonstration sport at the 1992 Summer Olympics in Barcelona. In the United States, the controlling organization is USA Roller Sports, headquartered in Lincoln, Nebraska, also home of the National Museum of Roller Skating. Nationals are held each summer, with skaters required to qualify through state and regional competitions.

Roller derby is a team sport played on roller skates on an oval track. Originally a trademarked product developed out of speed skating demonstrations, the sport is currently experiencing a revival as a grass-roots-driven five-a-side sport played mainly by women. Most roller derby leagues adopt the rules and guidelines set by the Women's Flat Track Derby Association or its male counterpart, the Men's Roller Derby Association, but there are leagues that play on a banked track, as the sport originally was from c. 1933 to 1998.

Roller skating, like skateboarding, has created a number of spin-off sports and sports devices beyond rollerblades/inline skates.
who wrote the song there is a house in new orleans
The House of the Rising Sun - wikipedia "The House of the Rising Sun" is a traditional folk song, sometimes called "Rising Sun Blues". It tells of a life gone wrong in New Orleans; many versions also urge a sibling to avoid the same fate. The most successful commercial version, recorded in 1964 by the British rock group the Animals, was a number one hit on the UK Singles Chart and also in the United States and France. As a traditional folk song recorded by an electric rock band, it has been described as the "first folk rock hit".

Like many classic folk ballads, "The House of the Rising Sun" is of uncertain authorship. Musicologists say that it is based on the tradition of broadside ballads, and thematically it has some resemblance to the 16th-century ballad "The Unfortunate Rake". According to Alan Lomax, "Rising Sun" was used as the name of a bawdy house in two traditional English songs, and it was also a name for English pubs. He further suggested that the melody might be related to a 17th-century folk song, "Lord Barnard and Little Musgrave", also known as "Matty Groves", but a survey by Bertrand Bronson showed no clear relationship between the two songs. Lomax proposed that the location of the house was relocated from England to New Orleans by white southern performers. However, Vance Randolph proposed an alternative French origin, the "rising sun" referring to the decorative use of the sunburst insignia dating to the time of Louis XIV, which was brought to North America by French immigrants.

"House of Rising Sun" was said to have been known by miners in 1905. The oldest published version of the lyrics is that printed by Robert Winslow Gordon in 1925, in a column "Old Songs That Men Have Sung" in Adventure Magazine. The lyrics of that version begin:

There is a house in New Orleans, it's called the Rising Sun
It's been the ruin of many a poor girl

The oldest known recording of the song, under the title "Rising Sun Blues", is by Appalachian artists Clarence "Tom" Ashley and Gwen Foster, who recorded it for Vocalion Records on September 6, 1933. Ashley said he had learned it from his grandfather, Enoch Ashley. Roy Acuff, an "early-day friend and apprentice" of Ashley's, learned it from him and recorded it as "Rising Sun" on November 3, 1938. Several older blues recordings of songs with similar titles are unrelated, for example, "Rising Sun Blues" by Ivy Smith (1927) and "The Risin' Sun" by Texas Alexander (1928).

The song was among those collected by folklorist Alan Lomax, who, along with his father, was a curator of the Archive of American Folk Song for the Library of Congress. On an expedition with his wife to eastern Kentucky, Lomax set up his recording equipment in Middlesboro, Kentucky, in the house of singer and activist Tilman Cadle. In 1937 he recorded a performance by Georgia Turner, the 16-year-old daughter of a local miner. He called it "The Rising Sun Blues". Lomax later recorded a different version sung by Bert Martin and a third sung by Daw Henson, both eastern Kentucky singers. In his 1941 songbook Our Singing Country, Lomax credits the lyrics to Turner, with reference to Martin's version. In 1941, Woody Guthrie recorded a version. A recording made in 1947 by Libby Holman and Josh White (who is also credited with having written new words and music that have subsequently been popularized in the versions made by many other later artists) was released by Mercury Records in 1950.
White learned the song from a "white hillbilly singer", who might have been Ashley, in North Carolina in 1923-1924. Lead Belly recorded two versions of the song, in February 1944 and in October 1948, called "In New Orleans" and "The House of the Rising Sun", respectively; the latter was recorded in sessions that were later used on the album Lead Belly's Last Sessions (1994, Smithsonian Folkways). In 1957, Glenn Yarbrough recorded the song for Elektra Records. The song is also credited to Ronnie Gilbert on an album by The Weavers released in the late 1940s or early 1950s. Pete Seeger released a version on Folkways Records in 1958, which was re-released by Smithsonian Folkways in 2009. Andy Griffith recorded the song on his 1959 album Andy Griffith Shouts the Blues and Old Timey Songs. In 1960, Miriam Makeba recorded the song on her eponymous RCA album. Joan Baez recorded it in 1960 on her self-titled debut album; she frequently performed the song in concert throughout her career. Nina Simone recorded her first version for the album Nina at the Village Gate in 1962. Tim Hardin sang it on This Is Tim Hardin, recorded in 1964 but not released until 1967. The Chambers Brothers recorded a version on Feelin' the Blues, released on Vault Records (1970).

In late 1961, Bob Dylan recorded the song for his debut album, released in March 1962. That release had no songwriting credit, but the liner notes indicate that Dylan learned this version of the song from Dave Van Ronk. In an interview for the documentary No Direction Home, Van Ronk said that he had been intending to record the song and that Dylan copied his version. Van Ronk recorded it soon thereafter for the album Just Dave Van Ronk. As Van Ronk recalled:

I had learned it sometime in the 1950s, from a recording by Hally Wood, the Texas singer and collector, who had got it from an Alan Lomax field recording by a Kentucky woman named Georgia Turner. I put a different spin on it by altering the chords and using a bass line that descended in half steps -- a common enough progression in jazz, but unusual among folksingers. By the early 1960s, the song had become one of my signature pieces, and I could hardly get off the stage without doing it.

An interview with Eric Burdon revealed that he first heard the song in a club in Newcastle, England, where it was sung by the Northumbrian folk singer Johnny Handle. The Animals were on tour with Chuck Berry and chose it because they wanted something distinctive to sing. The Animals' version transposes the narrative of the song from the point of view of a woman led into a life of degradation to that of a man whose father was a gambler and drunkard, rather than the sweetheart of earlier versions.

The Animals had begun featuring their arrangement of "House of the Rising Sun" during a joint concert tour with Chuck Berry, using it as their closing number to differentiate themselves from acts that always closed with straight rockers. It got a tremendous reaction from the audience, convincing the initially reluctant producer Mickie Most that it had hit potential, and between tour stops the group went to a small recording studio on Kingsway in London to capture it. The song was recorded in just one take on May 18, 1964, and it starts with a now-famous electric guitar A minor chord arpeggio by Hilton Valentine. According to Valentine, he simply took Dylan's chord sequence and played it as an arpeggio. The performance takes off with Burdon's lead vocal, which has been variously described as "howling", "soulful", and as
"deep and gravelly as the north-east English coal town of Newcastle that spawned him." Finally, Alan Price's pulsating organ part (played on a Vox Continental) completes the sound. Burdon later said, "We were looking for a song that would grab people's attention."

As recorded, "House of the Rising Sun" ran four and a half minutes, regarded as far too long for a pop single at the time. Producer Most, who initially did not really want to record the song at all, said that on this occasion "everything was in the right place... It only took 15 minutes to make so I can't take much credit for the production". He was nonetheless now a believer and declared it a single at its full length, saying "We're in a microgroove world now, we will release it." In the United States, however, the original single (MGM 13264) was a 2:58 version. The MGM Golden Circle reissue (KGC 179) featured the unedited 4:29 version, although the record label gives the edited playing time of 2:58. The edited version was included on the group's 1964 U.S. debut album The Animals, while the full version was later included on their best-selling 1966 U.S. greatest hits album, The Best of the Animals. However, the very first American release of the full-length version was on a 1965 album of various groups entitled Mickie Most Presents British Go-Go (MGM SE-4306), the cover of which, under the listing of "House of the Rising Sun", described it as the "Original uncut version". Americans could also hear the complete version in the movie Go Go Mania in the spring of 1965. "House of the Rising Sun" was not included on any of the group's British albums, but it was reissued as a single twice in subsequent decades, charting both times, reaching number 25 in 1972 and number 11 in 1982.

The Animals' version was played in 6/8 metre, unlike the 4/4 of most earlier versions. Arranging credit went only to Alan Price. According to Burdon, this was simply because there was insufficient room to name all five band members on the record label, and Alan Price's first name came first alphabetically. However, this meant that only Price received songwriter's royalties for the hit, a fact that has caused bitterness ever since, especially with Valentine.

"House of the Rising Sun" was a trans-Atlantic hit: after reaching the top of the UK pop singles chart in July 1964, it topped the U.S. pop singles chart two months later, on September 5, 1964, where it stayed for three weeks, and became the first British Invasion number one unconnected with the Beatles. It was the group's breakthrough hit in both countries and became their signature song. The song was also a hit in a number of other countries, including Ireland, where it reached No. 10 and dropped off the charts one week later.

According to John Steel, Bob Dylan told him that when he first heard the Animals' version on his car radio, he stopped to listen, "jumped out of his car" and "banged on the bonnet", inspiring him to go electric. But he stopped playing the song after the Animals' recording became a hit because fans accused him of plagiarism. Dave Van Ronk said that the Animals' version -- like Dylan's version before it -- was based on his arrangement of the song. Dave Marsh described the Animals' take on "The House of the Rising Sun" as "the first folk-rock hit", sounding "as if they'd connected the ancient tune to a live wire."
Writer Ralph McLean of the BBC agreed that "it was arguably the first folk rock tune", calling it "a revolutionary single" after which "the face of modern music was changed forever". The Animals' rendition of the song is recognized as one of the classics of British pop music. Writer Lester Bangs labeled it "a brilliant rearrangement" and "a new standard rendition of an old standard composition". It ranked number 122 on Rolling Stone magazine's list of "500 Greatest Songs of All Time". It is also one of the Rock and Roll Hall of Fame's "500 Songs That Shaped Rock and Roll". The RIAA ranked it number 240 on their list of "Songs of the Century". In 1999 it received a Grammy Hall of Fame Award. It has long since become a staple of oldies and classic rock radio formats. A 2005 Channel Five poll ranked it as Britain's fourth-favourite number one song.

In 1969, the Detroit band Frijid Pink recorded a psychedelic version of "House of the Rising Sun", which became an international hit in 1970. Their version is in 4/4 time (like Van Ronk's and most earlier versions, rather than the 6/8 used by the Animals) and was driven by Gary Ray Thompson's distorted guitar, with fuzz and wah-wah effects, set against the frenetic drumming of Richard Stevers. According to Stevers, the Frijid Pink recording of "House of the Rising Sun" was done impromptu when there was time left over at a recording session booked for the group at the Tera Shirma Recording Studios. Stevers later played snippets from that session's tracks for Paul Cannon, the music director of Detroit's premier rock radio station, WKNR; the two knew each other, as Cannon was the father of Stevers's girlfriend. Stevers recalled, "we went through the whole thing and [Cannon] didn't say much. Then 'House [of the Rising Sun]' started up and I immediately turned it off because it wasn't anything I really wanted him to hear." However, Cannon was intrigued and had Stevers play the complete track for him, then advised Stevers, "Tell Parrot [Frijid Pink's label] to drop 'God Gave Me You' [the group's current single] and go with this one."

Frijid Pink's "House of the Rising Sun" debuted at number 29 on the WKNR hit parade dated January 6, 1970, and broke nationally after some seven weeks -- during which the track was re-serviced to radio three times -- with a number 73 debut on the Hot 100 in Billboard dated February 27, 1970 (number 97 in Canada on January 31, 1970) and a subsequent three-week ascent to the Top 30 en route to a Hot 100 peak of number 7 on April 4, 1970. The certification of the Frijid Pink single "House of the Rising Sun" as a gold record for domestic sales of one million units was reported in the issue of Billboard dated May 30, 1970. The Frijid Pink single of "House of the Rising Sun" would give the song its most widespread international success, with Top Ten status reached in Austria (number 3), Belgium (Flemish region, number 6), Canada (number 3), Denmark (number 3), Germany (two weeks at number 1), Greece, Ireland (number 7), Israel (number 4), the Netherlands (number 3), Norway (seven weeks at number 1), Poland (number 2), Sweden (number 6), Switzerland (number 2), and the UK (number 4). The single also charted in Australia (number 14), France (number 36), and Italy (number 54).

The song has twice been a hit record on Billboard's country chart. In 1973, Jody Miller's version reached number 29 on the country charts and number 41 on the adult contemporary chart.
The song was recorded by Brian Johnson's band Geordie for their 1974 album Don't Be Fooled by the Name; Johnson went on to become the vocalist for AC/DC. It was also recorded as an electric but fairly mellow instrumental on the Deep Purple tribute album made by Thin Lizzy circa 1972, Funky Junction Play a Tribute to Deep Purple; there it was simply called "Rising Sun" and credited to Leo Muller, the German businessman, producer and label owner.

In September 1981, Dolly Parton released a cover of the song as the third single from her album 9 to 5 and Odd Jobs. Like Miller's earlier country hit, Parton's remake returns the song to its original lyric of being about a fallen woman. The Parton version makes it quite blunt, with a few new lyric lines that were written by Parton. Parton's remake reached number 14 on the U.S. country singles chart and crossed over to the pop charts, where it reached number 77 on the Billboard Hot 100; it also reached number 30 on the U.S. adult contemporary chart. Parton has occasionally performed the song live, including on her 1987-88 television show, in an episode taped in New Orleans.

The American heavy metal band Five Finger Death Punch released a cover of "House of the Rising Sun" on their fifth studio album, The Wrong Side of Heaven and the Righteous Side of Hell, Volume 2; it was later released as the album's second single and the band's third single of the Wrong Side era. The references to New Orleans have been changed to Sin City, a reference to the negative effects of gambling in Las Vegas. The song was a top ten hit on mainstream rock radio in the United States. It also appears in the video game Guitar Hero Live.

The song was covered in French by Johnny Hallyday. His version (titled "Le Pénitencier") was released in October 1964 and spent one week at number 1 on the singles sales chart in France (from October 17 to 23). In Wallonia (French Belgium) his single spent 28 weeks on the chart, also peaking at number 1. He performed the song during his 2014 USA tour.

Various places in New Orleans have been proposed as the inspiration for the song, with varying plausibility. The phrase "House of the Rising Sun" is often understood as a euphemism for a brothel, but it is not known whether the house described in the lyrics was an actual or a fictitious place. One theory is that the song is about a woman who killed her father, an alcoholic gambler who had beaten his wife. In that reading, the House of the Rising Sun may be a jailhouse, from which one would be the first person to see the sun rise (an idea supported by the lyric mentioning "a ball and chain", though that phrase has been slang for marital relationships for at least as long as the song has been in print). Because women often sang the song, another theory is that the House of the Rising Sun was where prostitutes were detained while being treated for syphilis; since cures with mercury were ineffective, going back was very unlikely.

Only three candidates that use the name Rising Sun have historical evidence, from old city directories and newspapers. The first was a small, short-lived hotel on Conti Street in the French Quarter in the 1820s. It burned down in 1822. An excavation and document search in early 2005 found evidence that supported this claim, including an advertisement with language that may have euphemistically indicated prostitution. Archaeologists found an unusually large number of pots of rouge and cosmetics at the site.
The second possibility was a "Rising Sun Hall" listed in late 19th-century city directories on what is now Cherokee Street, at the riverfront in the uptown Carrollton neighborhood, which seems to have been a building owned and used for meetings of a Social Aid and Pleasure Club and commonly rented out for dances and functions. It also is no longer extant. Definite links to gambling or prostitution (if any) are undocumented for either of these buildings.

A third was "The Rising Sun", which advertised in several local newspapers in the 1860s, located on what is now the lake side of the 100 block of Decatur Street. In various advertisements it is described as a "Restaurant", a "Lager Beer Salon", and a "Coffee House". At the time, New Orleans businesses listed as coffee houses often also sold alcoholic beverages.

Dave Van Ronk claimed in his memoir The Mayor of MacDougal Street that at one time when he was in New Orleans, someone approached him with a number of old photos of the city from the turn of the century. Among them "was a picture of a forbidding stone doorway with a carving on the lintel of a stylized rising sun... It was the Orleans Parish women's prison."

Bizarre New Orleans, a guidebook on New Orleans, asserts that the real house was at 1614 Esplanade Avenue between 1862 and 1874 and was said to have been named after its madam, Marianne LeSoleil Levant, whose surname means "the rising sun" in French. Another guidebook, Offbeat New Orleans, asserts that the real House of the Rising Sun was at 826-830 St. Louis St. between 1862 and 1874, also purportedly named for Marianne LeSoleil Levant. The building still stands, and Eric Burdon, after visiting at the behest of the owner, said, "The house was talking to me." There is a contemporary B&B called the House of the Rising Sun, decorated in brothel style; the owners are fans of the song, but there is no connection with the original place.

Not everyone believes that the house actually existed. Pamela D. Arceneaux, a research librarian at the Williams Research Center in New Orleans, is quoted as saying:

I have made a study of the history of prostitution in New Orleans and have often confronted the perennial question, "Where is the House of the Rising Sun?" without finding a satisfactory answer. Although it is generally assumed that the singer is referring to a brothel, there is actually nothing in the lyrics that indicate that the "house" is a brothel. Many knowledgeable persons have conjectured that a better case can be made for either a gambling hall or a prison; however, to paraphrase Freud: sometimes lyrics are just lyrics.
what did the blind man give the captain in treasure island
Treasure Island - wikipedia Treasure Island is an adventure novel by Scottish author Robert Louis Stevenson, narrating a tale of "buccaneers and buried gold". Its influence on popular perceptions of pirates is enormous, including such elements as treasure maps marked with an "X", schooners, the Black Spot, tropical islands, and one-legged seamen bearing parrots on their shoulders. Treasure Island was originally considered a coming-of-age story and is noted for its atmosphere, characters, and action. It is one of the most frequently dramatized of all novels. It was originally serialized in the children's magazine Young Folks between 1881 and 1882 under the title Treasure Island, or the Mutiny of the Hispaniola, credited to the pseudonym "Captain George North". It was first published as a book on 14 November 1883 by Cassell & Co.

An old sailor, calling himself "the captain" -- real name Billy Bones -- comes to lodge at the Admiral Benbow Inn on the west English coast during the mid-18th century, paying the innkeeper's son, Jim Hawkins, a few pennies to keep a lookout for a one-legged "seafaring man". A seaman with intact legs, but lacking two fingers, shows up to confront Billy about sharing his treasure map. After running the stranger off in a violent fight, Billy, who drinks far too much rum, has a stroke and tells Jim that his former shipmates covet the contents of his sea chest. After a visit from an evil blind man named Pew, who gives him "the black spot" as a summons to share the treasure, Billy has another stroke and dies; Jim and his mother (his father having died just a few days before) unlock the sea chest, finding some money, a journal, and a map. The local physician, Dr. Livesey, and the district squire, Trelawney, deduce that the map is of the island where the deceased pirate Captain Flint buried his treasure. Squire Trelawney proposes buying a ship and going after the treasure, taking Livesey as ship's doctor and Jim as cabin boy.

Several weeks later, the squire introduces Jim and Dr. Livesey to "Long John" Silver, a one-legged Bristol tavern-keeper whom he has hired as ship's cook. (Silver enhances his outré attributes -- crutch, pirate argot, etc. -- with a talking parrot.) They also meet Captain Smollett, who tells them that he dislikes most of the crew on the voyage, which, it seems, everyone in Bristol knows is a search for treasure. After taking a few precautions, however, they set sail on Trelawney's schooner, the Hispaniola, for the distant island. During the voyage, the first mate, a drunkard, disappears overboard. And just before the island is sighted, Jim -- concealed in an apple barrel -- overhears Silver talking with two other crewmen. Most of the crew are former "gentlemen o' fortune" (pirates) from Flint's old crew and have planned a mutiny. Jim alerts the captain, doctor, and squire, and they calculate that they will be seven against nineteen mutineers and must pretend not to suspect anything until the treasure is found, when they can surprise their adversaries. But after the ship is anchored, Silver and some of the others go ashore, and two men who refuse to join the mutiny are killed -- one with so loud a scream that everyone realizes that there can be no more pretence.
Jim has impulsively joined the shore party and covertly witnessed Silver committing one of the murders; now, in fleeing, he encounters a half-crazed Englishman, Ben Gunn, who tells him he was marooned there and that he can help against the mutineers in return for passage home and part of the treasure.

Meanwhile, Smollett, Trelawney, and Livesey, along with Trelawney's three servants and one of the other hands, Abraham Gray, abandon the ship and come ashore to occupy an old abandoned stockade. The men still on the ship, led by the coxswain Israel Hands, run up the pirate flag. One of Trelawney's servants and one of the pirates are killed in the fight to reach the stockade, and the ship's gun keeps up a barrage upon them, to no effect, until dark, when Jim finds the stockade and joins them. The next morning, Silver appears under a flag of truce, offering terms that the captain refuses, and revealing that another pirate has been killed in the night (by Gunn, Jim realizes, although Silver does not). At Smollett's refusal to surrender the map, Silver threatens an attack, and, within a short while, the attack on the stockade is launched. After a battle, the surviving mutineers retreat, having lost five men, but two more of the captain's group have been killed and Smollett himself is badly wounded.

When Livesey leaves in search of Gunn, Jim runs away without permission and finds Gunn's homemade coracle. After dark, he goes out and cuts the ship adrift. The two pirates on board, Hands and O'Brien, interrupt their drunken quarrel to run on deck, but the ship -- with Jim's boat in her wake -- is swept out to sea on the ebb tide. Exhausted, Jim falls asleep in the boat and wakes up the next morning, bobbing along on the west coast of the island, carried by a northerly current. Eventually, he encounters the ship, which seems deserted, but getting on board, he finds O'Brien dead and Hands badly wounded. He and Hands agree to beach the ship at an inlet on the northern coast of the island. As the ship is about to beach, Hands attempts to kill Jim but is himself killed in the attempt. Then, after securing the ship as well as he can, Jim goes back ashore and heads for the stockade.

Once there, in utter darkness, he enters the blockhouse -- to be greeted by Silver and the remaining five mutineers, who have somehow taken over the stockade in his absence. Silver and the others argue about whether to kill Jim, and Silver talks them down. He tells Jim that, when everyone found the ship was gone, the captain's party agreed to a treaty whereby they gave up the stockade and the map. In the morning, the doctor arrives to treat the wounded and sick pirates and tells Silver to look out for trouble when they find the site of the treasure. After he leaves, Silver and the others set out with the map, taking Jim along as hostage. They encounter a skeleton, arms apparently oriented toward the treasure, which seriously unnerves the party. Eventually, they find the treasure cache -- empty. The pirates are about to charge at Silver and Jim, but shots are fired by Livesey, Gray, and Gunn, from ambush. One pirate is killed, and George Merry is wounded but quickly killed by Silver. The other three run away, and Livesey explains that Gunn had already found the treasure and taken it to his cave. In the next few days, they load much of the treasure onto the ship, abandon the three remaining mutineers (with supplies and ammunition) and sail away.
At their first port in Spanish America, where they will sign on more crew, Silver steals a bag of money and escapes. The rest sail back to Bristol and divide up the treasure. Jim says there is more left on the island, but he for one will not undertake another voyage to recover it.

Stevenson conceived the idea of Treasure Island (originally titled The Sea Cook: A Story for Boys) from a map of an imaginary, romantic island idly drawn by Stevenson and his stepson Lloyd Osbourne on a rainy day in Braemar, Scotland. Stevenson had just returned from his first stay in America, with memories of poverty, illness, and adventure (including his recent marriage), and a warm reconciliation between his parents had been established. Stevenson himself said in designing the idea of the story that "it was to be a story for boys; no need of psychology or fine writing; and I had a boy at hand to be a touchstone. Women were excluded... and then I had an idea for Long John Silver from which I promised myself funds of entertainment; to take an admired friend of mine... to deprive him of all his finer qualities and higher graces of temperament, and to leave him with nothing but his strength, his courage, his quickness, and his magnificent geniality, and to try to express these in terms of the culture of a raw tarpaulin."

Completing 15 chapters in as many days, Stevenson was interrupted by illness and, after leaving Scotland, continued working on the first draft outside London. While there, his father provided additional impetus, as the two discussed points of the tale, and it was Stevenson's father who suggested the scene of Jim in the apple barrel and the name Walrus for Captain Flint's ship.

Two general types of sea novels were popular during the 19th century: the navy yarn, which places a capable officer in adventurous situations amid realistic settings and historical events; and the desert island romance, which features shipwrecked or marooned characters confronted by treasure-seeking pirates or angry natives. Around 1815, the latter genre became one of the most popular fictional styles in Great Britain, perhaps because of the philosophical interest in Rousseau and Chateaubriand's "noble savage". Treasure Island was a climax of this development. The growth of the desert island genre can be traced back to 1719, when Daniel Defoe's legendary Robinson Crusoe was published. A century later, novels such as S.H. Burney's The Shipwreck (1816) and Sir Walter Scott's The Pirate (1822) continued to expand upon the strong influence of Defoe's classic. Other authors continued this work into the mid-19th century, with examples including James Fenimore Cooper's The Pilot (1823). During the same period, Edgar Allan Poe wrote "MS. Found in a Bottle" (1833) and the intriguing tale of buried treasure, "The Gold-Bug" (1843). All of these works influenced Stevenson's end product.

Stevenson also consciously borrowed material from previous authors. In a July 1884 letter to Sidney Colvin, he writes: "Treasure Island came out of Kingsley's At Last, where I got the Dead Man's Chest -- and that was the seed -- and out of the great Captain Johnson's History of the Notorious Pirates." Stevenson also admits that he took the idea of Captain Flint's skeleton pointer from Poe's "The Gold-Bug", and he constructed Billy Bones's history from the pages of Washington Irving, one of his favorite writers.
One month after he conceived of "The Sea Cook", chapters began to appear in the pages of Young Folks magazine. Eventually, the entire novel ran in 17 weekly installments, from 1 October 1881 through 28 January 1882. Later the book was republished as the novel Treasure Island, and it proved to be Stevenson's first financial and critical success. William Gladstone (1809-1898), the zealous Liberal politician who served four terms as British prime minister between 1868 and 1894, was one of the book's biggest fans.

Among other minor characters whose names are not revealed are the four pirates killed in the attack on the stockade along with Job Anderson; the pirate killed by the honest men (Jim Hawkins excepted) before the attack on the stockade; the pirate shot by Squire Trelawney when aiming at Israel Hands, who later died of his injuries; and the pirate marooned on the island along with Tom Morgan and Dick.

Stevenson deliberately leaves the exact date of the novel obscure, Hawkins writing that he takes up his pen "in the year of grace 17 --". Stevenson's map of Treasure Island includes the annotations "Treasure Island 1 August 1750 J.F." and "Given by above J.F. to Mr W. Bones Maste of ye Walrus Savannah this twenty July 1754 W.B." Other dates mentioned include 1745, the date Dr. Livesey served as a soldier at Fontenoy, and a date appearing in Billy Bones's log.

Various claims have been made that one island or another inspired Treasure Island. The Pirate's House in Savannah, Georgia, is claimed to be where Captain Flint spent his last days, and his ghost is said to haunt the property.

There have been over 50 movie and TV versions made, as well as a number of sequels, including a 1954 film titled Return to Treasure Island, a 1986 Disney mini-series, a 1992 animated version, and 1996 and 1998 TV versions. There have been over 24 major stage adaptations; the number of minor adaptations remains countless.

A computer game based loosely on the novel was written by Greg Duddle and published by Mr. Micro (and often rebranded by Commodore) on the Commodore 16, Commodore Plus/4, Commodore 64, and ZX Spectrum. A graphical adventure game, it has the player take the part of Jim Hawkins, travelling around the island dispatching pirates with cutlasses before getting the treasure and being chased back to the ship by Long John Silver. Another Treasure Island adventure game based upon the novel was released in 1985, published by Windham Classics. The LucasArts adventure Monkey Island is partly based on Treasure Island, borrowing many of its plot points and characters and making many humorous references to the book. Disney has released various video games based on the animated film Treasure Planet, including Treasure Planet: Battle at Procyon. Treasure Island (2010) is a hidden-object game launched by French publisher Anuman Interactive.

Half of Stevenson's original manuscripts are lost, including those of Treasure Island, The Black Arrow, and The Master of Ballantrae. Stevenson's heirs sold his papers during World War I, and many of his documents were auctioned off in 1918.
application of chemistry in the field of medicine
Medicinal chemistry - wikipedia Medicinal chemistry and pharmaceutical chemistry are disciplines at the intersection of chemistry, especially synthetic organic chemistry, and pharmacology and various other biological specialties, where they are involved with the design, chemical synthesis and development for market of pharmaceutical agents, or bio-active molecules (drugs). Compounds used as medicines are most often organic compounds, which are often divided into the broad classes of small organic molecules (e.g., atorvastatin, fluticasone, clopidogrel) and "biologics" (infliximab, erythropoietin, insulin glargine), the latter of which are most often medicinal preparations of proteins (natural and recombinant antibodies, hormones, etc.). Inorganic and organometallic compounds are also useful as drugs (e.g., lithium- and platinum-based agents such as lithium carbonate and cisplatin, as well as gallium).

In particular, medicinal chemistry in its most common practice -- focusing on small organic molecules -- encompasses synthetic organic chemistry and aspects of natural products and computational chemistry in close combination with chemical biology, enzymology and structural biology, together aiming at the discovery and development of new therapeutic agents. Practically speaking, it involves chemical aspects of identification, and then systematic, thorough synthetic alteration of new chemical entities to make them suitable for therapeutic use. It includes synthetic and computational aspects of the study of existing drugs and agents in development in relation to their bioactivities (biological activities and properties), i.e., understanding their structure-activity relationships (SAR). Pharmaceutical chemistry is focused on quality aspects of medicines and aims to assure fitness for purpose of medicinal products.

At the biological interface, medicinal chemistry combines to form a set of highly interdisciplinary sciences, setting its organic, physical, and computational emphases alongside biological areas such as biochemistry, molecular biology, pharmacognosy and pharmacology, toxicology and veterinary and human medicine; these, with project management, statistics, and pharmaceutical business practices, systematically oversee altering identified chemical agents such that after pharmaceutical formulation they are safe and efficacious, and therefore suitable for use in the treatment of disease.

Discovery is the identification of novel active chemical compounds, often called "hits", which are typically found by assay of compounds for a desired biological activity. Initial hits can come from repurposing existing agents toward new pathologic processes and from observations of the biologic effects of new or existing natural products from bacteria, fungi, plants, etc. In addition, hits also routinely originate from structural observations of small-molecule "fragments" bound to therapeutic targets (enzymes, receptors, etc.), where the fragments serve as starting points to develop more chemically complex forms by synthesis. Finally, hits also regularly originate from en masse testing of chemical compounds against biological targets, where the compounds may be from novel synthetic chemical libraries known to have particular properties (kinase inhibitory activity, diversity or drug-likeness, etc.), or from historic chemical compound collections or libraries created through combinatorial chemistry.
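As a concrete illustration of the "drug-likeness" screening mentioned above, here is a minimal, hypothetical sketch using the open-source RDKit toolkit; the article itself names no software, so the choice of library, the function name, and the example molecule are all assumptions. It applies Lipinski's rule of five, one widely used heuristic for whether a small molecule has physicochemical properties consistent with oral drugs.

```python
# A minimal sketch (not production triage code) of a "drug-likeness" filter,
# assuming the open-source RDKit cheminformatics toolkit is installed.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def lipinski_violations(smiles: str) -> int:
    """Count violations of Lipinski's rule of five for a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles!r}")
    return sum([
        Descriptors.MolWt(mol) > 500,      # molecular weight over 500 Da
        Descriptors.MolLogP(mol) > 5,      # calculated lipophilicity (logP)
        Lipinski.NumHDonors(mol) > 5,      # hydrogen-bond donors
        Lipinski.NumHAcceptors(mol) > 10,  # hydrogen-bond acceptors
    ])

# Atorvastatin, one of the small molecules named above; the SMILES is
# transcribed from public databases. Marketed drugs can and do violate
# the rule (atorvastatin exceeds the weight cutoff), which is exactly
# why such filters serve as rough triage rather than hard gates.
atorvastatin = "CC(C)c1c(C(=O)Nc2ccccc2)c(-c2ccccc2)c(-c2ccc(F)cc2)n1CCC(O)CC(O)CC(=O)O"
print(lipinski_violations(atorvastatin))
```

In practice, a filter like this would be run across an entire compound library alongside assay data, before chemists invest in synthetic follow-up; it is a heuristic, not a gate.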
While a number of approaches toward the identification and development of hits exist, the most successful techniques are based on chemical and biological intuition developed in team environments through years of rigorous practice aimed solely at discovering new therapeutic agents.

Further chemistry and analysis is necessary, first to triage compounds that do not provide series displaying suitable SAR and chemical characteristics associated with long-term potential for development, then to improve the remaining hit series with regard to the desired primary activity, as well as secondary activities and physicochemical properties, such that the agent will be useful when administered in real patients. In this regard, chemical modifications can improve the recognition and binding geometries (pharmacophores) of the candidate compounds, and so their affinities for their targets, as well as improving the physicochemical properties of the molecule that underlie the necessary pharmacokinetic/pharmacodynamic (PK/PD) and toxicologic profiles (stability toward metabolic degradation; lack of geno-, hepatic, and cardiac toxicities; etc.), such that the chemical compound or biologic is suitable for introduction into animal and human studies.

The final synthetic chemistry stages involve the production of a lead compound in suitable quantity and quality to allow large-scale animal testing and then human clinical trials. This involves the optimization of the synthetic route for bulk industrial production and the discovery of the most suitable drug formulation. The former of these is still the bailiwick of medicinal chemistry; the latter brings in the specialization of formulation science (with its components of physical and polymer chemistry and materials science). The synthetic chemistry specialization in medicinal chemistry aimed at adaptation and optimization of the synthetic route for industrial-scale syntheses of hundreds of kilograms or more is termed process synthesis, and it involves thorough knowledge of acceptable synthetic practice in the context of large-scale reactions (reaction thermodynamics, economics, safety, etc.). Critical at this stage is the transition to more stringent GMP requirements for material sourcing, handling, and chemistry.

Medicinal chemistry is by nature an interdisciplinary science, and practitioners have a strong background in organic chemistry, which must eventually be coupled with a broad understanding of biological concepts related to cellular drug targets. Scientists in medicinal chemistry are principally industrial scientists (but see below), working as part of an interdisciplinary team that uses their chemistry abilities, especially their synthetic abilities, to apply chemical principles to the design of effective therapeutic agents. The training is intense, with practitioners often required to attain a four-year bachelor's degree followed by a four- to six-year Ph.D. in organic chemistry. Most training regimens also include a postdoctoral fellowship of two or more years after receiving a Ph.D. in chemistry, making the total length of training 10-12 years of higher education. However, employment opportunities at the master's level also exist in the pharmaceutical industry, and at that and the Ph.D. level there are further opportunities for employment in academia and government. Many medicinal chemists, particularly in academia and research, also earn a Pharm.D. (doctor of pharmacy).
Some of these PharmD/PhD researchers are RPhs (registered pharmacists). Graduate-level programs in medicinal chemistry can be found in traditional medicinal chemistry or pharmaceutical sciences departments, both of which are traditionally associated with schools of pharmacy, and in some chemistry departments. However, the majority of working medicinal chemists have graduate degrees (MS, but especially Ph.D.) in organic chemistry rather than medicinal chemistry, and the preponderance of positions are in discovery, where the net is necessarily cast widest and most broad synthetic activity occurs. In the discovery of small-molecule therapeutics, there is a clear emphasis on training that provides breadth of synthetic experience and "pace" of bench operations (e.g., favouring individuals with pure synthetic organic chemistry and natural products synthesis experience from Ph.D. and postdoctoral positions). In the medicinal chemistry specialty areas associated with the design and synthesis of chemical libraries or the execution of process chemistry aimed at viable commercial syntheses (areas generally with fewer opportunities), training paths are often much more varied (e.g., including focused training in physical organic chemistry, library-related syntheses, etc.).

As such, most entry-level workers in medicinal chemistry, especially in the U.S., do not have formal training in medicinal chemistry but receive the necessary medicinal chemistry and pharmacologic background after employment, at entry into their work in a pharmaceutical company, where the company provides its particular understanding or model of "medichem" training through active involvement in practical synthesis on therapeutic projects. (The same is somewhat true of computational medicinal chemistry specialties, but not to the same degree as in synthetic areas.)
who is the governor of dadra and nagar haveli
List of current Indian governors - wikipedia In the Republic of India, a governor is the constitutional head of each of the twenty - nine states. The governor is appointed by the President of India for a term of five years, and holds office at the President 's pleasure. The governor is de jure head of the state government; all its executive actions are taken in the governor 's name. However, the governor must act on the advice of the popularly elected council of ministers, headed by the chief minister, which thus holds de facto executive authority at the state level. The Constitution of India also empowers the governor to act upon his or her own discretion, such as the ability to appoint or dismiss a ministry, recommend President 's rule, or reserve bills for the President 's assent. Over the years, the exercise of these discretionary powers has given rise to conflict between the elected chief minister and the central government - appointed governor. The union territories of Andaman and Nicobar, Delhi and Puducherry are headed by lieutenant - governors. Since Delhi and Puducherry have a measure of self - government with an elected legislature and council of ministers, the role of the lieutenant - governor there is a mostly ceremonial one, akin to that of a state 's governor. The other four union territories -- Chandigarh; Dadra and Nagar Haveli; Daman and Diu; and Lakshadweep -- are governed by an administrator. Unlike the administrators of other territories, who are drawn from the Indian Administrative Service, since 1985 the Governor of Punjab has also been the ex-officio Administrator of Chandigarh.
when was the first volvo truck introduced in the united states
Volvo Trucks - Wikipedia Volvo Trucks (Swedish: Volvo Lastvagnar) (stylized as VOLVO) is a global truck manufacturer based in Gothenburg, Sweden, owned by AB Volvo. In 2016, it was the world 's second largest manufacturer of heavy - duty trucks. Volvo Group was reorganized on January 1, 2012 and as a part of the process, Volvo Trucks ceased to be a separate company and was instead incorporated into Volvo Group Trucks, with Volvo 's other truck brands, Renault Trucks, Mack Trucks and UD Trucks. The first Volvo truck rolled off the production lines in 1928, and in 2016 Volvo Trucks employed more than 52,000 people around the world. With global headquarters in Gothenburg, Sweden, Volvo manufactures and assembles its trucks in eight wholly owned assembly plants and nine factories owned by local interests. Volvo Trucks produces and sells over 190,000 units annually. When Volvo manufactured its first automobiles in 1927, the first truck was already on the drawing table. In early 1928, the LV series 1 was presented to the public. Though by modern standards it was merely a light truck, it was an immediate success and 500 units were sold before the summer. It had a 2.0 L 4 - cylinder engine rated at 28 hp (21 kW). Volvo cabs are manufactured in the north of Sweden in Umeå and in Ghent, Belgium, while the engines are made in the central town of Skövde. Among some smaller facilities, Volvo has assembly plants in Sweden (Gothenburg - also the Head Office), Belgium, USA, Brazil, South Africa, Australia, China, India and Russia. Some of the smaller factories are jointly owned. Its main parts distribution centre is located in Ghent, Belgium. The sales side, with its corresponding offices and dealers, is split into seven sales areas -- Latin America, North America, Europe North, Europe South, Africa / Middle East, Japan and Asia / Oceania. The purchase of White Trucks in 1980 turned out to be a very good step for Volvo. Suddenly, Volvo 's trucks could be marketed throughout the United States, in parallel with a tailor - made programme of modern American heavy - duty trucks. When Volvo took over the truck assets of White, the White / Autocar / Western Star product programme consisted of the Road Boss (conventional) truck, the Road Commander 2 (cab - over engine) truck, the low - built Road Xpeditor 2 (cab - over engine) truck, the Autocar DC (heavy duty construction) truck, the Road Constructor 2 (construction) truck and the Western Star (long - distance conventional and cab - over engine) trucks. During the 1980s, improved versions of these trucks were introduced, including the Integral Sleeper (1982) long - distance truck, the Conventional (1983) upgraded conventional truck, the Autocar DS (1984) successor to the Road Constructor 2, the Integral Tall Sleeper (1985) truck which was the ' Globetrotter ' of America, the aerodynamic ' Aero ' (1987) truck, the Autocar (1987) construction truck with the option of using an integrated driveline (engine + gearbox + rear axle) designed and produced by Volvo and the short conventional WG (1988) truck. Throughout the 1980s, Volvo produced White and Autocar, as well as distributing European - made Volvos. The White high cab - over-engine model was also badged a Western Star and sold through that company 's Canadian dealer network.
On August 16, 1986, General Motors announced that heavy duty truck manufacturing in Pontiac, Michigan would be discontinued and that GM 's American and Canadian large truck operations would be joined with the Greensboro, North Carolina - based Volvo White Truck Corporation by 1988. The new company, based in Greensboro, was called Volvo GM Heavy Truck Corporation and began marketing trucks under the Volvo White GMC badge, although all of the legacy GMC product lines had been discontinued by 1990. In 1997, the Volvo White GMC name was discontinued, and all models were badged either Volvo or Autocar, which that year celebrated its 100th year in operation. In a merger announced April 25, 2000, Volvo acquired Renault Véhicules Industriels, including Mack Trucks in North America. The deal made Volvo Group the second largest truck manufacturer in the world, and the largest in Europe. In order to secure the approval of the authorities to proceed with the merger, Volvo had to agree to divest itself of its low - cab - over (LCOE) models, known as the Xpeditor range, because the combination of this product with the Mack MR and LE series would have dominated the refuse markets in which these vehicles are predominantly used. Volvo re-entered the LCF market in 2007 with the purchase of UD Nissan Diesel. As a result of this agreement, Volvo divested the Xpeditor product and the rights to the Autocar trademark in 2001. The purchaser was Grand Vehicle Works LLC, a private holding company headquartered in Chicago that also owned auto industry pioneer Union City Body Company (founded in 1898) and Workhorse Custom Chassis. Shortly before Autocar was taken over by GVW Group, Volvo phased out the remaining Autocar conventional products. In 2009, Volvo began to relocate the operations of its Mack Trucks subsidiary to Greensboro, where the North American operations of Volvo Trucks have been headquartered. Today, Volvo produces class 8 Volvo trucks at its Dublin, Virginia plant and class 8 Mack truck models in Macungie, Pennsylvania. Affiliate Volvo Powertrain produces engines and transmissions at its Hagerstown, Maryland, facility, for use exclusively in the North American market. Volvo Trucks are exported to and sold by more than 1800 dealers in more than 75 countries. Volvo tried to establish itself in Argentina on two occasions. The first, in 1959, was carried out in partnership with the local company Conarg. The truck production was a failure, but some models of motor graders with Volvo engines were made by Conarg (under licence of Bolinder Munktell). On the second occasion, in 1972, Volvo Sudamericana SACI submitted to the Ministry of Industry and Mining a project for an industrial complex, consisting of an automotive terminal plant for heavy truck chassis with cab and chassis for long - distance buses, a plant for trailers, and a third plant for coaches. In the end, Volvo lost the tender, which went to Scania. As part of adapting to the new European Union Euro 6 engine emission requirements, Volvo Trucks renewed its truck range in 2012 and 2013. The biggest launch was the new Volvo FH in September 2012. The rest of the range was renewed in the spring of 2013.
who played tara on law and order svu
Maggie Siff - wikipedia Maggie Siff (born June 21, 1974) is an American actress. She is known for her television roles as department store heiress Rachel Menken Katz on the AMC drama Mad Men, Tara Knowles on the FX drama Sons of Anarchy, and psychiatrist Wendy Rhoades on the Showtime series Billions. She has had roles in the films Push (2009) as Teresa Stowe and Leaves of Grass (2010) as Rabbi Renannah Zimmerman. She stars in the indie film A Woman, a Part (2016) and has a minor role in the drama film One Percent More Humid (2017). Siff was born in The Bronx, New York City. Her father is Jewish (from a family from Russia), and her mother is of Irish and Swedish descent; she has said that she feels "culturally Jewish ''. She is an alumna of The Bronx High School of Science and of Bryn Mawr College, where she majored in English and graduated in 1996. She later completed an M.F.A. in acting at New York University 's Tisch School of the Arts Graduate Acting Program. Siff worked extensively in regional theater before acting in television. She won a Barrymore Award for Excellence in Theater in 1998 for her work in Henrik Ibsen 's Ghosts at Lantern Theater Company. Siff started appearing in television series in 2004. She appeared as an Alcoholics Anonymous speaker during an episode of Rescue Me in Season 2. She also had roles on Law & Order: Special Victims Unit, Grey 's Anatomy, and Law & Order. She played Rachel Menken Katz on the series Mad Men from 2007 to 2008, which earned her a nomination, along with the rest of the cast, for a Screen Actors Guild Award for Outstanding Performance by an Ensemble in a Drama Series. She also appeared in Nip / Tuck during that time, before being cast as Dr. Tara Knowles on Sons of Anarchy in 2008. In the closing scene of the Sons of Anarchy episode "John 8: 32, '' Siff sang the song "Lullaby for a Soldier (Arms of the Angels). '' The song was used in the first trailer for the upcoming 2018 film Alita: Battle Angel. She has appeared in such films as Then She Found Me (2007) as Lily, Push as a psychic surgeon (called a Stitch) named Teresa Stowe, sent to help Nick (played by Chris Evans), Funny People (2009) as Rachel, Leaves of Grass (2010) as Rabbi Renannah Zimmerman, and Concussion (2013) as Sam Bennet. She appears in the 2016 Showtime series Billions. She starred in the independent film A Woman, a Part (2016) as well as One Percent More Humid (2017). In October 2013, Siff announced that she was expecting her first child with her husband, Paul Ratliff, whom she married in 2012. Siff gave birth to a daughter, Lucy, in April 2014.
who wrote the theme to the odd couple
Neal Hefti - wikipedia Neal Paul Hefti (October 29, 1922 -- October 11, 2008) was an American jazz trumpeter, composer, and arranger. He wrote music for The Odd Couple movie and TV series and for the Batman TV series. He began arranging professionally in his teens, when he wrote charts for Nat Towles. He became a prominent composer and arranger while playing trumpet for Woody Herman; while working for Herman he provided new arrangements for "Woodchopper 's Ball '' and "Blowin ' Up a Storm '' and composed "The Good Earth '' and "Wild Root ''. After leaving Herman 's band in 1946, Hefti concentrated on arranging and composing, although he occasionally led his own bands. He is especially known for his charts for Count Basie such as "Li'l Darlin ' '' and "Cute ''. Neal Paul Hefti was born October 29, 1922, to an impoverished family in Hastings, Nebraska. As a young child, he remembered his family relying on charity during the holidays. He started playing the trumpet in school at the age of eleven, and by high school was spending his summer vacations playing in local territory bands to help his family make ends meet. Growing up in, and near, a big city like Omaha, Hefti was exposed to some of the great bands and trumpeters of the Southwest territory bands. He also was able to see some of the virtuoso jazz musicians from New York that came through Omaha on tour. His early influences all came from the North Omaha scene. He said, We 'd see Basie in town, and I was impressed by Harry Edison and Buck Clayton, being a trumpet player. And I would say I was impressed by Dizzy Gillespie when he was with Cab Calloway. I was impressed by those three trumpet players of the people I saw in person... I thought Harry Edison and Dizzy Gillespie were the most unique of the trumpet players I heard. These experiences seeing Gillespie and Basie play in Omaha foreshadowed his period in New York watching Gillespie play and develop the music of bebop on 52nd Street and his later involvement with Count Basie 's band. In 1939, while still a junior at North High in Omaha, he got his start in the music industry by writing arrangements of vocal ballads for local Mickey Mouse bands, like the Nat Towles band. Harold Johnson recalled that Hefti 's first scores for that band were "Swingin ' On Lennox Avenue '' and "More Than You Know, '' as well as a very popular arrangement of "Anchors Aweigh ''. Some material that he penned in high school also was used by the Earl Hines band. Two days before his high school graduation ceremony in 1941, he got an offer to go on tour with the Dick Barry band, so he traveled with them to New Jersey. He quickly was fired from the band after two gigs because he could n't sight - read music well enough. Stranded in New Jersey because he did n't have enough money to get home to Nebraska, he finally joined Bob Astor 's band. Shelly Manne, drummer with Bob Astor at the time, recalled that even then Hefti 's writing skills were quite impressive: We roomed together. And at night we did n't have nothing to do, and we were up at this place -- Budd Lake. He said, "What are we going to do tonight? '' I said, "Why do n't you write a chart for tomorrow? '' Neal was so great that he 'd just take out the music paper, no score, (hums) -- trumpet part, (hums) -- trumpet part, (hums) -- trombone part, (hums), and you 'd play it the next day. It was the end. Cooking charts. I never forget, I could n't believe it. I kept watching him. It was fantastic. 
Hefti would n't focus on arranging seriously for a few more years. As a member of Astor 's band, he concentrated on playing trumpet. After an injury forced him to leave Bob Astor, he stayed a while in New York. He played with Bobby Byrne in late 1942, then with Charlie Barnet for whom he wrote the classic arrangement of "Skyliner ''. During his time in New York, he hung around the clubs on 52nd Street, listening to bebop trumpet master Dizzy Gillespie and other musicians, and immersing himself in the new music. Since he did n't have the money to actually go into the clubs, he would sneak into the kitchen and hang out with the bands, and he got to know many of the great beboppers. He finally left New York for a while to play with the Les Lieber rhumba band in Cuba. When he returned from Cuba in 1943, he joined the Charlie Spivak band, which led him out to California for the first time, to make a band picture. Hefti fell in love with California. After making the picture in Los Angeles, he dropped out of the Spivak band to stay in California. After playing with Horace Heidt in Los Angeles for a few months in 1944, Hefti met up with Woody Herman who was out in California making a band picture. Hefti then joined Herman 's progressive First Herd band as a trumpeter. The Herman band was different from any band that he had played with before. He referred to it as his first experience with a real jazz band. He said: I would say that I got into jazz when I got into Woody Herman 's band because that band was sorta jazz - oriented. They had records. It was the first band I ever joined where the musicians carried records on the road... Duke Ellington records... Woody Herman discs (and) Charlie Barnet V - Discs... That 's the first time I sort of got into jazz. The first time I sort of felt that I was anything remotely connected with jazz. Even though he had been playing with swing bands and other popular music bands for five years, this was the first time he had been immersed in the music of Duke Ellington, and this was the first music that really felt like jazz to him. First Herd was one of the first big bands to really embrace bebop. They incorporated the use of many bebop ideas in their music. As part of the ensemble, Hefti was instrumental in this development, drawing from his experiences in New York and his respect for Gillespie, who had his own bebop big band. Chubby Jackson, First Herd 's bassist, said Neal started to write some of his ensembles with some of the figures that come from that early bebop thing. We were really one of the first bands outside of Dizzy 's big band that flavored bebop into the big band -- different tonal quality and rhythms, and the drum feeling started changing, and that I think was really the beginning of it... I fell in love with it, and I finally got into playing it with the big band because Neal had it down. Neal would write some beautiful things along those patterns. During these years with Herman 's band, as they started to turn more and more towards bop ideas, Hefti started to turn more of his attention and effort to writing, at which he quickly excelled. He composed and arranged some of First Herd 's most popular recordings, including two of the band 's finest instrumentals: "Wild Root '' and "The Good Earth ''. He contributed to the band a refinement of bop trumpet style that reflected his experience with Byrne, Barnet, and Spivak, as well as an unusually imaginative mind, essentially restless on the trumpet, but beautifully grounded on manuscript paper.
He also wrote band favorites such as "Apple Honey '' and "Blowin ' Up a Storm ''. His first - hand experience in New York, hanging around 52nd Street and listening to the great Dizzy Gillespie, became an important resource to the whole band. His bebop composition work also started to attract outside attention from other composers, including the interest of neo-classicist Igor Stravinsky, who later wrote "Ebony Concerto '' for the band. What first attracted Stravinsky to Herman was the five trumpet unison on "Caldonia, '' which mirrored the new music of Gillespie... First it had been (Neal Hefti 's) solo on Herman 's "Woodchopper 's Ball '', then it became the property of the whole section, and finally, in this set form, it was made part of (Hefti 's) arrangement of "Caldonia. '' Hefti 's work successfully drew from many sources. As composer, arranger, and as a crucial part of the Herman ensemble, he provided the Herman band with a solid base which led to their popularity and mastery of the big band bebop style. While playing with the First Herd, Neal married Herman 's vocalist, Frances Wayne. Playing with the band was very enjoyable for Hefti, which made it doubly hard for him to leave when he wanted to pursue arranging and composing full - time. Talking about Herman 's band, Hefti said, The band was a lot of fun. I think there was great rapport between the people in it. And none of us wanted to leave. We were always getting sort of offers from other bands for much more money than we were making with Woody, and it was always like if you left, you were a rat. You were really letting down the team. The Heftis finally left Woody Herman in late 1946, and Neal began freelance arranging. He wrote charts for Buddy Rich 's band, and the ill - fated Billy Butterfield band. He wrote a few arrangements and compositions for Georgie Auld 's band, including the outstanding composition "Mo Mo. '' He joined the short - lived Charlie Ventura band as both sideman and arranger (arranging popular songs such as "How High the Moon ''). He also arranged for the best of Harry James 's bands in the late forties. One of the serendipitous highlights of his work in the late forties was the recording of his Cuban - influenced song "Repetition '' using a big band and string orchestra, for an anthology album called The Jazz Scene intended to showcase the best jazz artists around at that time. What saved this otherwise uncharacteristically bland arrangement was the featuring of Charlie Parker as soloist. Hefti had written the piece with no soloist in mind, but Parker was in the studio while Hefti was recording, heard the arrangement, and asked to be included as soloist. In the liner notes to the album, producer Norman Granz wrote: Parker actually plays on top of the original arrangement; that it jells as well as it does is a tribute both to the flexible arrangement of Hefti and the inventive genius of Parker to adapt himself to any musical surrounding. In 1950, Hefti started arranging for Count Basie and what became known as "The New Testament '' band. According to Hefti in a Billboard interview, Basie wanted to develop a stage band that could appear on The Ed Sullivan Show. Although the New Testament band never became a show band in that sense, it was much more of an ensemble band than Basie 's previous orchestras. Hefti 's tight, well - crafted arrangements resulted in a new band identity that was maintained for more than twenty years.
In his autobiography, Count Basie recalls their first meeting and the first compositions that Hefti provided the new band: Neal came by, and we had a talk, and he said he 'd just like to put something in the book. Then he came back with "Little Pony '' and then "Sure Thing, '' "Why Not? '' and "Fawncy Meeting You, '' and we ran them down, and that 's how we got married. Hefti 's compositions and arrangements featured and recorded by the orchestra established the distinctive, tighter, modern sound of the later Basie. His work was popular with both the band and with audiences. Basie said, "There is something of his on each one of those first albums of that new band. '' One of the new Basie band 's most popular records was titled Basie and subtitled "E = MC2 = Count Basie Orchestra + Neal Hefti Arrangements, '' now more commonly referred to as Atomic Basie, an album featuring eleven songs composed and arranged by Hefti, including the now - standard ballad "Lil ' Darlin '' and "Splanky. '' Also on the album were "The Kid from Red Bank '' featuring a gloriously sparse piano solo that was Basie 's hallmark, and other songs that quickly became Basie favorites, such as "Flight of the Foo Birds '' with Eddie Lockjaw Davis ' flying tenor solo, "Fantail '' with Frank Wess 's soaring alto solo, and the masterpiece ensemble lines of "Teddy the Toad ''. These pieces are evidence of both Hefti 's masterful hand, and the strong ensemble that Count Basie had put together. During the Fifties, Hefti did n't get as much respect as a musician and band leader as he did as composer and arranger. In a 1955 interview, Miles Davis said "if it were n't for Neal Hefti, the Basie band would n't sound as good as it does. But Neal 's band ca n't play those same arrangements nearly as well. '' This disparity is not so much a reflection of Hefti 's ability (or lack thereof) as a musician, as it is a reflection of his focus as a writer. In the liner notes to Atomic Basie, critic Barry Ulanov says: In a presentation of the Count Basie band notable of its justness, for its attention to all the rich instrumental talent and all the high good taste of this band -- in this presentation, not the least of the achievements is the evenness of the manuscript. Neal Hefti has matched -- figure for figure, note for note - blower -- his talent to the Basie band 's, and it comes out, as it should, Basie. Much the same way that the influential Duke Ellington matched his scores to the unique abilities of his performers, Hefti was able to take advantage of the same kind of ' fine - tuning ' to bring out the best of the talents of the Basie band. Therefore, it should come as no surprise that when the same charts are played by a different band, even the composer 's own, that the result is not as strong. As composer, Hefti garnered many accolades. In addition to Ulanov 's praise, Hefti won two Grammy awards for his composition work on Atomic Basie including "Li'l Darlin, '' "Splanky, '' and "Teddy the Toad. '' The success of the album had Basie and Hefti in the studio six months later making another album. This second album was also very successful for Basie. Basie recalled: That is the one that came out under the title of "Basie Plays Hefti ''. All the tunes were very musical. That 's the way Neal 's things were, and those guys in that band always had something to put with whatever you laid in front of them. 
Hefti 's influence on the Basie sound was so successful, his writing for the band so strong, that Basie used his arranging talents even when recording standard jazz tunes with the likes of Frank Sinatra. Basie said, So we went on out to Los Angeles and did ten tunes in two four - hour sessions (with Frank Sinatra). All of those tunes were standards, which I 'm pretty sure he had recorded before (and had hits on). But this time they had been arranged by Neal Hefti with our instrumentation and voicing in mind. Again, by matching the individual parts of the arrangements to the unique abilities of Basie 's band, Hefti was able to highlight the best of their talents, and make the most of the ensemble. Overall, Basie was very impressed with Hefti 's charts, but was perhaps too proud to admit the extent of his influence: I think Neal did a lot of marvelous things for us, because even though what he did was a different thing and not quite the style but sort of a different sound, I think it was quite musical. Outside of his work with the new Basie band, Hefti also led a big band of his own during the fifties. In 1951, one of these bands featured his wife Frances on vocals. They recorded and toured off and on with this and other incarnations of this band throughout the 1950s. Although his own band did not attain the same level of success as the famous bands he arranged for, he did receive a Grammy nomination for his own album Jazz Pops (1962), which included recordings of "Li'l Darlin, '' "Cute, '' and "Coral Reef ''. It was his last work in the Jazz idiom. Later in the 1950s he finally abandoned trumpet playing altogether to concentrate on scoring and conducting. He had steady work conducting big bands, backing singers in the studio during recording sessions, and appearing on the television shows of Arthur Godfrey, Kate Smith, and others. He moved back to his beloved California in the early 1960s. During this time he began working for the Hollywood film industry, and he enjoyed tremendous popular success writing music for film and television. He wrote much background and theme music for motion pictures, including the films Sex and the Single Girl, How to Murder Your Wife (1965), Synanon, Boeing Boeing (1965), Lord Love a Duck (1966), Duel at Diablo (1966), Barefoot in the Park (1967), The Odd Couple (1968), and Harlow (1965), for which he received two Grammy nominations for the song "Girl Talk ''. While most of his compositions during this period were geared to the demands of the medium and the directors, there were many moments when he was able to infuse his work with echoes of his jazz heritage. In 1961 Hefti joined with Frank Sinatra on his Sinatra and Swingin ' Brass album where Hefti was credited as arranger and conductor of the album 's 12 cuts. He also wrote background and theme music for television shows, including Batman and The Odd Couple. He received three Grammy nominations for his television work and received one award for his Batman television score. His Batman title theme, a simple cyclic twelve - bar blues - based theme, became a Top 10 single for The Marketts and later for Hefti himself. (Because the only actual lyric is simply a constantly repeated "Batman! '', one of the vocalists famously amended the sheet music to read "Word and Music by Neal Hefti. '') His theme for The Odd Couple movie was reprised as part of his score for the television series of the early 1970s, as well as in a jazzier version for the 2015 update. 
He received two Grammy nominations for his work on The Odd Couple television series. Throughout these years and into the 1970s, Hefti periodically formed big bands for club, concert, or record dates. Hefti died of throat cancer on October 11, 2008, at his home in Toluca Lake, California, at the age of 85. He was subsequently interred at Forest Lawn - Hollywood Hills Cemetery. His grave can be found at the Court of Remembrance (Sanctuary of Enduring Protection) in crypt No. 2763. The epitaph on the front of the crypt reads "Forever In Tune ''.
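As a side note on the structure mentioned above, the "cyclic twelve - bar blues '' form underlying the Batman theme follows a standard harmonic pattern. The minimal Python sketch below shows that generic pattern; it is the textbook progression, not a transcription of Hefti 's actual chart, and the key and chord qualities in the example are purely illustrative.

```python
# Generic twelve-bar blues form, one Roman-numeral chord per bar.
# This is the standard textbook pattern, not a transcription of the
# actual Batman theme.
TWELVE_BAR_BLUES = [
    "I", "I", "I", "I",    # bars 1-4: tonic
    "IV", "IV", "I", "I",  # bars 5-8: subdominant, then back to tonic
    "V", "IV", "I", "I",   # bars 9-12: turnaround
]

def realize(progression, chords):
    """Map Roman numerals onto concrete chords for a chosen key."""
    return [chords[numeral] for numeral in progression]

# Illustrative realization in C, using the dominant sevenths idiomatic to blues.
print(realize(TWELVE_BAR_BLUES, {"I": "C7", "IV": "F7", "V": "G7"}))
```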
why and how did we once have a national speed limit
Speed limits in the United States - wikipedia Speed limits in the United States are set by each state or territory. States also allow counties and municipalities to enact typically lower limits. Highway speed limits can range from an urban low of 25 mph (40 km / h) to a rural high of 85 mph (137 km / h). Speed limits are typically posted in increments of five miles per hour (8 km / h). Some states have lower limits for trucks and at night, and occasionally there are minimum speed limits. The highest speed limits are generally 70 mph (113 km / h) on the West Coast and the inland eastern states, 75 -- 80 mph (121 -- 129 km / h) in inland western states, along with Arkansas and Louisiana, and 65 -- 70 mph (105 -- 113 km / h) on the Eastern Seaboard. Alaska, Connecticut, Delaware, Massachusetts, New Jersey, New York, Puerto Rico, Rhode Island, and Vermont have a maximum limit of 65 mph (105 km / h), and Hawaii has a maximum limit of 60 mph (97 km / h). The District of Columbia and the U.S. Virgin Islands have a maximum speed limit of 55 mph (89 km / h). Guam, the Northern Mariana Islands and American Samoa have speed limits of 45 mph (72 km / h). Two territories in the U.S. Minor Outlying Islands have their own speed limits: 40 mph (64 km / h) in Wake Island, and 15 mph (24 km / h) in Midway Atoll. Unusual for any state east of the Mississippi River, much of I - 95 in Maine north of Bangor allows up to 75 mph (121 km / h), and the same is true for up to 600 miles of freeways in Michigan. Portions of the Idaho, Montana, Nevada, South Dakota, Texas, Utah, and Wyoming road networks have 80 mph (129 km / h) posted limits. The highest posted speed limit in the entire country can be found on the Texas State Highway 130, and it is 85 mph (137 km / h). For 13 years (January 1974 -- April 1987), federal law withheld Federal highway trust funds from states that had speed limits above 55 mph (89 km / h). From April 1987 through December 8, 1995, an amended federal law disincentivized speed limits above 65 mph (105 km / h). This table contains the most usual posted daytime speed limits, in miles per hour, on typical roads in each category. The values shown are not necessarily the fastest or slowest. They usually, but not always, indicate statutory speed limits. Some states and territories have lower truck speed limits applicable to heavy trucks. If present, they are usually only on freeways or other high - speed roadways. Washington allows for speeds up to 75 mph (121 km / h), but the highest posted signs are 70 mph (110 km / h). Mississippi allows speeds up to 80 mph (129 km / h) on toll roads, but no such roads exist. Oklahoma removed the maximum speed of 75 from its laws, though no road has been posted higher than 75. Freeway: Interstate Highway or other state or U.S. Route built to Interstate standards. Divided rural: State or U.S. route, generally with four or more lanes, not built to Interstate standards, but with a median or other divider separating directions of travel. Undivided rural: County, State, or U.S. route, generally with two to four lanes, with no separator between directions of travel. Residential Street / residential: Residential streets, business districts, or school zones.
One of the first speed limits in what would become the United States (at the time, still a British colony) was set in Boston in 1701 by the board of selectmen (similar to a city council): Ordered, That no person whatsoever Shall at any time hereafter ride or drive a gallop or other extream pace within any of the Streets, lanes, or alleys in this Town on penalty of forfeiting three Shillings for every such offence, and it may be lawfull for any of the Inhabitants of this Town to make Stop of such horse or Rider untill the name of the offender be known in order to prosecution. In response to the 1973 oil crisis, Congress enacted the National Maximum Speed Law that created the universal 55 miles per hour (89 km / h) speed limit. Whether this reduced gasoline consumption is debated and the impact on safety is unclear; studies and opinions of safety advocates are mixed. The law was widely disregarded by motorists, even after the national maximum was increased to 65 miles per hour (105 km / h) on certain roads in 1987 and 1988. In 1995, the law was repealed, returning the choice of speed limit to each state. Upon that repeal, there was effectively no speed limit on Montana 's interstates for daytime driving (the nighttime limit was set at 65 mph) from 1995 to 1999, when the state Supreme Court threw out the law as "unconstitutionally vague. '' The state legislature enacted a 75 mph daytime limit in May 1999. As of May 15, 2017, 41 states have maximum speed limits of 70 mph or higher. 18 of those states have speed limits of 75 mph or higher, and 7 of those 18 have 80 mph speed limits. In addition to the legally defined maximum speed, minimum speed limits may be applicable. Occasionally, there are default minimum speed limits for certain types of roads, generally freeways. Comparable to the common basic speed rule, most jurisdictions also have laws prohibiting speeds so low they are dangerous or impede the normal and reasonable flow of traffic. Some jurisdictions set lower speed limits that are applicable only to large commercial vehicles like heavy trucks and buses. While they are called "truck speed limits '', they generally do not apply to light trucks. A 1987 study said that crash involvement significantly increases when trucks drive much slower than passenger vehicles, suggesting that the difference in speed between passenger vehicles and slower trucks could cause crashes that otherwise may not happen. In a review of available research, the Transportation Research Board said "(no) conclusive evidence could be found to support or reject the use of differential speed limits for passenger cars and heavy trucks '' and "a strong case can not be made on empirical grounds in support of or in opposition to differential speed limits ''. Another study said that two thirds (67 %) of truck / passenger car crashes are the fault of the passenger vehicle. The basic speed rule requires drivers to adjust speeds to the conditions. This is usually relied upon to regulate proper night speed reductions, if required.
Numeric night speed limits, which generally begin 30 minutes after sunset and end 30 minutes before sunrise, are occasionally used where, in theory, safety problems require a speed lower than what is self - selected by drivers. Some states create arbitrary night speed limits applicable to entire classes of roads. Until September 2011, Texas had a statutory 65 mph (105 km / h) night speed limit for all roads with a higher limit. Montana has a statutory 65 mph (105 km / h) night speed limit on all federal, state, and secondary roads except for Interstates. Traffic violations can be a lucrative income source for jurisdictions and insurance companies: an authority that sets and enforces speed limits, such as a state government, also regulates and taxes insurance companies, who in turn gain revenue from speeding enforcement. Furthermore, such an authority often requires "all '' drivers to have policies with those same companies, solidifying the association between the state and auto insurers. If a driver can not be covered under an insurance policy because of high risk, the state will assume that high risk for a greater monetary amount, resulting in even more revenue generation for the state. When a speed limit is used to generate revenue but has no safety justification, it is called a speed trap. The town of New Rome, Ohio was such a speed trap, where speeding tickets raised up to $400,000 per year to fund the police department of a 12 - acre village with 60 residents. Reduced speed limits are sometimes enacted for air quality reasons. The most prominent example is Texas ' environmental speed limits. A crash qualifies as speed - related in accordance with U.S. government rules if the driver was either exceeding the posted speed limit or driving too fast for conditions. Speeds in excess of speed limits account for most speed - related traffic citations; generally, "driving too fast for conditions '' tickets are issued only after an incident where the ticket issuer found tangible evidence of unreasonable speed, such as a crash. The "exceeding speed limits '' definition of speeding has been criticized. Variable speed limits offer some potential to reduce speed - related crashes. However, due to the high cost of implementation, they exist primarily on freeways. Furthermore, most speed - related crashes occur on local and collector roads, which generally have far lower speed limits and prevailing speeds than freeways. Most states have absolute speed limits, meaning that a speed in excess of the limit is illegal per se. However, some states have prima facie speed limits. This allows motorists to defend against a speeding charge if it can be proven that the speed was in fact reasonable and prudent. Speed limits in Texas, Utah, and Rhode Island are prima facie. Some other states have a hybrid system: speed limits may be prima facie up to a certain speed or only on certain roads. For example, speed limits in California up to 55 mph, or 65 mph on highways, are prima facie, and those at or above those speeds are absolute. A successful prima facie defense is rare. Not only does the burden of proof rest upon the accused, but a successful defense may involve expenses well in excess of the cost of a ticket, such as an expert witness. Furthermore, because prima facie defenses must be presented in a court, such a defense is difficult for out - of - town motorists.
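The hybrid California scheme described above can be expressed as a small decision rule. Here is a minimal Python sketch, assuming the simplified reading given in the text; real statutes draw many more distinctions, and the exact handling at the threshold follows the text 's "at or above ... are absolute '' wording.

```python
# Simplified sketch of California's hybrid scheme as summarized above:
# limits below 55 mph (65 mph on highways) are prima facie (rebuttable),
# while limits at or above those thresholds are absolute (per se).
def citation_type(posted_limit_mph: int, is_highway: bool) -> str:
    threshold = 65 if is_highway else 55
    if posted_limit_mph >= threshold:
        return "absolute (illegal per se)"
    return "prima facie (speed may be defended as reasonable and prudent)"

print(citation_type(45, is_highway=False))  # prima facie
print(citation_type(65, is_highway=True))   # absolute
```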
Metric speed limits are no longer included in the Federal Highway Administration 's Manual on Uniform Traffic Control Devices (MUTCD), which provides guidelines for speed limit signage, and therefore, new installations are not legal in the United States. Prior to 2009, a speed limit could be defined in kilometers per hour (km / h) as well as miles per hour (mph). The 2003 version of the MUTCD stated that "speed limits shown shall be in multiples of 10 km / h or 5 mph. '' If a speed limit sign indicated km / h, the number was circumscribed and "km / h '' was written below. Prior to 2003, metric speed limits were designated using the standard speed limit sign, usually with yellow supplemental "METRIC '' and "km / h '' plaques above it and below it, respectively. In 1995, the National Highway System Designation Act prohibited use of federal funds to finance new metric signage.
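The increment rule quoted above, multiples of 5 mph or of 10 km / h on the older metric signs, amounts to simple rounding when converting a limit between the two systems. A minimal sketch follows; the conversion factor is the exact definition of the mile, but rounding to the nearest increment is an assumption, since the quoted manual text only requires that posted values be multiples of those steps.

```python
# Snap a speed limit to the MUTCD sign increments quoted above:
# multiples of 5 mph, or multiples of 10 km/h for pre-2009 metric signs.
# Rounding to the *nearest* increment is an assumption; the manual only
# requires that posted values be multiples of these steps.
MPH_TO_KMH = 1.609344  # exact: one mile is defined as 1.609344 km

def nearest_increment(value: float, step: int) -> int:
    return int(round(value / step)) * step

def sign_values(limit_mph: float):
    """Return (mph sign value, km/h sign value) for a target limit."""
    mph_sign = nearest_increment(limit_mph, 5)
    kmh_sign = nearest_increment(limit_mph * MPH_TO_KMH, 10)
    return mph_sign, kmh_sign

print(sign_values(55))  # (55, 90): 55 mph is about 88.5 km/h
print(sign_values(65))  # (65, 100): 65 mph is about 104.6 km/h
```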
who started the stock market crash of 1929
Wall Street crash of 1929 - wikipedia The Wall Street Crash of 1929, also known as Black Tuesday (October 29), the Great Crash, or the Stock Market Crash of 1929, began on October 24, 1929 ("Black Thursday ''), and was the most devastating stock market crash in the history of the United States, when taking into consideration the full extent and duration of its after effects. The crash, which followed the London Stock Exchange 's crash of September, signalled the beginning of the 12 - year Great Depression that affected all Western industrialized countries. The Roaring Twenties, the decade that followed World War I and led to the crash, was a time of wealth and excess. Building on post-war optimism, rural Americans migrated to the cities in vast numbers throughout the decade with the hopes of finding a more prosperous life in the ever - growing expansion of America 's industrial sector. While the American cities prospered, the overproduction of agricultural produce created widespread financial despair among American farmers throughout the decade. This would later be blamed as one of the key factors that led to the 1929 stock market crash. Despite the dangers of speculation, many believed that the stock market would continue to rise forever. On March 25, 1929, after the Federal Reserve warned of excessive speculation, a mini crash occurred as investors started to sell stocks at a rapid pace, exposing the market 's shaky foundation. Two days later, banker Charles E. Mitchell announced his company the National City Bank would provide $25 million in credit to stop the market 's slide. Mitchell 's move brought a temporary halt to the financial crisis and call money declined from 20 to 8 percent. However, the American economy showed ominous signs of trouble: steel production declined, construction was sluggish, automobile sales went down, and consumers were building up high debts because of easy credit. Despite all these economic trouble signs and the market breaks in March and May 1929, stocks resumed their advance in June and the gains continued almost unabated until early September 1929 (the Dow Jones average gained more than 20 % between June and September). The market had been on a nine - year run that saw the Dow Jones Industrial Average increase in value tenfold, peaking at 381.17 on September 3, 1929. Shortly before the crash, economist Irving Fisher famously proclaimed, "Stock prices have reached what looks like a permanently high plateau. '' The optimism and financial gains of the great bull market were shaken after a well publicized early September prediction from financial expert Roger Babson that "a crash was coming ''. The initial September decline was thus called the "Babson Break '' in the press. This was the start of the Great Crash, although until the severe phase of the crash in October, many investors regarded the September "Babson Break '' as a "healthy correction '' and buying opportunity. On September 20, the London Stock Exchange crashed when top British investor Clarence Hatry and many of his associates were jailed for fraud and forgery. The London crash greatly weakened the optimism of American investment in markets overseas. In the days leading up to the crash, the market was severely unstable. Periods of selling and high volumes were interspersed with brief periods of rising prices and recovery. Selling intensified in mid-October. On October 24 ("Black Thursday ''), the market lost 11 percent of its value at the opening bell on very heavy trading. 
The huge volume meant that the report of prices on the ticker tape in brokerage offices around the nation was hours late, so investors had no idea what most stocks were actually trading for at that moment, increasing panic. Several leading Wall Street bankers met to find a solution to the panic and chaos on the trading floor. The meeting included Thomas W. Lamont, acting head of Morgan Bank; Albert Wiggin, head of the Chase National Bank; and Charles E. Mitchell, president of the National City Bank of New York. They chose Richard Whitney, vice president of the Exchange, to act on their behalf. With the bankers ' financial resources behind him, Whitney placed a bid to purchase a large block of shares in U.S. Steel at a price well above the current market. As traders watched, Whitney then placed similar bids on other "blue chip '' stocks. This tactic was similar to one that ended the Panic of 1907. It succeeded in halting the slide. The Dow Jones Industrial Average recovered, closing down only 6.38 points for the day. The rally continued on Friday, October 25, and the half - day session on Saturday the 26th but, unlike 1907, the respite was only temporary. Over the weekend, the events were covered by newspapers across the United States. On October 28, "Black Monday '', more investors facing margin calls decided to get out of the market, and the slide continued with a record loss in the Dow for the day of 38.33 points, or 13 %. The next day, "Black Tuesday '', October 29, 1929, about 16 million shares traded as the panic selling reached its peak. Some stocks actually had no buyers at any price that day ("air pockets ''). The Dow lost an additional 30 points, or 12 percent. The volume of stocks traded on October 29, 1929, was a record that was not broken for nearly 40 years. On October 29, William C. Durant joined with members of the Rockefeller family and other financial giants to buy large quantities of stocks to demonstrate to the public their confidence in the market, but their efforts failed to stop the large decline in prices. Due to the massive volume of stocks traded that day, the ticker did not stop running until about 7: 45 p.m. The market had lost over $30 billion in the space of two days which included $14 billion on October 29 alone. After a one - day recovery on October 30, where the Dow regained an additional 28.40 points, or 12 percent, to close at 258.47, the market continued to fall, arriving at an interim bottom on November 13, 1929, with the Dow closing at 198.60. The market then recovered for several months, starting on November 14, with the Dow gaining 18.59 points to close at 217.28, and reaching a secondary closing peak (i.e., bear market rally) of 294.07 on April 17, 1930. The following year, the Dow embarked on another, much longer, steady slide from April 1931 to July 8, 1932, when it closed at 41.22 -- its lowest level of the 20th century, concluding an 89 percent loss rate for all of the market 's stocks. For most of the 1930s, the Dow began slowly to regain the ground it lost during the 1929 crash and the three years following it, beginning on March 15, 1933, with the largest percentage increase of 15.34 percent, with the Dow Jones closing at 62.10, with an 8.26 point increase. The largest percentage increases of the Dow Jones occurred during the early and mid-1930s. In late 1937, there was a sharp dip in the stock market, but prices held well above the 1932 lows.
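The percentage figures quoted above follow directly from the closing levels given in the text. Here is a short Python sketch reproducing two of them; the helper function is ordinary percent - change arithmetic, not drawn from any source.

```python
# Reproduce two percentage figures quoted above from the Dow closing
# levels given in the text.
def pct_change(old: float, new: float) -> float:
    """Ordinary percent change from old to new."""
    return (new - old) / old * 100

# October 30, 1929: +28.40 points to close at 258.47, i.e. up from 230.07.
print(round(pct_change(258.47 - 28.40, 258.47), 1))  # 12.3 -> the "12 percent" gain

# From the September 3, 1929 peak of 381.17 to the July 8, 1932 low of 41.22.
print(round(pct_change(381.17, 41.22), 1))  # -89.2 -> the "89 percent loss"
```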
The market would not return to the peak closing of September 3, 1929, until November 23, 1954. The crash followed a speculative boom that had taken hold in the late 1920s. During the latter half of the 1920s, steel production, building construction, retail turnover, automobiles registered, even railway receipts advanced from record to record. The combined net profits of 536 manufacturing and trading companies showed an increase, for the first six months of 1929, of 36.6 % over 1928, itself a record half - year. Iron and steel led the way with doubled gains. Such figures set up a crescendo of stock - exchange speculation which had led hundreds of thousands of Americans to invest heavily in the stock market. A significant number of them were borrowing money to buy more stocks. By August 1929, brokers were routinely lending small investors more than two - thirds of the face value of the stocks they were buying. Over $8.5 billion was out on loan, more than the entire amount of currency circulating in the U.S. at the time. The rising share prices encouraged more people to invest; people hoped the share prices would rise further. Speculation thus fueled further rises and created an economic bubble. Because of margin buying, investors stood to lose large sums of money if the market turned down -- or even failed to advance quickly enough. The average P / E (price to earnings) ratio of S&P Composite stocks was 32.6 in September 1929, clearly above historical norms. According to the economist John Kenneth Galbraith, this exuberance also resulted in a large number of people placing their savings and money in leveraged investment products like Goldman Sachs ' "Blue Ridge Trust '' and "Shenandoah Trust ''. These too crashed in 1929, resulting in losses of $475 billion in today 's dollars to banks. Good harvests had built up a mass of 250 million bushels of wheat to be "carried over '' when 1929 opened. By May there was also a winter - wheat crop of 560 million bushels ready for harvest in the Mississippi Valley. This oversupply caused a drop in wheat prices so heavy that the net incomes of the farming population from wheat were threatened with extinction. Stock markets are always sensitive to the future state of commodity markets, and the slump in Wall Street predicted for May by Sir George Paish arrived on time. In June 1929, the position was saved by a severe drought in the Dakotas and the Canadian West, plus unfavorable seed times in Argentina and eastern Australia. The oversupply would now be wanted to fill the big gaps in the 1929 world wheat production. From 97 ¢ per bushel in May, the price of wheat rose to $1.49 in July. When it was seen that at this figure the American farmers would get rather more for their smaller crop than for that of 1928, up went stocks again and from far and wide orders came to buy shares for the profits to come. In August, the wheat price fell when France and Italy were bragging of a magnificent harvest, and the situation in Australia improved. This sent a shiver through Wall Street and stock prices quickly dropped, but word of cheap stocks brought a fresh rush of "stags '', amateur speculators and investors. Congress had also voted for a $100 million relief package for the farmers, hoping to stabilize wheat prices. By October, though, the price had fallen to $1.31 per bushel. Other important economic barometers were also slowing or even falling by mid-1929, including car sales, house sales, and steel production.
The falling commodity and industrial production may have dented even American self - confidence, and the stock market peaked on September 3 at 381.17 just after Labor Day, then started to falter after Roger Babson issued his prescient "market crash '' forecast. By the end of September, the market was down 10 % from the peak (the "Babson Break ''). Selling intensified in early and mid October, with sharp down days punctuated by a few up days. Panic selling on huge volume started the week of October 21 and intensified and culminated on October 24, the 28th and especially the 29th ("Black Tuesday ''). The president of the Chase National Bank said at the time: "We are reaping the natural fruit of the orgy of speculation in which millions of people have indulged. It was inevitable, because of the tremendous increase in the number of stockholders in recent years, that the number of sellers would be greater than ever when the boom ended and selling took the place of buying. '' In 1932, the Pecora Commission was established by the U.S. Senate to study the causes of the crash. The following year, the U.S. Congress passed the Glass -- Steagall Act mandating a separation between commercial banks, which take deposits and extend loans, and investment banks, which underwrite, issue, and distribute stocks, bonds, and other securities. After the experience of the 1929 crash, stock markets around the world instituted measures to suspend trading in the event of rapid declines, claiming that the measures would prevent such panic sales. However, the one - day crash of Black Monday, October 19, 1987, when the Dow Jones Industrial Average fell 22.6 %, was worse in percentage terms than any single day of the 1929 crash (although the combined 25 % decline of October 28 -- 29, 1929 was larger than October 19, 1987, and remains the worst two - day decline ever). The American mobilization for World War II at the end of 1941 moved approximately ten million people out of the civilian labor force and into the war. World War II had a dramatic effect on many parts of the economy, and may have hastened the end of the Great Depression in the United States. Government - financed capital spending accounted for only 5 percent of the annual U.S. investment in industrial capital in 1940; by 1943, the government accounted for 67 percent of U.S. capital investment. Together, the 1929 stock market crash and the Great Depression formed the largest financial crisis of the 20th century. The panic of October 1929 has come to serve as a symbol of the economic contraction that gripped the world during the next decade. The falls in share prices on October 24 and 29, 1929 were practically instantaneous in all financial markets, except Japan. The Wall Street Crash had a major impact on the U.S. and world economy, and it has been the source of intense academic debate -- historical, economic, and political -- from its aftermath until the present day. Some people believed that abuses by utility holding companies contributed to the Wall Street Crash of 1929 and the Depression that followed. Many people blamed the crash on commercial banks that were too eager to put deposits at risk on the stock market. In 1930, 1,352 banks held more than $853 million in deposits; in 1931, one year later, 2,294 banks went down with nearly $1.7 billion in deposits. Many businesses failed (28,285 failures and a daily rate of 133 in 1931). The 1929 crash brought the Roaring Twenties to a shuddering halt. As tentatively expressed by economic historian Charles P. 
Kindleberger, in 1929, there was no lender of last resort effectively present, which, if it had existed and were properly exercised, would have been key in shortening the business slowdown (s) that normally follows financial crises. The crash marked the beginning of widespread and long - lasting consequences for the United States. Historians still debate the question: did the 1929 Crash spark The Great Depression, or did it merely coincide with the bursting of a loose credit - inspired economic bubble? Only 16 % of American households were invested in the stock market within the United States during the period leading up to the depression, suggesting that the crash carried somewhat less of a weight in causing the depression. However, the psychological effects of the crash reverberated across the nation as businesses became aware of the difficulties in securing capital market investments for new projects and expansions. Business uncertainty naturally affects job security for employees, and as the American worker (the consumer) faced uncertainty with regards to income, naturally the propensity to consume declined. The decline in stock prices caused bankruptcies and severe macroeconomic difficulties including contraction of credit, business closures, firing of workers, bank failures, decline of the money supply, and other economically depressing events. The resultant rise of mass unemployment is seen as a result of the crash, although the crash is by no means the sole event that contributed to the depression. The Wall Street Crash is usually seen as having the greatest impact on the events that followed and therefore is widely regarded as signaling the downward economic slide that initiated the Great Depression. True or not, the consequences were dire for almost everybody. Most academic experts agree on one aspect of the crash: It wiped out billions of dollars of wealth in one day, and this immediately depressed consumer buying. The failure set off a worldwide run on US gold deposits (i.e. the dollar), and forced the Federal Reserve to raise interest rates into the slump. Some 4,000 banks and other lenders ultimately failed. Also, the uptick rule, which allowed short selling only when the last tick in a stock 's price was positive, was implemented after the 1929 market crash to prevent short sellers from driving the price of a stock down in a bear raid. The stock market crash of October 1929 led directly to the Great Depression in Europe. When stocks plummeted on the New York Stock Exchange, the world noticed immediately. Although financial leaders in the United Kingdom, as in the United States, vastly underestimated the extent of the crisis that would ensue, it soon became clear that the world 's economies were more interconnected than ever. The effects of the disruption to the global system of financing, trade, and production and the subsequent meltdown of the American economy were soon felt throughout Europe. During 1930 and 1931, in particular, unemployed workers went on strike, demonstrated in public, and otherwise took direct action to call public attention to their plight. Protests often focused on the so - called Means Test, which the government had instituted in 1931 as a way to limit the amount of unemployment payments made to individuals and families. For working people, the Means Test seemed an intrusive and insensitive way to deal with the chronic and relentless deprivation caused by the economic crisis. 
The strikes were met forcefully, with police breaking up protests, arresting demonstrators, and charging them with crimes related to the violation of public order. Economists and historians disagree as to what role the crash played in subsequent economic, social, and political events. The Economist argued in a 1998 article that the Depression did not start with the stock market crash, nor was it clear at the time of the crash that a depression was starting. They asked, "Can a very serious Stock Exchange collapse produce a serious setback to industry when industrial production is for the most part in a healthy and balanced condition? '' They argued that there must be some setback, but that there was not yet sufficient evidence to prove that it would be long or that it need go to the length of producing a general industrial depression. But The Economist also cautioned that some bank failures were to be expected and that some banks might not have any reserves left for financing commercial and industrial enterprises. They concluded that the position of the banks was the key to the situation, but that what was going to happen could not have been foreseen. Some academics see the Wall Street Crash of 1929 as part of a historical process described by the newer theories of boom and bust. According to economists such as Joseph Schumpeter, Nikolai Kondratiev and Charles E. Mitchell, the crash was merely a historical event in the continuing process known as economic cycles. The impact of the crash was merely to increase the speed at which the cycle proceeded to its next level. Milton Friedman 's A Monetary History of the United States, co-written with Anna Schwartz, advances the argument that what made the "great contraction '' so severe was not the downturn in the business cycle, protectionism, or the 1929 stock market crash in themselves -- but instead, according to Friedman and Schwartz, what plunged the country into a deep depression was the collapse of the banking system during three waves of panics over the 1930 -- 33 period.
what was before the nhs and welfare system
History of the National Health Service (England) - wikipedia The National Health Service in England was created by the National Health Service Act 1946. Responsibility for the NHS in Wales was passed to the Secretary of State for Wales in 1969, leaving the Secretary of State for Social Services responsible for the NHS in England alone. Dr Benjamin Moore, a Liverpool physician, was probably the first to use the words ' National Health Service ', in his 1910 book The Dawn of the Health Age. He established the State Medical Service Association, which held its first meeting in 1912 and continued to exist until it was replaced by the Socialist Medical Association in 1930. Before the National Health Service was created in 1948, patients were generally required to pay for their health care. Free treatment was sometimes available from Voluntary Hospitals. Some local authorities operated hospitals for local ratepayers (under a system originating with the Poor Law). The London County Council (LCC) on 1 April 1930 took over from the abolished Metropolitan Asylums Board responsibility for 140 hospitals, medical schools and other medical institutions. The Local Government Act 1929 allowed local authorities to run services over and above those authorised by the Poor Law, and in effect to provide medical treatment for everyone. By the outbreak of the Second World War, the LCC was running the largest public health service in Britain. Dr A.J. Cronin 's controversial novel The Citadel, published in 1937, had fomented extensive criticism of the severe inadequacies of health care. The author 's innovative ideas were not only essential to the conception of the NHS; in fact, his best - selling novels are said to have greatly contributed to the Labour Party 's victory in 1945. The Emergency Hospital Service established in 1939 gave a taste of what a National Health Service might look like. Systems of health insurance usually consisted of private schemes such as Friendly societies or Welfare societies. Under the National Insurance Act 1911, introduced by David Lloyd George, a small amount was deducted from weekly wages, to which were added contributions from the employer and the government. In return for the record of contributions, the workman was entitled to medical care (as well as retirement and unemployment benefits), though not necessarily to the drugs prescribed. To obtain medical care, he registered with a doctor. Each doctor in General Practice who participated in the scheme thus had a ' panel ' of those insured under the system, and was paid a capitation grant out of the fund, calculated upon the number on his panel. Lloyd George 's name survives in the "Lloyd George envelopes '' in which most primary care records in England are stored, although today most working records in primary care are at least partially computerised. This imperfect scheme only covered workers who paid their National Insurance Contributions and was known as ' Lloyd George 's Ambulance Wagon '. Most women and children were not covered. Lord Dawson was commissioned in 1919 by Lord Addison, the first British Minister of Health, to produce a report on "schemes requisite for the systematised provision of such forms of medical and allied services as should, in the opinion of the Council, be available for the inhabitants of a given area ''. An Interim Report on the Future Provision of Medical and Allied Services was produced in 1920, though no further report ever appeared.
The report laid down plans for a network of Primary and Secondary Health Centres, and was very influential in subsequent debates about the National Health Service. However, the fall of the Lloyd George government prevented any implementation of those ideas at that time. The Labour Party in 1932 accepted a resolution moved by Somerville Hastings MP calling for the establishment of a State Medical Service, and in 1934 the Labour Party Conference at Southport unanimously accepted an official document on a National Health Service. Prior to the Second World War there was already consensus that health insurance should be extended to the dependants of the wage - earner, and that the voluntary and local authority hospitals should be integrated. A British Medical Association (BMA) pamphlet, "A General Medical Service for the Nation '', was issued along these lines in 1938. However, no action was taken due to the international crisis. During the war, a new centralised state - run Emergency Hospital Service employed doctors and nurses to care for those injured by enemy action and arrange for their treatment in whichever hospital was available. The existence of the service made voluntary hospitals dependent on the Government, and there was a recognition that many would be in financial trouble once peace arrived. The need to do something to guarantee the voluntary hospitals meant that hospital care drove the impetus for reform. In February 1941 the Deputy Permanent Secretary at the Ministry of Health privately recorded areas of agreement on post-war health policy, which included "a complete health service to be available to every member of the community '', and on 9 October 1941 the Minister of Health Ernest Brown announced that the Government proposed to ensure that there was a comprehensive hospital service available to everyone in need of it, and that local authorities would be responsible for providing it. The Medical Planning Commission set up by the professional bodies went one stage further in May 1942, recommending (in an interim report) a National Health Service with General Practitioners working through health centres and hospitals run by regional administrations. The Beveridge Report of December 1942 included this same idea. Developing the idea into firm policy proved difficult. Although the BMA had been part of the Medical Planning Commission, at their conference in September 1943 the association changed policy to oppose local authority control of hospitals and to favour extension of health insurance instead of GPs working for state health centres. When Health Minister Henry Willink prepared a white paper endorsing a National Health Service, it was attacked by Brendan Bracken and Lord Beaverbrook, and resignations were threatened on both sides. However, the Cabinet endorsed the White Paper, which was published in 1944. This White Paper set out the founding principles of the NHS: it was to be funded out of general taxation and not through national insurance, and services would be provided by the same doctors and the same hospitals. Willink then set about trying to assuage the doctors, a job taken over by Aneurin Bevan in Clement Attlee 's Labour Party government after the war ended. Bevan quickly came to the decision that the 1944 white paper 's proposal for local authority control of voluntary hospitals was not workable, as the local authorities were too poor and too small to manage hospitals.
He decided that "the only thing to do was to create an entirely new hospital service, to take over the voluntary hospitals, and to take over the local government hospitals and to organise them as a single hospital service ''. This structure of the NHS in England and Wales was established by the National Health Service Act 1946, which received Royal Assent on 6 November 1946. Bevan encountered considerable debate and resistance from the BMA, which voted in May 1948 not to join the new service, but he brought them on board by the time the new arrangements launched on 5 July 1948. The original structure of the NHS in England and Wales had three aspects, known as the tripartite system: hospital services run by regional boards, general practitioners and other contractor services, and community health services run by local authorities. The new service instantly became Britain 's 3rd largest employer with around 364,000 staff across England and Wales. These included 9,000 full - time doctors, 19,000 professional and technical staff (including 2,800 physiotherapists, 1,600 laboratory technicians and 2,000 radiographers), 25,000 administrative and clerical staff, 149,000 nurses and midwives (23,000 of whom were part - time), and 128,000 ancillary staff (catering, laundry, cleaning and maintenance). By the beginning of the 1950s, spending on the NHS was exceeding expectations, leading in 1952 to the introduction of a one - shilling charge for prescriptions and a £ 1 charge for dental treatment; these were exceptions to the NHS being free at the point of use. Political concerns about spiralling NHS costs later receded in the wake of the 1956 Guillebaud Report, which praised the "responsible attitude among hospital authorities '' towards the "efficient and economical '' use of public funds. The 1950s saw the planning of hospital services, dealing in part with some of the gaps and duplications that existed across England and Wales. The period also saw growth in the number of medical staff and a more even distribution of them with the development of hospital outpatient services. By 1956, the NHS was stretched financially and doctors were disaffected, resulting in a Royal Commission on doctors ' pay being set up in February 1957. The investigation and trial of alleged serial killer Dr John Bodkin Adams exposed some of the tensions in the system. Indeed, if he had been found guilty (for, in the eyes of doctors, accidentally killing a patient while providing treatment) and hanged, the whole NHS might have collapsed. The Mental Health Act of 1959 also significantly altered legislation in respect of mental illness and reduced the grounds on which someone could be detained in a mental hospital. The 1960s have been characterised as a period of growth. Prescription charges were abolished in 1965 and reintroduced in 1968. New treatments came to the market, improving healthcare: the polio vaccine, dialysis for chronic renal failure and chemotherapy for certain cancers were all developed, adding to upfront costs. Health Secretary Enoch Powell also undertook three initiatives. Concern continued to grow about the structure of the NHS and weaknesses of the tripartite system. Powell agreed the creation of a Royal Commission on doctors ' pay, which resulted in a statutory review body. Further development came in the form of the Charter of General Practice, negotiated between new Health Minister Kenneth Robinson and the BMA, which provided financial incentives for practice development. This resulted in the concept of primary health care delivered in better housed and better staffed practices, stimulating doctors to join together, and led to the development of the modern group practice.
In 1969, responsibility for the NHS in Wales was passed to the Secretary of State for Wales from the Secretary of State for Health, who was thereafter responsible only for the NHS in England. After the publication by the British Medical Journal on 24 December 1949 of University of Cambridge consultant paediatrician Douglas Gairdner 's landmark paper detailing the lack of medical benefit and the risks attached to non-therapeutic (routine) circumcision, the National Health Service took a decision that circumcision would not be performed unless there was a clear and present medical indication. Both the cost and the non-therapeutic, unnecessary, harmful nature of the surgical operation were taken into account. The NHS in England was reorganised in 1974 to bring together services provided by hospitals and services provided by local authorities under the umbrella of Regional Health Authorities, with a further restructuring in 1982. The 1970s also saw the end of the economic optimism which had characterised the 1960s, and increasing pressures coming to bear to reduce the amount of money spent on public services and to ensure increased efficiency for the money spent. In the 1980s, Thatcherism represented a systematic, decisive rejection and reversal of the Post-war consensus, whereby the major political parties largely agreed on the central themes of Keynesianism, the welfare state, nationalised industry, public housing and close regulation of the economy. There was one major exception: the National Health Service, which was widely popular and had wide support inside the Conservative Party. Prime Minister Margaret Thatcher promised Britons in 1982 that the NHS was "safe in our hands ''. In the 1980s modern management processes (General Management) were introduced in the NHS to replace the previous system of consensus management. This was outlined in the Griffiths Report of 1983, which recommended the appointment of general managers in the NHS with whom responsibility should lie. The report also recommended that clinicians be better involved in management. Financial pressures continued to place strain on the NHS. In 1987, an additional £ 101 million was provided by the government to the NHS. In 1988 the then Prime Minister, Margaret Thatcher, announced a review of the NHS. From this review, two white papers, Working for Patients and Caring for People, were produced in 1989. These outlined the introduction of what was termed the "internal market '', which was to shape the structure and organisation of health services for most of the next decade. In spite of intensive opposition from the BMA, who wanted a pilot study of the reforms in one region, the internal market was introduced. In 1990, the National Health Service & Community Care Act (in England) defined this "internal market '', whereby Health Authorities ceased to run hospitals but "purchased '' care from their own or other authorities ' hospitals. Certain GPs became "fund holders '' and were able to purchase care for their patients. The "providers '' became NHS trusts, which encouraged competition but also increased local differences. These innovations, especially the "fund holder '' option, were condemned at the time by the Labour Party. Opposition to what was claimed to be the Conservative intention to privatise the NHS became a major feature of Labour 's election campaigns. Labour came to power in 1997 with the promise to remove the "internal market '' and abolish fundholding.
In a speech given by the new Prime Minister, Tony Blair, at the Lonsdale Medical Centre on 9 December 1997, he stated that: The White Paper we are publishing today marks a turning point for the NHS. It replaces the internal market with "integrated care ''. We will put doctors and nurses in the driving seat. The result will be that £ 1 billion of unnecessary red tape will be saved and the money put into frontline patient care. For the first time the need to ensure that high quality care is spread throughout the service will be taken seriously. National standards of care will be guaranteed. There will be easier and swifter access to the NHS when you need it. Our approach combines efficiency and quality with a belief in fairness and partnership. Comparing not competing will drive efficiency. However, in his second term Blair pursued measures to strengthen the internal market as part of his plan to "modernise '' the NHS. Driving these reforms were a number of factors including the rising costs of medical technology and medicines, the desire to increase standards and "patient choice '', an ageing population, and a desire to contain government expenditure. Since the National Health Services in Wales, Scotland and Northern Ireland are not controlled by the UK government, these reforms have increased the differences between the National Health Services in different parts of the United Kingdom. (See NHS Wales and NHS Scotland for descriptions of their developments). Reforms included (amongst other actions) the laying down of detailed service standards, strict financial budgeting, revised job specifications, reintroduction of a modified form of fundholding -- "practice - based commissioning '' -- closure of surplus facilities and emphasis on rigorous clinical and corporate governance. In addition, the restructuring of medical training under Modernising Medical Careers was so badly managed that the Secretary of State for Health was forced to apologise publicly. It was then revised, but its flawed implementation left the NHS with significant medical staffing problems. Some new services were developed to help manage demand, including NHS Direct. A new emphasis was given to staff reforms, with the Agenda for Change agreement providing harmonised pay and career progression. These changes gave rise to controversy within the medical professions, the media and the public. The Blair Government, whilst leaving services free at point of use, encouraged outsourcing of medical services and support to the private sector. Under the Private Finance Initiative, an increasing number of hospitals were built (or rebuilt) by private sector consortia; hospitals may have both medical services (such as Independent Sector Treatment Centres (ISTCs, or "surgicentres '')) and non-medical services (such as catering) provided under long - term contracts by the private sector. A study by a consultancy company which worked for the Department of Health showed that every £ 200 million spent on privately financed hospitals resulted in the loss of 1,000 doctors and nurses. The first PFI hospitals contained some 28 per cent fewer beds than the ones they replaced. In 2005, surgicentres treated around 3 per cent of NHS patients (in England) having routine surgery. By 2008 this was expected to be around 10 per cent. NHS Primary Care Trusts have been given the target of sourcing at least 15 per cent of primary care from the private or voluntary sectors over the medium term.
As a corollary to these initiatives, the NHS was required to take on pro-active socially "directive '' policies, for example, in respect of smoking and obesity. The NHS encountered significant problems with the information technology (IT) innovations accompanying the Blair reforms. The NHS 's National Programme for IT (NPfIT), believed to be the largest IT project in the world, is running significantly behind schedule and above budget, with friction between the Government and the programme contractors. Originally budgeted at £ 2.3 billion, the programme is now estimated to cost £ 20 -- 30 billion, and rising. There has also been criticism of a lack of patient information security. The ability to deliver integrated high quality services will require care professionals to use sensitive medical data. Access to that data must be controlled, and in the NPfIT model it is, sometimes too tightly to allow the best care to be delivered. One concern is that GPs and hospital doctors have given the project a lukewarm reception, citing a lack of consultation and complexity. Key "front - end '' parts of the programme include Choose and Book, intended to assist patient choice of location for treatment, which has missed numerous deadlines for going "live '', substantially overrun its original budget, and is still (May 2006) available in only a few locations. The programme to computerise all NHS patient records is also experiencing great difficulties. Furthermore, there are unresolved financial and managerial issues on training NHS staff to introduce and maintain these systems once they are operative. Between 2004 / 5 and 2013 / 4 NHS output increased considerably. Hospital admissions increased by 32 %, outpatient attendances by 17 %, primary care consultations by 25 % and community care activity by 14 %. Hospital death rates fell, especially for stroke. At the same time there was a 24 % increase in wages, a 10 % increase in the number of staff, and increases in the use of equipment and supplies. As a whole NHS output increased by 47 % and inputs by 31 %, an increase in productivity of 12.86 % during the period, or 1.37 % per year. The return of a Conservative - led government in 2010 coincided with another deterioration in industrial relations. The introduction of further private sector involvement in the Health and Social Care Act 2012 provoked mass demonstrations led by health workers, and some NHS workers also participated in a national strike over pay restraint in 2014. 2016 also saw major industrial action by junior doctors, protesting at the imposition of a new contract aiming to extend weekend working.
jack white i think we are going to be friends
We 're Going to Be Friends - wikipedia "We 're Going to Be Friends '' is a song by the American alternative rock band White Stripes from their album White Blood Cells. It was released in late 2002, and tells the story of meeting a new friend at the beginning of a school year. Through its lyrics, it is able to evoke the simplicity and nostalgia of childhood. "Suzy Lee, '' who is mentioned in the song in "We 're Going to Be Friends '', makes recurring appearances in White Stripes ' discography, including in their eponymous album, which includes the song "Suzy Lee '', as well as on Get Behind Me Satan, which is dedicated to Suzy Lee, "Wherever she may be... '' The song speaks of a girl and boy who become friends while engaging in activities in and out of school. AllMusic said the song "takes a nostalgic look back at the innocence of school days with a surprisingly sensitive vocal as (Jack) expertly paints impressions of days past with deft economy. '' The video features Jack and Meg White on a couch in front of a house at night time. Jack is playing a guitar, while Meg is sleeping alongside him. Jack wakes Meg as the song ends. The cover of the single is a still from the video. Tom Maginnis with AllMusic called the song a "sweet acoustic ballad, '' and NME called it "fey childhood - sweetheart folk. '' The A.V. Club said it was "the album 's most shocking track. '' Featured in the opening credits of the 2004 film Napoleon Dynamite, writer / director Jared Hess commented that, originally, they could not find the song they were looking for, but wanted to use "We 're Going to Be Friends, '' so a copy of Napoleon Dynamite was sent to The White Stripes, who promptly approved the song 's use in the film. Singer - songwriter Jack Johnson recorded a cover of the song on his album Sing - A-Longs and Lullabies for the Film Curious George. In 2008 and 2009, it was used in a promo for PBS Kids. Bright Eyes covered the song with guest First Aid Kit for the charity album Cool for School: For the Benefit of the Lunchbox Fund. This song was also played at the end of the House entitled "Mirror Mirror '', which aired October 30, 2007, although the song is not included in the show 's soundtrack. The song is also featured in the Life in Pieces episode "Sexting Mall Lemonade Heartbreak ''. CBS included a song in commercials for its 2006 program The Class that resembled the tune of "We 're Going to Be Friends. '' When magazine Broadcasting & Cable reached out to Monotone Management (the management company of the band at the time), the company declined to comment, but indicated that the band was aware of the song and were not pleased. In subsequent commercials, CBS used a different song. This song was featured in the 2017 film Wonder. The film used two versions of the song, the original version performed by the White Stripes and a cover version performed by Caroline Pennell. This is one of Conan O'Brien 's favorite songs by The White Stripes and at his request, they performed it on the final episode of Late Night with Conan O'Brien on February 20, 2009. All songs by The White Stripes unless otherwise noted In May 2017, Third Man Records announced a children 's book based on the single to be released on November 21, 2017, shortly after The White Stripes ' 20th anniversary. The book became available on November 7, 2017 for pre-release events and limited distribution. It was illustrated by Elinor Blake, an illustrator an animator for shows such as The Ren and Stimpy Show and Pee - wee 's Playhouse. 
The book will come with a digital copy of the original song and a cover version performed by The Woodstation Elementary School Singers, as well as a cover by April March.
how did the southern states get back into the union after the civil war
Reconstruction Era - Wikipedia The Reconstruction era was the period from 1863 to 1877 in American history. The term has two applications: the first applies to the complete history of the entire country from 1865 to 1877 following the American Civil War; the second, to the attempted transformation of the 11 ex-Confederate states from 1863 to 1877, as directed by Congress. Reconstruction ended the remnants of Confederate nationalism and abolished slavery, making the newly free slaves citizens with civil rights apparently guaranteed by three new Constitutional amendments. Three visions of Civil War memory appeared during Reconstruction: the reconciliationist vision, which was rooted in coping with the death and devastation the war had brought; the white supremacist vision, which included terror and violence; and the emancipationist vision, which sought full freedom, citizenship, and Constitutional equality for African Americans. Presidents Abraham Lincoln and Andrew Johnson both took moderate positions designed to bring the South back into the Union as quickly as possible, while Radical Republicans in Congress sought stronger measures to upgrade the rights of African Americans, including the Fourteenth Amendment to the United States Constitution, and to curtail the rights of former Confederates, such as through the provisions of the Wade -- Davis Bill. Johnson, a former Tennessee Senator and former slave owner, followed a lenient policy toward ex-Confederates. Lincoln 's last speeches show that he was leaning toward supporting the enfranchisement of all freedmen, whereas Johnson was opposed to this. Johnson 's interpretations of Lincoln 's policies prevailed until the Congressional elections of 1866. Those elections followed outbreaks of violence against blacks in the former rebel states, including the Memphis riots of 1866 and the New Orleans riot that same year. That election gave Republicans a majority in Congress, enabling them to pass the 14th Amendment, take control of Reconstruction policy, remove former Confederates from power, and enfranchise the freedmen. A Republican coalition came to power in nearly all the southern states and set out to transform the society by setting up a free labor economy, using the U.S. Army and the Freedmen 's Bureau. The Bureau protected the legal rights of freedmen, negotiated labor contracts, and set up schools and churches for them. Thousands of Northerners came south as missionaries, teachers, businessmen and politicians. Hostile whites began referring to these politicians as "carpetbaggers ''. In early 1866, Congress passed the Freedmen 's Bureau and Civil Rights Bills and sent them to Johnson for his signature. The first bill extended the life of the bureau, originally established as a temporary organization charged with assisting refugees and freed slaves, while the second defined all persons born in the United States as national citizens with equality before the law. After Johnson vetoed the bills, Congress overrode his veto, making the Civil Rights Act the first major bill in the history of the United States to become law through an override of a presidential veto. The Radicals in the House of Representatives, frustrated by Johnson 's opposition to Congressional Reconstruction, filed impeachment charges. The action failed by one vote in the Senate. The new national reconstruction laws -- in particular laws requiring suffrage (the right to vote) for freedmen -- incensed white supremacists in the south, giving rise to the Ku Klux Klan.
During 1867 -- 69 the Klan murdered Republicans and outspoken freedmen in the south, including Arkansas Congressman James M. Hinds. Elected in 1868, Republican President Ulysses S. Grant supported Congressional Reconstruction and enforced the protection of African Americans in the South through the use of the Enforcement Acts passed by Congress. Grant used the Enforcement Acts to effectively combat the Ku Klux Klan, which was essentially wiped out, although a new incarnation of the Klan eventually would again come to national prominence in the 1920s. Nevertheless, President Grant was unable to resolve the escalating tensions inside the Republican Party between the northerners on the one hand, and those Republicans originally hailing from the South on the other (this latter group would be labelled "Scalawags '' by those opposing Reconstruction). Meanwhile, "Redeemers '', self - styled Conservatives acting in close cooperation with a faction of the Democratic Party, strongly opposed Reconstruction. They alleged widespread corruption by the "Carpetbaggers '', excessive state spending and ruinous taxes. At the same time, public support for Reconstruction policies, requiring continued supervision of the South, faded in the North after the Democrats, who strongly opposed Reconstruction, regained control of the House of Representatives in 1874. In 1877, as part of a Congressional bargain to elect Republican Rutherford B. Hayes as president following the close 1876 presidential election, U.S. Army troops ceased to support the remaining Republican state governments. Reconstruction was a significant chapter in the history of American civil rights. Historian Eric Foner argues: What remains certain is that Reconstruction failed, and that for blacks its failure was a disaster whose magnitude can not be obscured by the genuine accomplishments that did endure. In the different states Reconstruction began and ended at different times; federal Reconstruction ended with the Compromise of 1877. In recent decades most historians follow Foner in dating the Reconstruction of the south as starting in 1863 (with Emancipation and the Port Royal experiment) rather than 1865. The usual ending has always been 1877. Reconstruction policies were debated in the North when the war began, and commenced in earnest after Lincoln 's Emancipation Proclamation, issued on January 1, 1863. As Confederate states came back under control of the U.S. Army, President Abraham Lincoln set up reconstructed governments in Tennessee, Arkansas, and Louisiana during the war. He experimented by giving land to blacks in South Carolina. By fall 1865, the new President Andrew Johnson declared the war goals of national unity and the ending of slavery achieved and reconstruction completed. Republicans in Congress, refusing to accept Johnson 's lenient terms, refused to seat newly elected Southern members of Congress, some of whom had been high - ranking Confederate officials a few months before. Johnson broke with the Republicans after vetoing two key bills that supported the Freedmen 's Bureau and provided federal civil rights to the freedmen. The 1866 Congressional elections turned on the issue of Reconstruction, producing a sweeping Republican victory in the North, and providing the Radical Republicans with sufficient control of Congress to override Johnson 's vetoes and commence their own "Radical Reconstruction '' in 1867. That same year, Congress removed civilian governments in the South, and placed the former Confederacy under the rule of the U.S. Army.
The army conducted new elections in which the freed slaves could vote, while whites who had held leading positions under the Confederacy were temporarily denied the vote and were not permitted to run for office. In ten states, coalitions of freedmen, recent black and white arrivals from the North (carpetbaggers), and white Southerners who supported Reconstruction (scalawags) cooperated to form Republican biracial state governments. They introduced various reconstruction programs, including funding public schools, establishing charitable institutions, raising taxes, and funding public improvements such as railroad transportation and shipping. Conservative opponents called the Republican regimes corrupt and instigated violence toward freedmen and whites who supported Reconstruction. Most of the violence was carried out by members of the Ku Klux Klan (KKK), a secretive terrorist organization closely allied with the southern Democratic Party. Klan members attacked and intimidated blacks seeking to exercise their new civil rights, as well as Republican politicians in the south favoring those civil rights. One such politician murdered by the Klan on the eve of the 1868 presidential election was Republican Congressman James M. Hinds of Arkansas. Widespread violence in the south led to federal intervention by President Ulysses S. Grant in 1871, which suppressed the Klan. Nevertheless, white Democrats, calling themselves "Redeemers '', regained control of the south state by state, sometimes using fraud and violence to control state elections. A deep national economic depression following the Panic of 1873 led to major Democratic gains in the North, the collapse of many railroad schemes in the South, and a growing sense of frustration in the North. The end of Reconstruction was a staggered process, and the period of Republican control ended at different times in different states. With the Compromise of 1877, military intervention in Southern politics ceased and Republican control collapsed in the last three state governments in the South. This was followed by a period which white Southerners labeled "Redemption '', during which white - dominated state legislatures enacted Jim Crow laws and, beginning in 1890, disenfranchised most blacks and many poor whites through a combination of constitutional amendments and electoral laws. The white Democratic Southerners ' memory of Reconstruction played a major role in imposing the system of white supremacy and second - class citizenship for blacks using laws known as Jim Crow laws. Reconstruction addressed how the eleven seceding rebel states in the south would regain what the Constitution calls a "republican form of government '' and be reseated in Congress, the civil status of the former leaders of the Confederacy, and the Constitutional and legal status of freedmen, especially their civil rights and whether they should be given the right to vote. Intense controversy erupted throughout the South over these issues. The laws and constitutional amendments that laid the foundation for the most radical phase of Reconstruction were adopted from 1866 to 1871. By the 1870s, Reconstruction had officially provided freedmen with equal rights under the constitution, and blacks were voting and taking political office. Republican legislatures, coalitions of whites and blacks, established the first public school systems and numerous charitable institutions in the South.
White paramilitary organizations, especially the Ku Klux Klan but also the White League and Red Shirts, formed with the political aim of driving out the Republicans. They also disrupted political organizing and terrorized blacks to bar them from the polls. President Grant used federal power to effectively shut down the KKK in the early 1870s, though the other, smaller groups continued to operate. From 1873 to 1877, conservative whites (calling themselves "Redeemers '') regained power in the Southern states. They joined the Bourbon wing of the national Democratic Party. In the 1860s and 1870s the terms "radical '' and "conservative '' had distinctive meanings. "Conservative '' was the name of a faction, often led by the planter class. Leaders who had been Whigs were committed to economic modernization, built around railroads, factories, banks and cities. Most of the "radical '' Republicans in the North were men who believed in integrating African Americans by providing them civil rights as citizens, along with free enterprise; most were also modernizers and former Whigs. The "Liberal Republicans '' of 1872 shared the same outlook except they were especially opposed to the corruption they saw around President Grant, and believed that the goals of the Civil War had been achieved so that the federal military intervention could now end. Passage of the 13th, 14th, and 15th Amendments is the constitutional legacy of Reconstruction. These Reconstruction Amendments established the rights that led to Supreme Court rulings in the mid-20th century that struck down school segregation. A "Second Reconstruction '', sparked by the Civil Rights Movement, led to civil rights laws in 1964 and 1965 that ended segregation and re-opened the polls to blacks. Reconstruction played out against an economy in ruin. The Confederacy in 1861 had 297 towns and cities with a total population of 835,000 people; of these, 162, with 681,000 people, were at one point occupied by Union forces. Eleven were destroyed or severely damaged by war action, including Atlanta (with an 1860 population of 9,600), Charleston, Columbia, and Richmond (with prewar populations of 40,500, 8,100, and 37,900, respectively); the eleven contained 115,900 people in the 1860 census, or 14 % of the urban South. The number of people who lived in the destroyed towns represented just over 1 % of the Confederacy 's combined urban and rural populations. The rate of damage in smaller towns was much lower -- only 45 courthouses were burned out of a total of 830. Farms were in disrepair, and the prewar stock of horses, mules and cattle was much depleted; 40 % of the South 's livestock had been killed. The South 's farms were not highly mechanized, but the value of farm implements and machinery in the 1860 Census was $81 million and was reduced by 40 % by 1870. The transportation infrastructure lay in ruins, with little railroad or riverboat service available to move crops and animals to market. Railroad mileage was located mostly in rural areas, and over two - thirds of the South 's rails, bridges, rail yards, repair shops and rolling stock were in areas reached by Union armies, which systematically destroyed what they could. Even in untouched areas, the lack of maintenance and repair, the absence of new equipment, the heavy over-use, and the deliberate relocation of equipment by the Confederates from remote areas to the war zone ensured the system would be ruined at war 's end.
Restoring the infrastructure -- especially the railroad system -- became a high priority for Reconstruction state governments. The enormous cost of the Confederate war effort took a high toll on the South 's economic infrastructure. The direct costs to the Confederacy in human capital, government expenditures, and physical destruction from the war totaled $3.3 billion. By 1865, the Confederate dollar was worthless due to high inflation, and people in the South had to resort to bartering services for goods, or else use scarce Union dollars. With the emancipation of the southern slaves, the entire economy of the South had to be rebuilt. Having lost their enormous investment in slaves, white planters had minimal capital to pay freedmen workers to bring in crops. As a result, a system of sharecropping was developed where landowners broke up large plantations and rented small lots to the freedmen and their families. The Southern economy changed from one dominated by an elite minority of landed gentry slaveholders into a tenant farming system. The end of the Civil War was accompanied by a large migration of newly freed people to the cities. In the cities, African Americans were relegated to the lowest paying jobs such as unskilled and service labor. Men worked as rail workers, rolling and lumber mills workers, and hotel workers. The large population of slave artisans during the antebellum period had not been translated into a large number of freemen artisans during Reconstruction. Black women were largely confined to domestic work, employed as cooks, maids, and child nurses. Others worked in hotels. A large number became laundresses. The dislocations had a severe negative impact on the black population, with a great deal of sickness and death. Over a quarter of Southern white men of military age -- the backbone of the South 's white workforce -- died during the war, leaving countless families destitute. Per capita income for white southerners declined from $125 in 1857 to a low of $80 in 1879. By the end of the 19th century and well into the 20th century, the South was locked into a system of poverty. How much of this failure was caused by the war and by previous reliance on agriculture remains the subject of debate among economists and historians. During the Civil War, the Radical Republican leaders argued that slavery and the Slave Power had to be permanently destroyed. Moderates said this could be easily accomplished as soon as Confederate armies surrendered and the Southern states repealed secession and accepted the 13th Amendment -- most of which happened by December 1865. President Lincoln was the leader of the moderate Republicans and wanted to speed up Reconstruction and reunite the nation painlessly and quickly. Lincoln formally began Reconstruction in late 1863 with his Ten Percent Plan, which went into operation in several states but which Radical Republicans opposed. Lincoln pocket vetoed the Radical plan, the Wade -- Davis Bill of 1864, which was much stricter than the Ten Percent Plan. By late 1866, the opposing faction of Radical Republicans was skeptical of Southern intentions. White reactions included outbreaks of mob violence against blacks, such as the Memphis riots of 1866 and the New Orleans riot. Radical Republicans demanded a prompt and strong federal response to protect freedpeople and curb southern racism. Congressman Thaddeus Stevens of Pennsylvania and Senator Charles Sumner of Massachusetts led the Radicals.
Sumner argued that secession had destroyed statehood but the Constitution still extended its authority and its protection over individuals, as in existing U.S. territories. Stevens and his followers viewed secession as having left the states in a status like new territories. The Republicans sought to prevent Southern politicians from "restoring the historic subordination of Negroes ''. Since slavery was abolished, the three - fifths compromise no longer applied to counting the population of blacks. After the 1870 census, the South would gain numerous additional representatives in Congress, based on the population of freedmen. One Illinois Republican expressed a common fear that if the South were allowed to simply restore its previous established powers, the "reward of treason will be an increased representation ''. Upon Lincoln 's assassination in April 1865, Andrew Johnson of Tennessee, who had been elected with Lincoln in 1864 as vice president, became president. Johnson rejected the Radical program of Reconstruction and instead appointed his own governors and tried to finish reconstruction by the end of 1865. Thaddeus Stevens vehemently opposed President Johnson 's plans for an abrupt end to Reconstruction, insisting that Reconstruction must "revolutionize Southern institutions, habits, and manners... The foundations of their institutions... must be broken up and relaid, or all our blood and treasure have been spent in vain. '' Johnson broke decisively with the Republicans in Congress when he vetoed the Civil Rights Act in early 1866. While Democrats cheered, the Republicans pulled together, passed the bill again, and overturned Johnson 's repeat veto. Full - scale political warfare now existed between Johnson (now allied with the Democrats) and the Radical Republicans. Congress rejected Johnson 's argument that he had the war power to decide what to do, since the war was over. Congress decided it had the primary authority to decide how Reconstruction should proceed, because the Constitution stated the United States had to guarantee each state a republican form of government. The Radicals insisted that meant Congress decided how Reconstruction should be achieved. The issues were multiple: who should decide, Congress or the president? How should republicanism operate in the South? What was the status of the former Confederate states? What was the citizenship status of the leaders of the Confederacy? What was the citizenship and suffrage status of freedmen? The election of 1866 decisively changed the balance of power, giving the Republicans two - thirds majorities in both houses of Congress, and enough votes to overcome Johnson 's vetoes. They moved to impeach Johnson because of his constant attempts to thwart Radical Reconstruction measures, using the Tenure of Office Act. Johnson was acquitted by one vote, but he lost the influence to shape Reconstruction policy. The Republican Congress established military districts in the South and used Army personnel to administer the region until new governments loyal to the Union -- that accepted the 14th Amendment and the right of freedmen to vote -- could be established. Congress temporarily suspended the ability to vote of approximately 10,000 to 15,000 former Confederate officials and senior officers, while constitutional amendments gave full citizenship to all African Americans, and suffrage to adult men. With the power to vote, freedmen started participating in politics.
While many slaves were illiterate, educated blacks (including escaped slaves) moved down from the North to aid them, and natural leaders also stepped forward. They elected white and black men to represent them in constitutional conventions. A Republican coalition of freedmen, southerners supportive of the Union (derisively called scalawags by white Democrats), and northerners who had migrated to the South (derisively called carpetbaggers) -- some of whom were returning natives, but were mostly Union veterans -- organized to create constitutional conventions. They created new state constitutions to set new directions for southern states. The issue of loyalty emerged in the debates over the Wade -- Davis Bill of 1864. The bill required voters to take the "ironclad oath '', swearing they had never supported the Confederacy or been one of its soldiers. Pursuing a policy of "malice toward none '' announced in his second inaugural address, Lincoln asked voters only to support the Union. The Radicals lost support following Lincoln 's veto of the Wade -- Davis Bill but regained strength after Lincoln 's assassination in April 1865. Congress had to consider how to restore to full status and representation within the Union those southern states that had declared their independence from the United States and had withdrawn their representation. Suffrage for former Confederates was one of two main concerns. A decision needed to be made whether to allow just some or all former Confederates to vote (and to hold office). The moderates in Congress wanted virtually all of them to vote, but the Radicals resisted. They repeatedly imposed the ironclad oath, which would effectively have allowed no former Confederates to vote. Historian Harold Hyman says that in 1866 Congressmen "described the oath as the last bulwark against the return of ex-rebels to power, the barrier behind which Southern Unionists and Negroes protected themselves. '' Radical Republican leader Thaddeus Stevens proposed, unsuccessfully, that all former Confederates lose the right to vote for five years. The compromise that was reached disenfranchised many Confederate civil and military leaders. No one knows how many temporarily lost the vote, but one estimate was that it was as high as 10,000 to 15,000 out of a total white population of roughly eight million. Second, and closely related, was the issue of whether the roughly four million freedmen should be allowed to vote, and how they were to be received as citizens. If they were to be fully counted as citizens, some sort of representation for apportionment of seats in Congress had to be determined. Before the war, the population of slaves had been counted as three - fifths of a corresponding number of free whites. By having four million freedmen counted as full citizens, the South would gain additional seats in Congress. If blacks were denied the vote and the right to hold office, then only whites would represent them. Many conservatives, including most white southerners, northern Democrats, and some northern Republicans, opposed black voting. Some northern states that had referenda on the subject limited the ability of their own small populations of blacks to vote. Lincoln had supported a middle position to allow some black men to vote, especially army veterans. Johnson also believed that such service should be rewarded with citizenship. Lincoln proposed giving the vote to "the very intelligent, and especially those who have fought gallantly in our ranks.
'' In 1864, Governor Johnson said, "The better class of them will go to work and sustain themselves, and that class ought to be allowed to vote, on the ground that a loyal negro is more worthy than a disloyal white man. '' As President in 1865, Johnson wrote to the man he appointed as governor of Mississippi, recommending, "If you could extend the elective franchise to all persons of color who can read the Constitution in English and write their names, and to all persons of color who own real estate valued at least two hundred and fifty dollars, and pay taxes thereon, you would completely disarm the adversary (Radicals in Congress), and set an example the other states will follow. '' Charles Sumner and Thaddeus Stevens, leaders of the Radical Republicans, were initially hesitant to enfranchise the largely illiterate Freedmen. Sumner at first preferred impartial requirements that would have imposed literacy restrictions on blacks and whites. He believed that he would not succeed in passing legislation to disfranchise illiterate whites who already had the vote. In the South, many poor whites were illiterate as there was almost no public education before the war. In 1880, for example, the white illiteracy rate was about 25 % in Tennessee, Kentucky, Alabama, South Carolina, and Georgia; and as high as 33 % in North Carolina. This compared with the 9 % national rate, and a black rate of illiteracy that was over 70 % in the South. By 1900, however, with emphasis within the black community on education, the majority of blacks had achieved literacy. Sumner soon concluded that "there was no substantial protection for the freedman except in the franchise. '' This was necessary, he stated, "(1) For his own protection; (2) For the protection of the white Unionist; and (3) For the peace of the country. We put the musket in his hands because it was necessary; for the same reason we must give him the franchise. '' The support for voting rights was a compromise between moderate and Radical Republicans. The Republicans believed that the best way for men to get political experience was to be able to vote and to participate in the political system. They passed laws allowing all male freedmen to vote. In 1867, black men voted for the first time. Over the course of Reconstruction, more than 1,500 African Americans held public office in the South; some of them were men who had escaped to the North, gained an education, and returned to the South. They did not hold office in numbers representative of their proportion in the population, but often elected whites to represent them. The question of women 's suffrage was also debated but was rejected. From 1890 to 1908, southern states passed new constitutions and laws that disfranchised most blacks and tens of thousands of poor whites with new voter registration and electoral rules. When establishing new requirements such as subjectively administered literacy tests, some states used "grandfather clauses '' to enable illiterate whites to vote. The Five Civilized Tribes that had been relocated to Indian Territory (now part of Oklahoma) held black slaves and signed treaties supporting the Confederacy. During the war, a conflict among pro - and anti-Union Indians had raged. Congress passed a statute that gave the President the authority to suspend the appropriations of any tribe if the tribe is "in a state of actual hostility to the government of the United States...
and, by proclamation, to declare all treaties with such tribe to be abrogated by such tribe '' (25 USC Sec. 72). As a component of Reconstruction, the Interior Department ordered a meeting of representatives from all Indian tribes which had affiliated with the Confederacy. The Council, the Southern Treaty Commission, first held in Ft. Smith, Arkansas in September 1865, was attended by hundreds of Indians representing dozens of tribes. Over the next several years the commission negotiated treaties with tribes that resulted in additional relocations to Indian Territory and the de facto creation (initially by treaty) of an unorganized Oklahoma Territory. President Lincoln signed two Confiscation Acts into law, the first on August 6, 1861, and the second on July 17, 1862, safeguarding fugitive slaves from the Confederacy who came over into Union lines and giving them indirect emancipation if their masters continued insurrection against the United States. The laws allowed the confiscation of lands for colonization from those who aided and supported the rebellion. However, these laws had limited effect as they were poorly funded by Congress and poorly enforced by Attorney General Edward Bates. In August 1861, Maj. Gen. John C. Frémont, Union commander of the Western Department, declared martial law in Missouri, confiscated Confederate property, and emancipated their slaves. President Lincoln immediately ordered Frémont to rescind his emancipation declaration, stating, "I think there is great danger that... the liberating slaves of traitorous owners, will alarm our Southern Union friends, and turn them against us -- perhaps ruin our fair prospect for Kentucky. '' After Frémont refused to rescind the emancipation order, President Lincoln terminated him from active duty on November 2, 1861. Lincoln was concerned that the border states would secede from the Union if slaves were given their freedom. On May 26, 1862, Union Maj. Gen. David Hunter emancipated slaves in South Carolina, Georgia, and Florida, declaring all "persons... heretofore held as slaves... forever free ''. Lincoln, embarrassed by the order, rescinded Hunter 's declaration and canceled the emancipations. On April 16, 1862 Lincoln signed a bill into law outlawing slavery in Washington D.C. and freeing the estimated 3,500 slaves in the city, and on June 19, 1862 he signed legislation outlawing slavery in all U.S. territories. On July 17, 1862, under the authority of the Confiscation Acts and an amended Force Bill of 1795, he authorized the recruitment of freed slaves into the Union army and seizure of any Confederate property for military purposes. In an effort to keep border states in the Union, President Lincoln as early as 1861 designed gradual compensated emancipation programs paid for by government bonds. Lincoln desired Delaware, Maryland, Kentucky, and Missouri to "adopt a system of gradual emancipation which should work the extinction of slavery in twenty years. '' On March 26, 1862 Lincoln met with Senator Charles Sumner and recommended that a special joint session of Congress be convened to discuss giving financial aid to any border states who initiated a gradual emancipation plan. In April 1862 the joint session of Congress met; however, the border states were not interested and made no response to Lincoln or to any Congressional emancipation proposal. Lincoln advocated compensated emancipation during the 1865 River Queen steamer conference.
In August 1862, President Lincoln met with African - American leaders and urged them to colonize some place in Central America. Lincoln planned to free the Southern slaves in the Emancipation Proclamation, and he was concerned that freedmen would not be well treated in the United States by whites in both the North and the South. Although Lincoln gave assurances that the United States government would support and protect any colonies, the leaders declined the offer of colonization. Many free blacks had been opposed to colonization plans in the past and wanted to remain in the United States. President Lincoln persisted in his colonization plan, believing that emancipation and colonization were part of the same program. By April 1863, Lincoln had succeeded in sending black colonists to Haiti and 453 to Chiriqui in Central America; however, none of the colonies was able to remain self - sufficient. Frederick Douglass, a prominent 19th - century American civil rights activist, criticized Lincoln for "showing all his inconsistencies, his pride of race and blood, his contempt for Negroes and his canting hypocrisy. '' African Americans, according to Douglass, wanted citizen rights rather than to be colonized. Historians debate whether Lincoln gave up on African - American colonization at the end of 1863 or whether he actually planned to continue this policy up until 1865.

Starting in March 1862, in an effort to forestall Reconstruction by the Radicals in Congress, President Lincoln installed military governors in certain rebellious states under Union military control. Although the states would not be recognized by the Radicals until an undetermined time, installation of military governors kept the administration of Reconstruction under Presidential control, rather than that of the increasingly unsympathetic Radical Congress. On March 3, 1862, Lincoln installed a loyalist Democrat, Senator Andrew Johnson, as Military Governor with the rank of Brigadier General in his home state of Tennessee. In May 1862, Lincoln appointed Edward Stanly Military Governor of the coastal region of North Carolina with the rank of Brigadier General. Stanly resigned almost a year later when he angered Lincoln by closing two schools for black children in New Bern. After Lincoln installed Brigadier General George F. Sheply as Military Governor of Louisiana in May 1862, Sheply sent two anti-slavery representatives, Benjamin Flanders and Michael Hahn, elected in December 1862, to the House, which capitulated and voted to seat them. In July 1862, Lincoln installed Colonel John S. Phelps as Military Governor of Arkansas, though he resigned soon after due to poor health.

In July 1862, President Lincoln became convinced that striking at slavery was "a military necessity '' in order to win the Civil War for the Union. The Confiscation Acts were having only a minimal effect in ending slavery. On July 22, he wrote a first draft of the Emancipation Proclamation that freed the slaves in states in rebellion. After he showed his cabinet the document, slight alterations were made in the wording. Lincoln decided that the defeat of the Confederate invasion of the North at Sharpsburg was enough of a battlefield victory to enable him to release the preliminary Emancipation Proclamation, which gave the rebels 100 days to return to the Union before the actual Proclamation would be issued. On January 1, 1863, the actual Emancipation Proclamation was issued, specifically naming ten states in which slaves would be "forever free ''.
The proclamation did not name the states of Tennessee, Kentucky, Missouri, Maryland, and Delaware, and specifically excluded numerous counties in some other states. Eventually, as the Union Armies advanced into the Confederacy, millions of slaves were set free. Many of these freedmen joined the Union army and fought in battles against the Confederate forces. Yet hundreds of thousands of freed slaves died during emancipation from illness that devastated army regiments. Freed slaves suffered from smallpox, yellow fever, and malnutrition.

President Abraham Lincoln was anxious to effect a speedy restoration of the Confederate states to the Union after the Civil War. In 1863, President Lincoln proposed a moderate plan for the Reconstruction of the captured Confederate state of Louisiana. The plan granted amnesty to rebels who took an oath of loyalty to the Union. Black freedmen workers were tied to labor on plantations for one year at pay of $10 a month. Only 10 % of the state 's electorate had to take the loyalty oath in order for the state to be readmitted and its representatives seated in the U.S. Congress. The state was required to abolish slavery in its new constitution. Identical Reconstruction plans would be adopted in Arkansas and Tennessee. By December 1864, the Lincoln plan of Reconstruction had been enacted in Louisiana, and the legislature sent two Senators and five Representatives to take their seats in Washington. However, Congress refused to count any of the votes from Louisiana, Arkansas, and Tennessee, in essence rejecting Lincoln 's moderate Reconstruction plan. Congress, at this time controlled by the Radicals, proposed the Wade -- Davis Bill, which required a majority of the state electorates to take the oath of loyalty to be admitted to Congress. Lincoln pocket - vetoed the bill, and the rift widened between the moderates, who wanted to save the Union and win the war, and the Radicals, who wanted to effect a more complete change within Southern society. Frederick Douglass denounced Lincoln 's 10 % electorate plan as undemocratic, since state admission and loyalty depended only on a minority vote.

Before 1864, slave marriages had not been recognized legally; emancipation by itself did not change their status. When freed, many made official marriages. Before emancipation, slaves could not enter into contracts, including the marriage contract. Not all free people formalized their unions. Some continued to have common - law marriages or community - recognized relationships. The acknowledgement of marriage by the state increased the state 's recognition of freedpeople as legal actors and eventually helped make the case for parental rights for freedpeople against the practice of apprenticeship of black children. These children were legally taken away from their families under the guise of "providing them with guardianship and ' good ' homes until they reached the age of consent at twenty - one '' under acts such as the Georgia 1866 Apprentice Act. Such children were generally used as sources of unpaid labor.

On March 3, 1865, the Freedmen 's Bureau Bill became law, sponsored by the Republicans to aid freedmen and white refugees. A federal Bureau was created to provide food, clothing, fuel, and advice on negotiating labor contracts. It attempted to oversee new relations between freedmen and their former masters in a free labor market. The Act, without deference to a person 's color, authorized the Bureau to lease confiscated land for a period of three years and to sell it in portions of up to 40 acres (16 ha) per buyer.
The Bureau was to expire one year after the termination of the War. Lincoln was assassinated before he could appoint a commissioner of the Bureau. A popular myth was that the Act offered 40 acres and a mule, or that slaves had been promised this. With the help of the Bureau, the recently freed slaves began voting, forming political parties, and assuming the control of labor in many areas. The Bureau helped to start a change of power in the South that drew national attention from the Republicans in the North to the conservative Democrats in the South. This was especially evident in the election between Grant and Seymour (Johnson did not get the Democratic nomination), in which almost 700,000 black voters voted and swayed the election by 300,000 votes in Grant 's favor. Even with the benefits that it gave to the freedmen, the Freedmen 's Bureau was unable to operate effectively in certain areas. Terrorizing freedmen for trying to vote, hold political office, or own land, the Ku Klux Klan was the antithesis of the Freedmen 's Bureau.

Other legislation was signed that broadened equality and rights for African Americans. Lincoln outlawed discrimination on account of color in carrying U.S. mail, in riding on public street cars in Washington D.C., and in pay for soldiers.

Lincoln and Secretary of State William H. Seward met with three southern representatives to discuss the peaceful reconstruction of the Union and the Confederacy on February 3, 1865, in Hampton Roads, Virginia. The southern delegation included the Confederate vice-president, Alexander H. Stephens, along with John A. Campbell and Robert M.T. Hunter. The southerners proposed Union recognition of the Confederacy, a joint Union - Confederate attack on Mexico to oust the dictator Maximilian, and an alternative subordinate status of servitude for blacks rather than slavery. Lincoln flatly rejected recognition of the Confederacy, and said that the slaves covered by his Emancipation Proclamation would not be re-enslaved. He said that the Union states were about to pass the Thirteenth Amendment outlawing slavery. Lincoln urged the governor of Georgia to remove Confederate troops and "ratify this Constitutional Amendment prospectively, so as to take effect -- say in five years... Slavery is doomed. '' Lincoln also urged compensated emancipation for the slaves, as he thought the North should be willing to share the costs of freedom. Although the meeting was cordial, the parties did not settle on any agreements.

Lincoln continued to advocate his Louisiana Plan as a model for all states up until his assassination on April 14, 1865. The plan successfully started the Reconstruction process of ratifying the Thirteenth Amendment in all states. Lincoln is typically portrayed as taking the moderate position and fighting the Radical positions. There is considerable debate on how well Lincoln, had he lived, would have handled Congress during the Reconstruction process that took place after the Civil War ended. One historical camp argues that Lincoln 's flexibility, pragmatism, and superior political skills with Congress would have solved Reconstruction with far less difficulty. The other camp believes the Radicals would have attempted to impeach Lincoln, just as they did his successor, Andrew Johnson, in 1868.

Northern anger over the assassination of Lincoln and the immense human cost of the war led to demands for punitive policies.
Vice President Andrew Johnson had taken a hard line and spoke of hanging rebel Confederates, but when he succeeded Lincoln as President, Johnson took a much softer position, pardoning many Confederate leaders and former Confederates. Jefferson Davis was held in prison for two years, but other Confederate leaders were not. There were no treason trials. Only one person -- Captain Henry Wirz, the commandant of the prison camp in Andersonville, Georgia -- was executed for war crimes. Andrew Johnson 's conservative view of Reconstruction did not include involvement by blacks or former slaves in government, and he refused to heed Northern concerns when southern state legislatures implemented Black Codes that set the status of the freedmen much lower than that of citizens.

Smith argues that "Johnson attempted to carry forward what he considered to be Lincoln 's plans for Reconstruction. '' McKitrick says that in 1865 Johnson had strong support in the Republican Party: "It was naturally from the great moderate sector of Unionist opinion in the North that Johnson could draw his greatest comfort. '' Billington says, "One faction, the Moderate Republicans under the leadership of Presidents Abraham Lincoln and Andrew Johnson, favored a mild policy toward the South. '' Lincoln biographers Randall and Current made a similar argument. Historians agree, however, that President Johnson was an inept politician who lost all his advantages by his clumsy moves. He broke with Congress in early 1866 and then became defiant and tried to block enforcement of Reconstruction laws passed by the U.S. Congress. He was in constant conflict constitutionally with the Radicals in Congress over the status of freedmen and whites in the defeated South.

Although resigned to the abolition of slavery, many former Confederates were unwilling to accept both social changes and political domination by former slaves. In the words of Benjamin F. Perry, President Johnson 's choice as the provisional governor of South Carolina: "First, the Negro is to be invested with all political power, and then the antagonism of interest between capital and labor is to work out the result. '' However, the fears of the mostly conservative planter elite and other leading white citizens were partly assuaged by the actions of President Johnson, who ensured that a wholesale land redistribution from the planters to the freedmen did not occur. President Johnson ordered that confiscated or abandoned lands administered by the Freedmen 's Bureau would not be redistributed to the freedmen but would be returned to pardoned owners. Land was returned that would have been forfeited under the Confiscation Acts passed by Congress in 1861 and 1862.

Southern state governments quickly enacted the restrictive "black codes ''. However, they were abolished in 1866 and seldom had effect, because the Freedmen 's Bureau (not the local courts) handled the legal affairs of freedmen. The Black Codes indicated the plans of the southern whites for the former slaves. The freedmen would have more rights than did free blacks before the war, but they would still have only a limited set of second - class civil rights, no voting rights, and no citizenship. They could not own firearms, serve on a jury in a lawsuit involving whites, or move about without employment. The Black Codes outraged northern opinion. They were overthrown by the Civil Rights Act of 1866, which gave the freedmen full legal equality (except for the right to vote).
The freedmen, with the strong backing of the Freedmen 's Bureau, rejected gang - labor work patterns that had been used in slavery. Instead of gang labor, freedpeople preferred family - based labor groups. They forced planters to bargain for their labor. Such bargaining soon led to the establishment of the system of sharecropping, which gave the freedmen greater economic independence and social autonomy than gang labor. However, because they lacked capital and the planters continued to own the means of production (tools, draft animals and land), the freedmen were forced into producing cash crops (mainly cotton) for the land - owners and merchants, and they entered into a crop - lien system. Widespread poverty, disruption to an agricultural economy too dependent on cotton, and the falling price of cotton led within decades to the routine indebtedness of the majority of the freedmen, and poverty among many planters.

Northern officials gave varying reports on conditions for the freedmen in the South. One harsh assessment came from Carl Schurz, who reported on the situation in the states along the Gulf Coast. His report documented dozens of extra-judicial killings and claimed that hundreds or thousands more African Americans were killed. The report included sworn testimony from soldiers and officials of the Freedmen 's Bureau. In Selma, Alabama, Major J.P. Houston noted that whites who killed twelve African Americans in his district never came to trial. Many more killings never became official cases. Captain Poillon described white patrols in southwestern Alabama.

Much of the violence that was perpetrated against African Americans was shaped by gendered prejudices. Black women were in a particularly vulnerable situation. To convict a white man of sexually assaulting a black woman in this period was exceedingly difficult. The South 's judicial system had been wholly refigured to make one of its primary purposes the coercion of African Americans to comply with the social customs and labor demands of whites. Trials were discouraged, and attorneys for black misdemeanor defendants were difficult to find. The goal of county courts was a fast, uncomplicated trial with a resulting conviction. Most blacks were unable to pay their fines or bail, and "the most common penalty was nine months to a year in a slave mine or lumber camp. '' The South 's judicial system was rigged to generate fees and claim bounties, not to ensure public protection.

Black women were socially constructed as sexually avaricious, and since they were portrayed as having little virtue, society held that they could not be raped. In one report, two freedwomen, Frances Thompson and Lucy Smith, described their violent sexual assault during the Memphis Riots of 1866. However, black women were vulnerable even in times of relative normalcy. Sexual assaults on African - American women were so pervasive, particularly on the part of their white employers, that black men sought to reduce the contact between white males and black females by having the women in their family avoid doing work that was closely overseen by whites. Black men were construed as being extremely sexually aggressive, and their supposed or rumored threats to white women were often used as a pretext for lynching and castrations.

During fall 1865, in response to the Black Codes and worrisome signs of Southern recalcitrance, the Radical Republicans blocked the readmission of the former rebellious states to the Congress.
Johnson, however, was content with allowing former Confederate states into the Union as long as their state governments adopted the 13th Amendment abolishing slavery. By December 6, 1865, the amendment was ratified, and Johnson considered Reconstruction over. Johnson was following the moderate Lincoln Presidential Reconstruction policy to get the states readmitted as soon as possible. Congress, however, controlled by the Radicals, had other plans. The Radicals were led by Charles Sumner in the Senate and Thaddeus Stevens in the House of Representatives. Congress, on December 4, 1865, rejected Johnson 's moderate Presidential Reconstruction and organized the Joint Committee on Reconstruction, a 15 - member panel to devise reconstruction requirements for the Southern states to be restored to the Union.

In January 1866, Congress renewed the Freedmen 's Bureau; however, Johnson vetoed the Freedmen 's Bureau Bill in February 1866. Although Johnson had sympathies for the plights of the freedmen, he was against federal assistance. An attempt to override the veto failed on February 20, 1866. This veto shocked the Congressional Radicals. In response, both the Senate and the House passed a joint resolution not to allow any Senator or Representative seat admittance until Congress decided when Reconstruction was finished.

Senator Lyman Trumbull of Illinois, leader of the moderate Republicans, took affront at the Black Codes. He proposed the first Civil Rights Law, on the ground that the abolition of slavery would be empty if freedmen were denied basic civil rights. The bill did not give freedmen the right to vote. Congress quickly passed the Civil Rights Bill; the Senate on February 2 voted 33 -- 12; the House on March 13 voted 111 -- 38.

Although strongly urged by moderates in Congress to sign the Civil Rights Bill, Johnson broke decisively with them by vetoing it on March 27, 1866. His veto message objected to the measure because it conferred citizenship on the freedmen at a time when eleven out of thirty - six states were unrepresented, and attempted to fix by Federal law "a perfect equality of the white and black races in every State of the Union. '' Johnson said it was an invasion by Federal authority of the rights of the States; it had no warrant in the Constitution and was contrary to all precedents. It was a "stride toward centralization and the concentration of all legislative power in the national government. '' The Democratic Party, proclaiming itself the party of white men, north and south, supported Johnson. However, the Republicans in Congress overrode his veto (the Senate by the close vote of 33 -- 15, the House by 122 -- 41), and the Civil Rights Bill became law. Congress also passed a toned - down Freedmen 's Bureau Bill; Johnson quickly vetoed it, as he had the previous bill. Once again, however, Congress had enough support and overrode Johnson 's veto.

The last moderate proposal was the Fourteenth Amendment, whose principal drafter was Representative John Bingham. It was designed to put the key provisions of the Civil Rights Act into the Constitution, but it went much further. It extended citizenship to everyone born in the United States (except visitors and Indians on reservations), penalized states that did not give the vote to freedmen, and, most importantly, created new federal civil rights that could be protected by federal courts. It guaranteed that the Federal war debt would be paid (and promised that the Confederate debt would never be paid).
Johnson used his influence to block the amendment in the states, since ratification by three - fourths of the states was required (the amendment was later ratified). The moderate effort to compromise with Johnson had failed, and a political fight broke out between the Republicans (both Radical and moderate) on one side and, on the other, Johnson and his allies in the Democratic Party in the North and the conservative groupings (which used different names) in each southern state.

Concerned that President Johnson viewed Congress as an "illegal body '' and wanted to overthrow the government, Republicans in Congress took control of Reconstruction policies after the election of 1866. Johnson ignored the policy mandate, and he openly encouraged southern states to deny ratification of the 14th Amendment (except for Tennessee, all former Confederate states refused to ratify, as did the border states of Delaware, Maryland and Kentucky). Radical Republicans in Congress, led by Stevens and Sumner, opened the way to suffrage for male freedmen. They were generally in control, although they had to compromise with the moderate Republicans (the Democrats in Congress had almost no power). Historians refer to this period as "Radical Reconstruction '' or "Congressional Reconstruction ''. The South 's white leaders, who held power in the immediate postwar era before the vote was granted to the freedmen, renounced secession and slavery, but not white supremacy. People who had previously held power were angered in 1867 when new elections were held. New Republican lawmakers were elected by a coalition of white Unionists, freedmen and northerners who had settled in the South. Some leaders in the South tried to accommodate to new conditions.

Three Constitutional amendments, known as the Reconstruction Amendments, were adopted. The 13th Amendment abolishing slavery was ratified in 1865. The 14th Amendment was proposed in 1866 and ratified in 1868, guaranteeing United States citizenship to all persons born or naturalized in the United States and granting them federal civil rights. The 15th Amendment, proposed in late February 1869 and ratified in early February 1870, decreed that the right to vote could not be denied because of "race, color, or previous condition of servitude ''. The amendment did not declare the vote an unconditional right; it prohibited these types of discrimination. States would still determine voter registration and electoral laws. The amendments were directed at ending slavery and providing full citizenship to freedmen. Northern Congressmen believed that providing black men with the right to vote would be the most rapid means of political education and training. Many blacks took an active part in voting and political life, and rapidly continued to build churches and community organizations.

Following Reconstruction, white Democrats and insurgent groups used force to regain power in the state legislatures and to pass laws that effectively disfranchised most blacks and many poor whites in the South. From 1890 to 1910, southern states passed new constitutions that completed the disfranchisement of blacks. U.S. Supreme Court rulings upheld many of these new southern constitutions and laws, and most blacks were prevented from voting in the South until the 1960s. Full federal enforcement of the Fourteenth and Fifteenth Amendments did not occur until after passage of legislation in the mid-1960s as a result of the Civil Rights Movement.
The Reconstruction Acts, as originally passed, were initially titled "An act to provide for the more efficient Government of the Rebel States ''; the legislation was enacted by the 39th Congress on March 2, 1867. It was vetoed by President Johnson, and the veto was overridden by a two - thirds majority in both the House and the Senate the same day. Congress also clarified the scope of the federal writ of habeas corpus to allow federal courts to vacate unlawful state court convictions or sentences in 1867 (28 U.S.C. § 2254).

With the Radicals in control, Congress passed the Reconstruction Acts on July 19, 1867. The first Reconstruction Act, authored by Oregon Sen. George H. Williams, a Radical Republican, placed 10 of the former Confederate states -- all but Tennessee -- under military control, grouping them into five military districts. Some 20,000 U.S. troops were deployed to enforce the Act. The four border states that had not joined the Confederacy were not subject to military Reconstruction. West Virginia, which had seceded from Virginia in 1863, and Tennessee, which had already been re-admitted in 1866, were not included in the military districts. The ten Southern state governments were re-constituted under the direct control of the United States Army. One major purpose was to recognize and protect the right of African Americans to vote. There was little or no combat, but rather a state of martial law in which the military closely supervised local government, supervised elections, and tried to protect office holders and freedmen from violence. Blacks were enrolled as voters; former Confederate leaders were excluded for a limited period. No one state was entirely representative; historian Randolph Campbell has described the process in Texas.

The eleven Southern states held constitutional conventions giving black men the right to vote, with the factions divided into Radical, Conservative, and in - between delegates. The Radicals were a coalition: 40 % were Southern white Republicans ("scalawags ''), 25 % were white Carpetbaggers, and 34 % were black. Scalawags wanted to disfranchise all of the traditional white leadership class, but moderate Republican leaders in the North warned against that, and black delegates typically called for universal voting rights. The carpetbaggers inserted provisions designed to promote economic growth, especially financial aid to rebuild the ruined railroad system. The conventions set up systems of free public schools funded by tax money, but did not require them to be racially integrated.

Until 1872, most former Confederate or prewar Southern office holders were disqualified from voting or holding office; all but 500 top Confederate leaders were pardoned by the Amnesty Act of 1872. "Proscription '' was the policy of disqualifying as many ex-Confederates as possible. It appealed to the Scalawag element. For example, in 1865 Tennessee had disfranchised 80,000 ex-Confederates. However, proscription was soundly rejected by the black element, which insisted on universal suffrage. The issue would come up repeatedly in several states, especially in Texas and Virginia. In Virginia, an effort was made to disqualify for public office every man who had served in the Confederate Army, even as a private, and any civilian farmer who sold food to the Confederate army.
Disfranchising Southern whites was also opposed by moderate Republicans in the North, who felt that ending proscription would bring the South closer to a republican form of government based on the consent of the governed, as called for by the Constitution and the Declaration of Independence. Strong measures that had been called for in order to forestall a return to the defunct Confederacy increasingly seemed out of place, and the role of the United States Army in controlling politics in the states was troublesome. Increasingly, historian Mark Summers states, "the disfranchisers had to fall back on the contention that denial of the vote was meant as punishment, and a lifelong punishment at that... Month by month, the unrepublican character of the regime looked more glaring. ''

During the Civil War, many in the North believed that fighting for the Union was a noble cause -- for the preservation of the Union and the end of slavery. After the war ended, with the North victorious, the fear among Radicals was that President Johnson too quickly assumed that slavery and Confederate nationalism were dead and that the southern states could return. The Radicals sought out a candidate for President who represented their viewpoint. In 1868, the Republicans unanimously chose Ulysses S. Grant as their Presidential candidate. Grant won favor with the Radicals after he allowed Edwin Stanton, a Radical, to be reinstated as Secretary of War. As early as 1862, during the Civil War, Grant had appointed the Ohio military chaplain John Eaton to protect and gradually incorporate refugee slaves in west Tennessee and northern Mississippi into the Union war effort and pay them for their labor. It was the beginning of his vision for the Freedmen 's Bureau. Grant opposed President Johnson by supporting the Reconstruction Acts passed by the Radicals.

Immediately upon his inauguration in 1869, Grant bolstered Reconstruction by prodding Congress to readmit Virginia, Mississippi, and Texas into the Union, while ensuring their constitutions protected every citizen 's voting rights. Grant met with prominent black leaders for consultation, and signed a bill into law that guaranteed equal rights to both blacks and whites in Washington D.C. In Grant 's two terms he strengthened Washington 's legal capabilities to directly intervene to protect citizenship rights even if the states ignored the problem. He worked with Congress to create the Department of Justice and the Office of Solicitor General, led by Attorney General Amos Akerman and the first Solicitor General, Benjamin Bristow. Congress passed three powerful Enforcement Acts in 1870 -- 71. These were criminal codes which protected the freedmen 's right to vote, to hold office, to serve on juries, and to receive equal protection of the laws. Most importantly, they authorized the federal government to intervene when states did not act. Grant 's new Justice Department prosecuted thousands of Klansmen under the tough new laws. Grant sent federal troops to nine South Carolina counties to suppress Klan violence in 1871.

Grant supported passage of the Fifteenth Amendment, which stated that no state could deny a man the right to vote on the basis of race. Congress passed the Civil Rights Act of 1875, giving people access to public facilities regardless of race. To counter vote fraud in the Democratic stronghold of New York City, Grant sent in tens of thousands of armed, uniformed federal marshals and other election officials to regulate the 1870 and subsequent elections.
Democrats across the North then mobilized to defend their base and attacked Grant 's entire set of policies. On October 21, 1876, President Grant deployed troops to protect black and white Republican voters in Petersburg, Virginia. Grant 's support from Congress and the nation declined due to scandals within his administration and the political resurgence of the Democrats in the North and South. By 1870, most Republicans felt the war goals had been achieved, and they turned their attention to other issues such as economic policies.

On April 20, 1871, the U.S. Congress launched a 21 - member investigation committee on the status of the Southern Reconstruction states: North Carolina, South Carolina, Georgia, Mississippi, Alabama, and Florida. Members of Congress on the committee included Rep. Benjamin Butler, Sen. Zachariah Chandler, and Sen. Francis P. Blair. Subcommittee members traveled into the South to interview the people living in their respective states. Those interviewed included top - ranking officials, such as Wade Hampton, former South Carolina Gov. James L. Orr, and Nathan B. Forrest, a former Confederate general and alleged prominent Ku Klux Klan leader (Forrest denied being a member in his Congressional testimony). Other southerners interviewed included farmers, doctors, merchants, teachers, and clergymen. The committee heard numerous reports of white violence against blacks, while many whites denied Klan membership or knowledge of violent activities. The majority report by Republicans concluded that the government would not tolerate any Southern "conspiracy '' to violently resist Congressional Reconstruction. The committee completed its 13 - volume report in February 1872.

While Grant had been able to suppress the KKK through the Enforcement Acts, other paramilitary insurgents organized, including the White League in 1874, active in Louisiana, and the Red Shirts, with chapters active in Mississippi and the Carolinas. They used intimidation and outright attacks to run Republicans out of office and repress voting by blacks, leading to white Democrats regaining power by the elections of the mid - to - late 1870s.

During Reconstruction, Republicans took control of all Southern state governorships and state legislatures, except for Virginia. The Republican coalition elected numerous African Americans to local, state, and national offices; though they did not dominate any electoral offices, black men as representatives voting in state and federal legislatures marked a drastic social change. At the beginning of 1867, no African American in the South held political office, but within three or four years "about 15 percent of the officeholders in the South were black -- a larger proportion than in 1990. '' Most of those offices were at the local level. In 1860, blacks constituted the majority of the population in Mississippi and South Carolina, 47 % in Louisiana, 45 % in Alabama, and 44 % in Georgia and Florida, so their political influence was still far less than their percentage of the population.

About 137 black officeholders had lived outside the South before the Civil War. Some who had escaped from slavery to the North and had become educated returned to help the South advance in the postwar era. Others were free blacks before the war, who had achieved education and positions of leadership elsewhere. Other African - American men elected to office were already leaders in their communities, including a number of preachers.
As happened in white communities, not all leadership depended upon wealth and literacy. There were few African Americans elected or appointed to national office. African Americans voted for both white and black candidates. The Fifteenth Amendment to the United States Constitution guaranteed only that voting could not be restricted on the basis of race, color or previous condition of servitude. From 1868 on, campaigns and elections were surrounded by violence as white insurgents and paramilitaries tried to suppress the black vote, and fraud was rampant. Many Congressional elections in the South were contested. Even states with majority African - American populations often elected only one or two African - American representatives to Congress. Exceptions included South Carolina; at the end of Reconstruction, four of its five Congressmen were African American.

Freedmen were very active in forming their own churches, mostly Baptist or Methodist, and giving their ministers both moral and political leadership roles. In a process of self - segregation, practically all blacks left white churches, so that few racially integrated congregations remained (apart from some Catholic churches in Louisiana). They started many new black Baptist churches and, soon, new black state associations. Four main groups competed with each other across the South to form new Methodist churches composed of freedmen. They were the African Methodist Episcopal Church and the African Methodist Episcopal Zion Church, both independent black denominations founded in Philadelphia and New York, respectively; the Colored Methodist Episcopal Church (which was sponsored by the white Methodist Episcopal Church, South); and the well - funded Methodist Episcopal Church (Northern white Methodists). The Methodist Church had split before the war due to disagreements about slavery. By 1871, the Northern Methodists had 88,000 black members in the South and had opened numerous schools for them.

Blacks in the South made up a core element of the Republican Party. Their ministers had powerful political roles that were distinctive, since they did not depend on white support, in contrast to teachers, politicians, businessmen, and tenant farmers. Acting on the principle as stated by Charles H. Pearce, an AME minister in Florida -- "A man in this State can not do his whole duty as a minister except he looks out for the political interests of his people '' -- more than 100 black ministers were elected to state legislatures during Reconstruction, as well as several to Congress and one, Hiram Revels, to the U.S. Senate.

In a highly controversial action during the war, the Northern Methodists had used the Army to seize control of Methodist churches in large cities, over the vehement protests of the Southern Methodists; historian Ralph Morrow has documented these seizures. Across the North, most evangelical denominations, especially the Methodists, Congregationalists and Presbyterians, as well as the Quakers, strongly supported Radical policies. The focus on social problems paved the way for the Social Gospel movement. Matthew Simpson, a Methodist bishop, played a leading role in mobilizing the Northern Methodists for the cause. His biographer calls him the "High Priest of the Radical Republicans. '' The Methodist Ministers Association of Boston, meeting two weeks after Lincoln 's assassination, called for a hard line against the Confederate leadership.

The denominations all sent missionaries, teachers and activists to the South to help the freedmen.
Only the Methodists made many converts, however. Activists sponsored by the Northern Methodist Church played a major role in the Freedmen 's Bureau, notably in such key educational roles as the Bureau 's state superintendent or assistant superintendent of education for Virginia, Florida, Alabama, and South Carolina.

Many Americans interpreted great events in religious terms. Historian Wilson Fallin contrasts the interpretations of the Civil War and Reconstruction in white versus black Baptist sermons in Alabama: white Baptists and black Baptists drew sharply opposed meanings from the war, emancipation, and Reconstruction.

Historian James D. Anderson argues that the freed slaves were the first Southerners "to campaign for universal, state - supported public education. '' Blacks in the Republican coalition played a critical role in establishing the principle in state constitutions for the first time during congressional Reconstruction. Some slaves had learned to read from white playmates or colleagues before formal education was allowed by law; African Americans started "native schools '' before the end of the war; Sabbath schools were another widespread means that freedmen developed to teach literacy. When they gained suffrage, black politicians took this commitment to public education to state constitutional conventions.

The Republicans created a system of public schools, which were segregated by race everywhere except New Orleans. Generally, elementary and a few secondary schools were built in most cities, and occasionally in the countryside, but the South had few cities. The rural areas faced many difficulties opening and maintaining public schools. In the country, the public school was often a one - room affair that attracted about half the younger children. The teachers were poorly paid, and their pay was often in arrears. Conservatives contended the rural schools were too expensive and unnecessary for a region where the vast majority of people were cotton or tobacco farmers; they had no vision of a better future for the region 's residents. One historian found that the schools were less effective than they might have been because "poverty, the inability of the states to collect taxes, and inefficiency and corruption in many places prevented successful operation of the schools. '' After Reconstruction ended, and the whites disfranchised the blacks and imposed Jim Crow, they consistently underfunded black institutions, including the schools.

After the war, northern missionaries founded numerous private academies and colleges for freedmen across the South. In addition, every state founded state colleges for freedmen, such as Alcorn State University in Mississippi. The normal schools and state colleges produced generations of teachers who were integral to the education of African - American children under the segregated system. By the end of the century, the majority of African Americans were literate.

In the late 19th century, the federal government established land grant legislation to provide funding for higher education across the United States. Learning that blacks were excluded from land grant colleges in the South, in 1890 the federal government insisted that southern states establish black state institutions as land grant colleges to provide for black higher education, in order to continue to receive funds for their already established white schools. Some states classified their black state colleges as land grant institutions.
Former Congressman John Roy Lynch wrote, "there are very many liberal, fair - minded and influential Democrats in the State (Mississippi) who are strongly in favor of having the State provide for the liberal education of both races. ''

Every Southern state subsidized railroads, which modernizers believed could haul the South out of isolation and poverty. Millions of dollars in bonds and subsidies were fraudulently pocketed. One ring in North Carolina spent $200,000 on bribing the legislature and obtained millions in state money for its railroads. Instead of building new track, however, it used the funds to speculate in bonds, reward friends with extravagant fees, and enjoy lavish trips to Europe. Taxes were quadrupled across the South to pay off the railroad bonds and the school costs. There were complaints among taxpayers because taxes had historically been low, as the planter elite was not committed to public infrastructure or public education. Taxes historically had been much lower in the South than in the North, reflecting the lack of government investment by the communities. Nevertheless, thousands of miles of lines were built as the Southern system expanded from 11,000 miles (17,700 km) in 1870 to 29,000 miles (46,700 km) in 1890. The lines were owned and directed overwhelmingly by Northerners. Railroads helped create a mechanically skilled group of craftsmen and broke the isolation of much of the region. Passengers were few, however, and apart from hauling the cotton crop when it was harvested, there was little freight traffic. As Franklin explains, "numerous railroads fed at the public trough by bribing legislators... and through the use and misuse of state funds. '' The effect, according to one businessman, "was to drive capital from the State, paralyze industry, and demoralize labor. ''

Reconstruction changed the means of taxation in the South. In the U.S., from the earliest days until today, a major source of state revenue has been the property tax. In the South, wealthy landowners were allowed to self - assess the value of their own land. These fraudulent self - assessments rendered the valuations almost worthless, and pre-war property tax collections were lacking due to the misrepresentation of property values. State revenues came from fees and from sales taxes on slave auctions. Some states assessed property owners by a combination of land value and a capitation tax, a tax on each worker employed. This tax was often assessed in a way to discourage a free labor market: a slave was assessed at 75 cents, while a free white was assessed at a dollar or more, and a free African American at $3 or more. Some revenue also came from poll taxes. These taxes were more than poor people could pay, with the designed and inevitable consequence that they did not vote.

During Reconstruction, the state legislatures mobilized to provide for public needs more than previous governments had: establishing public schools and investing in infrastructure, as well as charitable institutions such as hospitals and asylums. They needed to increase taxes, which were abnormally low. The planters had provided privately for their own needs. There was some fraudulent spending in the postwar years; a collapse in state credit because of huge deficits forced the states to increase property tax rates. In places, the rate went up to ten times higher -- despite the poverty of the region. The planters had not invested in infrastructure, and much had been destroyed during the war.
In part, the new tax system was designed to force owners of large plantations with huge tracts of uncultivated land either to sell or to have it confiscated for failure to pay taxes. The taxes would serve as a market - based system for redistributing the land to the landless freedmen and white poor. Mississippi, for instance, was mostly frontier, with 90 % of the bottomlands in the interior undeveloped. The following table shows property tax rates for South Carolina and Mississippi. Note that many local town and county assessments effectively doubled the tax rates reported in the table. These taxes were still levied upon the landowners ' own sworn testimony as to the value of their land, which remained the dubious and exploitable system used by wealthy landholders in the South well into the 20th century.

Called upon to pay taxes on their property, essentially for the first time, angry plantation owners revolted. The conservatives shifted their focus away from race to taxes. Former Congressman John R. Lynch, a black Republican leader from Mississippi, later wrote about the tax controversy.

While the "Scalawag '' element of Republican whites supported measures for black civil rights, the conservative whites typically opposed these measures. Some supported armed attacks to suppress black power. They self - consciously defended their own actions within the framework of an Anglo - American discourse of resistance against tyrannical government, and, says Steedman, they broadly succeeded in convincing many fellow white citizens. The opponents of Reconstruction formed state political parties, affiliated with the national Democratic Party and often named the "Conservative Party ''. They supported or tolerated violent paramilitary groups, such as the White League in Louisiana and the Red Shirts in Mississippi and the Carolinas, that assassinated and intimidated both black and white Republican leaders at election time. Historian George C. Rable called such groups the "military arm of the Democratic Party. ''

By the mid-1870s, the Conservatives and Democrats had aligned with the national Democratic Party, which enthusiastically supported their cause even as the national Republican Party was losing interest in Southern affairs. Historian Walter Lynwood Fleming, associated with the early 20th - century Dunning School, describes the mounting anger of Southern whites. Often, these white Southerners identified as the "Conservative Party '' or the "Democratic and Conservative Party '' in order to distinguish themselves from the national Democratic Party and to obtain support from former Whigs. These parties sent delegates to the 1868 Democratic National Convention and abandoned their separate names by 1873 or 1874.

Most (white) members of both the planter / business class and the common farmer class of the South opposed black power, carpetbaggers and military rule, and sought white supremacy. Democrats nominated some blacks for political office and tried to win other blacks away from the Republican side. When these attempts to combine with the blacks failed, the planters joined the common farmers in simply trying to displace the Republican governments. The planters and their business allies dominated the self - styled "conservative '' coalition that finally took control in the South. They were paternalistic toward the blacks but feared they would use power to raise taxes and slow business development. Fleming described the first results of the insurgent movement as "good, '' and the later ones as "both good and bad. ''
According to Fleming (1907), the KKK "quieted the Negroes, made life and property safer, gave protection to women, stopped burnings, forced the Radical leaders to be more moderate, made the Negroes work better, drove the worst of the Radical leaders from the country and started the whites on the way to gain political supremacy. '' The evil result, Fleming said, was that lawless elements "made use of the organization as a cloak to cover their misdeeds... the lynching habits of today (1907) are largely due to conditions, social and legal, growing out of Reconstruction. ''

Historians have noted that the peak of lynchings took place near the turn of the century, decades after Reconstruction ended, as whites were imposing Jim Crow laws and passing new state constitutions that disenfranchised the blacks. The lynchings were used for intimidation and social control, with a frequency more closely associated with economic stresses and the settlement of sharecropper accounts at the end of the season than with any other cause. Ellis Paxson Oberholtzer, a northern scholar, offered an explanation of the white Southern reaction in 1917.

As Reconstruction continued, whites accompanied elections with increased violence in an attempt to run Republicans out of office and suppress black voting. The victims of this violence were overwhelmingly African American, as in the Colfax Massacre of 1873. After federal suppression of the Klan in the early 1870s, white insurgent groups tried to avoid open conflict with federal forces. In 1874, in the Battle of Liberty Place, the White League entered New Orleans with 5,000 members and defeated the police and militia, occupying federal offices for three days in an attempt to overturn the disputed government of William Kellogg; it retreated before federal troops reached the city. None of its members were prosecuted. The insurgents ' election - time tactics included violent intimidation of African - American and Republican voters prior to elections, while avoiding conflict with the U.S. Army or the state militias, and then withdrawing completely on election day. Conservative reaction continued in both the North and the South; the "white liners '' movement to elect candidates dedicated to white supremacy reached as far as Ohio in 1875.

As early as 1868, Supreme Court Chief Justice Salmon P. Chase, a leading Radical during the war, had concluded that military government in the South should be kept to a minimum. By 1872, President Ulysses S. Grant had alienated large numbers of leading Republicans, including many Radicals, by the corruption of his administration and his use of federal soldiers to prop up Radical state regimes in the South. The opponents, called "Liberal Republicans '', included founders of the party who expressed dismay that the party had succumbed to corruption. They were further wearied by the continued insurgent violence of whites against blacks in the South, especially around every election cycle, which demonstrated that the war was not over and that changes were fragile. Leaders included editors of some of the nation 's most powerful newspapers. Charles Sumner, embittered by the corruption of the Grant administration, joined the new party, which nominated editor Horace Greeley. The badly organized Democratic Party also supported Greeley. Grant made up for the defections with new gains among Union veterans and with strong support from the "Stalwart '' faction of his party (which depended on his patronage) and the Southern Republican parties. Grant won with 55.6 % of the vote to Greeley 's 43.8 %.
The Liberal Republican party vanished, and many former supporters -- even former abolitionists -- abandoned the cause of Reconstruction.

In the South, political and racial tensions built up inside the Republican Party as it was attacked by the Democrats. In 1868, Georgia Democrats, with support from some Republicans, expelled all 28 black Republican members from the state house, arguing that blacks were eligible to vote but not to hold office. In most states, the more conservative scalawags fought for control with the more radical carpetbaggers and their black allies. Most of the 430 Republican newspapers in the South were edited by scalawags -- only 20 percent were edited by carpetbaggers. White businessmen generally boycotted Republican papers, which survived through government patronage. Nevertheless, in the increasingly bitter battles inside the Republican Party, the scalawags usually lost; many of the disgruntled losers switched over to the conservative or Democratic side. In Mississippi, the conservative faction led by scalawag James Lusk Alcorn was decisively defeated by the radical faction led by carpetbagger Adelbert Ames. The party lost support steadily as many scalawags left it; few recruits were acquired. The most bitter contest took place inside the Republican Party in Arkansas, where the two sides armed their forces and confronted each other in the streets; no actual combat took place in the Brooks -- Baxter War. The carpetbagger faction led by Elisha Baxter finally prevailed when the White House intervened, but both sides were badly weakened, and the Democrats soon came to power.

Meanwhile, in state after state the freedmen were demanding a bigger share of the offices and patronage, squeezing out carpetbagger allies but never commanding numbers equivalent to their population proportion. By the mid-1870s, "The hard realities of Southern political life had taught the lesson that black constituents needed to be represented by black officials. '' The financial depression increased the pressure on Reconstruction governments, undermining their progress. Finally, some of the more prosperous freedmen were joining the Democrats, angered at the failure of the Republicans to help them acquire land. The South was "sparsely settled ''; only ten percent of Louisiana was cultivated, and ninety percent of Mississippi bottomland was undeveloped in areas away from the riverfronts, but freedmen often did not have the stake to get started. They hoped government would help them acquire land which they would work. Only South Carolina created any land redistribution, establishing a land commission and resettling about 14,000 freedmen families and some poor whites on land purchased by the state.

Although historians such as W.E.B. Du Bois celebrated a cross-racial coalition of poor whites and blacks, such coalitions rarely formed in these years. Writing in 1915, former Congressman Lynch, recalling his experience as a black leader in Mississippi, reported that poor whites resented the job competition from freedmen.

By 1870, the Democratic -- Conservative leadership across the South decided it had to end its opposition to Reconstruction and black suffrage to survive and move on to new issues. The Grant administration had proven by its crackdown on the Ku Klux Klan that it would use as much federal power as necessary to suppress open anti-black violence. Democrats in the North concurred with these Southern Democrats.
They wanted to fight the Republican Party on economic grounds rather than race. The New Departure offered the chance for a clean slate without having to re-fight the Civil War every election. Furthermore, many wealthy Southern landowners thought they could control part of the newly enfranchised black electorate to their own advantage. Not all Democrats agreed; an insurgent element continued to resist Reconstruction no matter what. Eventually, a group called "Redeemers '' took control of the party in the Southern states. They formed coalitions with conservative Republicans, including scalawags and carpetbaggers, emphasizing the need for economic modernization. Railroad building was seen as a panacea, since northern capital was needed. The new tactics were a success in Virginia, where William Mahone built a winning coalition. In Tennessee, the Redeemers formed a coalition with Republican governor DeWitt Senter.

Across the South, some Democrats switched from the race issue to taxes and corruption, charging that Republican governments were corrupt and inefficient. With the continuing decrease in cotton prices, taxes squeezed cash - poor farmers who rarely saw $20 in currency a year but had to pay taxes in currency or lose their farms. But major planters, who had never paid taxes before, often recovered their property even after confiscation. In North Carolina, Republican Governor William Woods Holden used state troops against the Klan, but the prisoners were released by federal judges. Holden became the first governor in American history to be impeached and removed from office. Republican political disputes in Georgia split the party and enabled the Redeemers to take over.

In the North, a live - and - let - live attitude made elections more like a sporting contest. But in the Deep South, many white citizens had not reconciled with the defeat of the war or the granting of citizenship to freedmen. As an Alabama scalawag explained, "Our contest here is for life, for the right to earn our bread... for a decent and respectful consideration as human beings and members of society ''.

The Panic of 1873 (a depression) hit the Southern economy hard and disillusioned many Republicans who had gambled that railroads would pull the South out of its poverty. The price of cotton fell by half; many small landowners, local merchants and cotton factors (wholesalers) went bankrupt. Sharecropping for black and white farmers became more common as a way to spread the risk of owning land. The old abolitionist element in the North was aging away, or had lost interest, and was not replenished. Many carpetbaggers returned to the North or joined the Redeemers. Blacks had an increased voice in the Republican Party, but across the South the party was divided by internal bickering and was rapidly losing its cohesion. Many local black leaders started emphasizing individual economic progress in cooperation with white elites, rather than racial political progress in opposition to them, a conservative attitude that foreshadowed Booker T. Washington.

Nationally, President Grant was blamed for the depression; the Republican Party lost 96 seats in all parts of the country in the 1874 elections. The Bourbon Democrats took control of the House and were confident of electing Samuel J. Tilden president in 1876. President Grant was not running for re-election and seemed to be losing interest in the South.
State after state fell to the Redeemers; by 1873 only four remained in Republican hands -- Arkansas, Louisiana, Mississippi and South Carolina -- and Arkansas then fell after the violent Brooks -- Baxter War in 1874 ripped apart the Republican party there. In the lower South, violence increased as new insurgent groups arose, including the Red Shirts in Mississippi and the Carolinas, and the White League in Louisiana. The disputed election in Louisiana in 1872 found both the Republican and Democratic candidates holding inaugural balls while returns were reviewed. Both certified their own slates for local parish offices in many places, causing local tensions to rise. Finally, Federal support helped certify the Republican as governor. In rural Grant Parish in the Red River Valley, freedmen fearing a Democratic attempt to take over the parish government reinforced defenses at the small Colfax courthouse in late March. White militias gathered from the area a few miles outside the settlement. Rumors and fears abounded on both sides. William Ward, an African - American Union veteran and militia captain, mustered his company in Colfax and went to the courthouse. On Easter Sunday, April 13, 1873, the whites attacked the defenders at the courthouse. After the defenders offered to surrender, one of the white leaders was shot in confused circumstances; the shooting was a catalyst to mayhem. In the end, three whites died and 120 -- 150 blacks were killed, some 50 that evening while being held as prisoners. The disproportionate number of black to white fatalities and documentation of brutalized bodies are why contemporary historians call it the Colfax Massacre rather than the Colfax Riot, as it was known locally. It marked the beginning of heightened insurgency and attacks on Republican officeholders and freedmen in Louisiana and other Deep South states. In Louisiana, Judge T.S. Crawford and District Attorney P.H. Harris of the 12th Judicial District were shot off their horses and killed from ambush on October 8, 1873, while going to court. One widow wrote to the Department of Justice that her husband was killed because he was a Union man and "... of the efforts made to screen those who committed a crime... '' Political violence was endemic in Louisiana. In 1874 the white militias coalesced into paramilitary organizations such as the White League, first in parishes of the Red River Valley. The new organization operated openly and had political goals: the violent overthrow of Republican rule and suppression of black voting. White League chapters soon rose in many rural parishes, receiving financing for advanced weaponry from wealthy men. In the Coushatta Massacre in 1874, the White League assassinated six white Republican officeholders and five to twenty black witnesses outside Coushatta, Red River Parish. Four of the white men were related to the Republican representative of the parish, who was married to a local woman; three were native to the region. Later in 1874 the White League mounted a serious attempt to unseat the Republican governor of Louisiana, in a dispute that had simmered since the 1872 election. It brought 5,000 troops to New Orleans to engage and overwhelm forces of the Metropolitan Police and state militia, aiming to turn Republican Governor William P. Kellogg out of office and seat John McEnery. The White League took over and held the state house and city hall, but retreated before the arrival of reinforcing Federal troops.
Kellogg had asked for reinforcements before, and Grant finally responded, sending additional troops to try to quell violence throughout plantation areas of the Red River Valley, although 2,000 troops were already in the state. Similarly, the Red Shirts, another paramilitary group, arose in 1875 in Mississippi and the Carolinas. Like the White League and the White Liner rifle clubs, to which 20,000 men belonged in North Carolina alone, these groups operated as a "military arm of the Democratic Party '' to restore white supremacy. Democrats and many northern Republicans agreed that Confederate nationalism and slavery were dead -- the war goals were achieved -- and that further federal military interference was an undemocratic violation of historic Republican values. The victory of Rutherford Hayes in the hotly contested Ohio gubernatorial election of 1875 indicated that his "let alone '' policy toward the South would become Republican policy, as happened when he won the 1876 Republican nomination for president. An explosion of violence accompanied the campaign for Mississippi 's 1875 election, in which Red Shirts and Democratic rifle clubs, operating in the open, threatened or shot enough Republicans to decide the election for the Democrats. Hundreds of black men were killed. Republican Governor Adelbert Ames asked Grant for federal troops to fight back; Grant initially refused, saying public opinion was "tired out '' of the perpetual troubles in the South. Ames fled the state as the Democrats took over Mississippi. The campaigns and elections of 1876 were marked by additional murders and attacks on Republicans in Louisiana, North and South Carolina, and Florida. In South Carolina the campaign season of 1876 was marked by murderous outbreaks and fraud against freedmen. Red Shirts paraded with arms behind Democratic candidates; they killed blacks in the Hamburg and Ellenton, South Carolina, massacres; and one historian estimated that 150 blacks were killed across South Carolina in the weeks before the 1876 election. Red Shirts prevented almost all black voting in two majority - black counties. The Red Shirts were also active in North Carolina. Reconstruction continued in South Carolina, Louisiana and Florida until 1877. The elections of 1876 were accompanied by heightened violence across the Deep South. A combination of ballot stuffing and intimidation suppressed the black vote even in majority - black counties. The White League was active in Louisiana. After Republican Rutherford Hayes won the disputed 1876 presidential election, the national Compromise of 1877 was reached. The white Democrats in the South agreed to accept Hayes ' victory if he withdrew the last Federal troops. By this point, the North was weary of insurgency. White Democrats controlled most of the Southern legislatures, and armed militias controlled small towns and rural areas. Blacks considered Reconstruction a failure because the Federal government withdrew from enforcing their ability to exercise their rights as citizens. On January 29, 1877, President Grant signed the Electoral Commission Act, which set up a 15 - member commission of 8 Republicans and 7 Democrats to settle the disputed 1876 election. The Electoral Commission awarded Rutherford B. Hayes the electoral votes he needed; Congress certified that he had won by one electoral vote. The Democrats had little leverage -- they could delay Hayes ' election, but they could not put their man (Tilden) in the White House.
However, they agreed not to block Hayes ' inauguration based on a "back room '' deal. Key to this deal was the understanding that federal troops would no longer interfere in southern politics despite substantial election - associated violence against blacks. The Southern states indicated that they would protect the lives of African Americans, a promise that proved far from reliable. Hayes ' friends also let it be known that he would promote Federal aid for internal improvements, including help for a railroad in Texas (which never materialized), and would name a Southerner to his cabinet (which he did). With the end of the political role of Northern troops, the President had no means to enforce Reconstruction; this "back room '' deal thus signaled the end of American Reconstruction. After assuming office on March 4, 1877, President Hayes removed troops from the capitals of the remaining Reconstruction states, Louisiana and South Carolina, allowing the Redeemers to have full control of these states. President Grant had already removed troops from Florida before Hayes was inaugurated, and troops from the other Reconstruction states had long since been withdrawn. Hayes appointed David M. Key of Tennessee, a Southern Democrat, to the position of Postmaster General. By 1879, thousands of African - American "Exodusters '' packed up and headed to new opportunities in Kansas. The Democrats gained control of the Senate, and had complete control of Congress, having taken over the House in 1875. Hayes vetoed Democratic bills that would have repealed the Republican Enforcement Acts; however, with the military underfunded, he could not adequately enforce these laws. Blacks remained involved in Southern politics, particularly in Virginia, which was run by the biracial Readjuster Party. Numerous blacks were elected to local office through the 1880s, and in the 1890s in some states, biracial coalitions of Populists and Republicans briefly held control of state legislatures. In the last decade of the 19th century, southern states elected five black U.S. Congressmen before disfranchising constitutions were passed throughout the former Confederacy. The interpretation of Reconstruction has been a topic of controversy. Nearly all historians hold that Reconstruction ended in failure, but for very different reasons. The first generation of Northern historians believed that the former Confederates were traitors and that Johnson was their ally who threatened to undo the Union 's constitutional achievements. By the 1880s, however, Northern historians argued that Johnson and his allies were not traitors but had blundered badly in rejecting the 14th Amendment and setting the stage for Radical Reconstruction. The black leader Booker T. Washington, who grew up in West Virginia during Reconstruction, concluded later that "the Reconstruction experiment in racial democracy failed because it began at the wrong end, emphasizing political means and civil rights acts rather than economic means and self - determination. '' His solution was to concentrate on building the economic infrastructure of the black community, in part through his leadership of the southern Tuskegee Institute. The Dunning School of scholars, trained in the history department of Columbia University under Professor William A. Dunning, analyzed Reconstruction as a failure after 1866 for different reasons.
They claimed that Congress took freedoms and rights from qualified whites and gave them to unqualified blacks who were being duped by corrupt "carpetbaggers and scalawags. '' As T. Harry Williams, who was a sharp critic of the Dunning school, notes, the Dunningites portrayed the era in stark terms. In the 1930s, historical revisionism became popular among scholars. As disciples of Charles A. Beard, revisionists focused on economics, downplaying politics and constitutional issues. The central figure was a young scholar at the University of Wisconsin, Howard K. Beale, who in his PhD dissertation, finished in 1924, developed a complex new interpretation of Reconstruction. The Dunning School had portrayed the freedmen as mere pawns in the hands of the carpetbaggers. Beale argued that the carpetbaggers themselves were pawns in the hands of northern industrialists, who were the real villains of Reconstruction. These industrialists had taken control of the nation during the Civil War, and set up high tariffs to protect their profits, as well as a lucrative national banking system and a railroad network fueled by government subsidies and secret payoffs. The return to power of the southern whites would seriously threaten all their gains, and so the ex-Confederates had to be kept out of power. The tool of the industrialists was the Northern Republican Party, with sufficient Southern support secured through carpetbaggers and black voters. The rhetoric of civil rights for blacks, and the dream of equality, was in this view designed to fool idealistic voters. Beale called it "claptrap, '' arguing, "Constitutional discussions of the rights of the negro, the status of Southern states, the legal position of ex-rebels, and the powers of Congress and the president determined nothing. They were pure sham. '' President Andrew Johnson had tried, and failed, to stop the juggernaut of the industrialists. The Dunning school had praised Johnson for upholding the rights of white men in the South and endorsing white supremacy. Beale was not a racist, and indeed was one of the most vigorous historians working for black civil rights in the 1930s and 1940s. In his view, Johnson was a hero not for his racism, but rather for his forlorn battle against the industrialists. Charles A. Beard and Mary Beard had already published The Rise of American Civilization (1927) three years before Beale, and had given very wide publicity to a similar theme. The Beard - Beale interpretation of Reconstruction became known as "revisionism, '' and replaced the Dunning school for most historians until the 1950s. The Beardian interpretation of the causes of the Civil War downplayed slavery, abolitionism, and issues of morality. It ignored constitutional issues of states ' rights and even ignored American nationalism as the force that finally led to victory in the war. Indeed, the ferocious combat itself was passed over as merely an ephemeral event. Much more important was the calculus of class conflict: as the Beards explained in The Rise of American Civilization, the Civil War was really a "Second American Revolution '', a class struggle in which the industrial North displaced the Southern planter aristocracy. The Beards were especially interested in the Reconstruction era, as the industrialists of the Northeast and the farmers of the West cashed in on their great victory over the southern aristocracy.
Historian Richard Hofstadter paraphrased the Beards ' argument about what the industrialists gained in victory, and Wisconsin historian William Hesseltine added the point that the Northeastern businessmen wanted to control the Southern economy directly, which they did through ownership of the railroads. The Beard - Beale interpretation of monolithic Northern industrialists fell apart in the 1950s when it was closely examined by numerous historians, including Robert P. Sharkey, Irwin Unger, and Stanley Coben. The younger scholars conclusively demonstrated that there was no unified economic policy on the part of the dominant Republican Party. Some wanted high tariffs and some low. Some wanted greenbacks and others wanted gold. There was no conspiracy to use Reconstruction to impose any such unified economic policy on the nation. Northern businessmen were widely divergent on monetary and tariff policy, and seldom paid attention to Reconstruction issues. Furthermore, the rhetoric on behalf of the rights of the freedmen was not claptrap but deeply held and very serious political philosophy. The black scholar W.E.B. Du Bois, in his Black Reconstruction in America, 1860 -- 1880, published in 1935, compared results across the states to show the achievements of the Reconstruction legislatures and to refute claims about wholesale African - American control of governments. He showed black contributions, such as the establishment of universal public education, charitable and social institutions, and universal suffrage, as important results, and he noted their collaboration with whites. He also pointed out that whites benefited most from the financial deals made, and he put excesses in the perspective of the war 's aftermath. He noted that despite complaints, several states kept their Reconstruction constitutions for nearly a quarter of a century. Despite receiving favorable reviews, his work was largely ignored by white historians of his time. In the 1960s neoabolitionist historians emerged, led by John Hope Franklin, Kenneth Stampp, Leon Litwack, and Eric Foner. Influenced by the Civil Rights Movement, they rejected the Dunning school and found a great deal to praise in Radical Reconstruction. Foner, the primary advocate of this view, argued that it was never truly completed, and that a "Second Reconstruction '' was needed in the late 20th century to complete the goal of full equality for African Americans. The neo-abolitionists followed the revisionists in minimizing the corruption and waste created by Republican state governments, saying it was no worse than Boss Tweed 's ring in New York City. Instead, they emphasized that suppression of the rights of African Americans was a worse scandal and a grave corruption of America 's republican ideals. They argued that the tragedy of Reconstruction was not that it failed because blacks were incapable of governing -- especially as they did not dominate any state government -- but that it failed because whites raised an insurgent movement to restore white supremacy. White elite - dominated state legislatures passed disfranchising constitutions from 1890 to 1908 that effectively barred most blacks and many poor whites from voting. This disfranchisement affected millions of people for decades into the 20th century, and closed African Americans and poor whites out of the political process in the South. Re-establishment of white supremacy meant that within a decade African Americans were excluded from virtually all local, state, and federal governance in all states of the South.
Lack of representation meant that they were treated as second - class citizens, with schools and services consistently underfunded in segregated societies, no representation on juries or in law enforcement, and bias in other legislation. It was not until the Civil Rights Movement and the passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965 that segregation was outlawed and suffrage restored, under what is sometimes referred to as the "Second Reconstruction. '' In 1990 Eric Foner concluded that from the black point of view, "Reconstruction must be judged a failure. '' Foner stated that Reconstruction was "a noble if flawed experiment, the first attempt to introduce a genuine inter-racial democracy in the United States ''. According to him, the many factors contributing to the failure included: lack of a permanent federal agency specifically designed for the enforcement of civil rights; the Morrison R. Waite Supreme Court decisions that dismantled previous congressional civil rights legislation; and the economic re-establishment of conservative white planters in the South by 1877. Historian William McFeely explained that although the constitutional amendments and civil rights legislation were on their own merits remarkable achievements, no permanent government agency whose specific purpose was civil rights enforcement had been created. More recent work by Nina Silber, David W. Blight, Cecelia O'Leary, Laura Edwards, LeeAnn Whites, and Edward J. Blum has encouraged greater attention to race, religion, and issues of gender, while at the same time pushing the end of Reconstruction to the end of the 19th century, and monographs by Charles Reagan Wilson, Gaines Foster, W. Scott Poole, and Bruce Baker have offered new views of the Southern "Lost Cause ''. While 1877 is the usual date given for the end of Reconstruction, some historians extend the era to the 1890s. Economists and economic historians have different interpretations of the economic impact of race on the postwar Southern economy. In 1995, Robert Whaples took a random survey of 178 members of the Economic History Association, who studied American history in all time periods. He asked whether they wholly or partly accepted, or rejected, 40 propositions in the scholarly literature about American economic history. The greatest difference between economics PhDs and history PhDs came on questions of competition and race. For example, the proposition originally put forward by Robert Higgs, that "in the postbellum South economic competition among whites played an important part in protecting blacks from racial coercion '', was accepted in whole or part by 66 % of the economists, but by only 22 % of the historians. Whaples says this highlights "a recurring difference dividing historians and economists. The economists have more faith in the power of the competitive market. For example, they see the competitive market as protecting disfranchised blacks and are less likely to accept the idea that there was exploitation by merchant monopolists. '' Reconstruction is widely considered a failure, though the reason for this is a matter of controversy. Historian Donald R. Shaffer maintained that the gains during Reconstruction for African Americans were not entirely extinguished. The legalization of African - American marriages and families and the independence of black churches from white denominations were a source of strength during the Jim Crow era.
Reconstruction was never forgotten within the black community and remained a source of inspiration. The system of sharecropping granted blacks a considerable amount of freedom as compared to slavery. However, in 2014 historian Mark Summers argued that the "failure '' question should instead be judged from the viewpoint of the original war goals. The journalist Joel Chandler Harris, writing as "Joe Harris '' for the Atlanta Constitution (mostly after Reconstruction), tried to advance racial and sectional reconciliation in the late 19th century. He supported Henry Grady 's vision of a New South during Grady 's time as editor from 1880 to 1889. Harris wrote many editorials encouraging southern acceptance of the changed conditions and some Northern influence, although he also asserted his belief that change should proceed under white supremacy. In popular literature, two early 20th - century novels by Thomas Dixon -- The Clansman (1905) and The Leopard 's Spots: A Romance of the White Man 's Burden -- 1865 -- 1900 (1902) -- romanticized white resistance to Northern / black coercion, hailing vigilante action by the Ku Klux Klan. D.W. Griffith adapted Dixon 's The Clansman for the screen in his anti-Republican movie The Birth of a Nation (1915); it stimulated the formation of the 20th - century version of the KKK. Many other authors romanticized the benevolence of slavery and the élite world of the antebellum plantations in memoirs and histories published in the late nineteenth and early twentieth centuries, and the United Daughters of the Confederacy promoted influential works by women in these genres. Of much more lasting impact was Gone with the Wind: Margaret Mitchell 's best - selling, Pulitzer Prize - winning novel (1936) and the award - winning Hollywood blockbuster film (1939). In each case the second half focuses on Reconstruction in Atlanta. The book sold millions of copies nationwide; the film is regularly rebroadcast on television, and as of 2018 it remained the highest - grossing film of all time when adjusted for inflation.
when does ichigo get his powers back in the manga
Bleach (season 16) - wikipedia The sixteenth season of the Bleach anime series is known as the Lost Agent arc (死神 代行 消失 篇, Shinigami Daikō Shōshitsu hen). It is directed by Noriyuki Abe, and produced by TV Tokyo, Dentsu and Studio Pierrot. Based on Tite Kubo 's manga series, the season is set seventeen months after Ichigo Kurosaki lost his Soul Reaper powers; he meets a man known as Kūgo Ginjō, who proposes to help him recover them. The season aired from October 2011 to March 2012. Aniplex collected it in six DVD volumes between August 22, 2012 and January 23, 2013. The episodes of this season use three pieces of theme music: one opening and two endings. The opening theme is "Harukaze '' by Scandal. The first ending theme, "Re: pray '' by Aimer, is used from episodes 343 to 354, and the second ending theme, "MASK '' by Aqua Timez, is used from episodes 355 to 366.
difference between traveling salesman problem and vehicle routing problem
Vehicle Routing Problem - wikipedia The vehicle routing problem (VRP) is a combinatorial optimization and integer programming problem which asks "What is the optimal set of routes for a fleet of vehicles to traverse in order to deliver to a given set of customers? ''. It generalises the well - known travelling salesman problem (TSP). It first appeared in a paper by George Dantzig and John Ramser in 1959, in which the first algorithmic approach was described and applied to petrol deliveries. Often, the context is that of delivering goods located at a central depot to customers who have placed orders for such goods. The objective of the VRP is to minimize the total route cost. In 1964, Clarke and Wright improved on Dantzig and Ramser 's approach using an effective greedy heuristic called the savings algorithm. Determining the optimal solution to the VRP is NP - hard, so the size of problems that can be solved optimally using mathematical programming or combinatorial optimization may be limited. Therefore, commercial solvers tend to use heuristics due to the size and frequency of the real - world VRPs they need to solve. The VRP has many obvious applications in industry. In fact, the use of computer optimization programs can give savings of 5 % to a company, as transportation is usually a significant component of the cost of a product (around 10 %) -- indeed, the transportation sector makes up 10 % of the EU 's GDP. Consequently, any savings created by the VRP, even less than 5 %, are significant. The VRP concerns the service of a delivery company: goods are delivered from one or more depots, each of which has a given set of home vehicles operated by a set of drivers who can move on a given road network, to a set of customers. It asks for a determination of a set of routes, S (one route for each vehicle, which must start and finish at its own depot), such that all customers ' requirements and operational constraints are satisfied and the global transportation cost is minimized. This cost may be monetary, distance or otherwise. The road network can be described using a graph where the arcs are roads and vertices are junctions between them. The arcs may be directed or undirected due to the possible presence of one - way streets or different costs in each direction. Each arc has an associated cost, which is generally its length or travel time and may depend on vehicle type. To know the global cost of each route, the travel cost and the travel time between each customer and the depot must be known. To do this, the original graph is transformed into one where the vertices are the customers and the depot, and the arcs are the roads between them. The cost on each arc is the lowest cost between the two points on the original road network. This is easy to do, as shortest path problems are relatively easy to solve. This transforms the sparse original graph into a complete graph: for each pair of vertices i and j, there exists an arc (i, j) of the complete graph whose cost $c_{ij}$ is defined to be the cost of the shortest path from i to j, and the travel time $t_{ij}$ is the sum of the travel times of the arcs on that shortest path in the original road graph (a minimal sketch of this preprocessing step follows).
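The transformation just described is standard all - pairs shortest paths. Below is a minimal sketch in Python using Floyd - Warshall; the `roads` input format (a dict mapping directed arcs to road costs) is an assumption for illustration, not something fixed by the problem definition.

```python
import math

def complete_cost_graph(n, roads):
    """Turn a sparse road network into the complete cost matrix c[i][j].

    n     -- number of vertices (depot = 0, customers = 1..n-1)
    roads -- dict mapping directed arcs (i, j) to their road cost
    """
    # Initialise with direct road costs; infinity where no road exists.
    cost = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = 0.0
    for (i, j), c in roads.items():
        cost[i][j] = min(cost[i][j], c)  # keep the cheapest parallel road

    # Floyd-Warshall: relax every pair through every intermediate vertex k.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if cost[i][k] + cost[k][j] < cost[i][j]:
                    cost[i][j] = cost[i][k] + cost[k][j]
    return cost  # cost[i][j] is now the c_ij of the complete graph
```

The same relaxation run over travel times instead of costs yields the $t_{ij}$ matrix.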
Sometimes it is impossible to satisfy all of a customer 's demands, and in such cases solvers may reduce some customers ' demands or leave some customers unserved. To deal with these situations, a priority variable can be introduced for each customer, or penalties can be associated with partial or missing service for each customer. The objective function of a VRP can be very different depending on the particular application of the result, and several variations and specializations of the vehicle routing problem exist. Several software vendors have built software products to solve various VRP problems, and numerous articles are available with more detail on their research and results. Although the VRP is related to the Job Shop Scheduling Problem, the two problems are typically solved using different techniques.

There are three main approaches to modelling the VRP: vehicle flow formulations, commodity flow formulations, and set - partitioning formulations. The formulation of the TSP by Dantzig, Fulkerson and Johnson was extended to create the two - index vehicle flow formulation of the VRP:

$$ \min \sum_{i \in V} \sum_{j \in V} c_{ij} x_{ij} $$

subject to

$$
\begin{aligned}
& \sum_{i \in V} x_{ij} = 1 && \forall j \in V \setminus \{0\} && (1) \\
& \sum_{j \in V} x_{ij} = 1 && \forall i \in V \setminus \{0\} && (2) \\
& \sum_{i \in V} x_{i0} = K && && (3) \\
& \sum_{j \in V} x_{0j} = K && && (4) \\
& \sum_{i \notin S} \sum_{j \in S} x_{ij} \ge r(S) && \forall S \subseteq V \setminus \{0\},\ S \ne \emptyset && (5) \\
& x_{ij} \in \{0, 1\} && \forall i, j \in V && (6)
\end{aligned}
$$

Constraints (1) and (2) state that exactly one arc enters and exactly one leaves each vertex associated with a customer, respectively. Constraints (3) and (4) say that the number of vehicles leaving the depot is the same as the number entering. Constraints (5) are the capacity cut constraints, which impose that the routes must be connected and that the demand on each route must not exceed the vehicle capacity. Finally, constraints (6) are the integrality constraints. One arbitrary constraint among these $2|V|$ constraints is actually implied by the remaining $2|V| - 1$, so it can be removed. Each cut defined by a customer set $S$ is crossed by a number of arcs not smaller than $r(S)$, the minimum number of vehicles needed to serve set $S$. An alternative formulation may be obtained by transforming the capacity cut constraints into generalised subtour elimination constraints (GSECs),

$$ \sum_{i \in S} \sum_{j \in S} x_{ij} \le |S| - r(S), $$

which impose that at least $r(S)$ arcs leave each customer set $S$. GSECs and capacity cut constraints both come in exponential numbers, so it is practically impossible to solve the full linear relaxation directly. A possible way around this is to consider a limited subset of these constraints and add the rest only when needed. A different method again is to use a family of constraints of polynomial cardinality, known as the MTZ constraints; they were first proposed for the TSP and subsequently extended by Christofides, Mingozzi and Toth:

$$ u_j - u_i \ge d_j - C (1 - x_{ij}) \qquad \forall i, j \in V \setminus \{0\},\ i \ne j \ \text{ such that } d_i + d_j \le C, $$

$$ d_i \le u_i \le C \qquad \forall i \in V \setminus \{0\}, $$

where $u_i$, $i \in V \setminus \{0\}$, is an additional continuous variable representing the load of the vehicle after visiting customer $i$, and $d_i$ is the demand of customer $i$. When $x_{ij} = 0$, the first constraint is not binding, since $u_i \le C$ and $u_j \ge d_j$; when $x_{ij} = 1$, it imposes $u_j \ge u_i + d_j$. The MTZ constraints thus enforce both the connectivity and the capacity requirements. Vehicle flow formulations of this kind have been used extensively to model the basic VRP (CVRP) and the VRPB. However, their power is limited to these simple problems: they can only be used when the cost of the solution can be expressed as the sum of the arc costs, and they do not record which vehicle traverses each arc. Hence they can not be used for more complex models where the cost and / or feasibility is dependent on the order of the customers or on the vehicles used. (A minimal solver sketch based on this formulation appears at the end of this section.)

There are many manual methods for solving the vehicle routing problem. For example, optimum routing is a big efficiency issue for forklifts in large warehouses. Some of the manual methods used to decide upon the most efficient route are: largest gap, S - shape, aisle - by - aisle, combined, and combined +. While the combined + method is the most complex, and thus the hardest for lift truck operators to use, it is the most efficient routing method; still, the percentage difference between the manual optimum routing method and the true optimum route was on average 13 %.
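As a concrete illustration of the two - index formulation with MTZ constraints, here is a minimal sketch using the open - source PuLP modelling library (an assumption of convenience; any MILP modeller would do). It is a toy implementation of the formulation as stated above, not a production VRP solver.

```python
import pulp

def solve_cvrp(c, d, C, K):
    """c: cost matrix with vertex 0 as the depot; d: demands (d[0] == 0);
    C: vehicle capacity; K: fleet size. Returns the selected arcs."""
    n = len(c)
    V = range(n)
    prob = pulp.LpProblem("CVRP", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (V, V), cat="Binary")
    u = pulp.LpVariable.dicts("u", V, lowBound=0, upBound=C)

    # Objective: total arc cost.
    prob += pulp.lpSum(c[i][j] * x[i][j] for i in V for j in V if i != j)

    for i in V:
        prob += x[i][i] == 0  # no self-loops
    for j in V:
        if j != 0:
            prob += pulp.lpSum(x[i][j] for i in V if i != j) == 1  # (1) one arc in
            prob += pulp.lpSum(x[j][i] for i in V if i != j) == 1  # (2) one arc out
    prob += pulp.lpSum(x[i][0] for i in V if i != 0) == K  # (3) K vehicles return
    prob += pulp.lpSum(x[0][j] for j in V if j != 0) == K  # (4) K vehicles leave

    # MTZ load constraints replace the exponential capacity cuts (5):
    # u_j >= u_i + d_j whenever arc (i, j) is used.
    for i in V:
        if i != 0:
            prob += u[i] >= d[i]
    for i in V:
        for j in V:
            if i != j and 0 not in (i, j) and d[i] + d[j] <= C:
                prob += u[j] >= u[i] + d[j] - C * (1 - x[i][j])

    prob.solve()
    return [(i, j) for i in V for j in V
            if i != j and pulp.value(x[i][j]) > 0.5]
```

On a symmetric toy instance with a handful of customers this solves in well under a second with PuLP 's bundled CBC solver; for realistic sizes, the branch - and - cut and heuristic approaches described above are needed, exactly because the MTZ relaxation is weak.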
what is corporate insolvency resolution process (cirp)
Insolvency and Bankruptcy Code, 2016 - Wikipedia The Insolvency and Bankruptcy Code, 2016 (IBC) is the bankruptcy law of India which seeks to consolidate the existing framework by creating a single law for insolvency and bankruptcy. The Insolvency and Bankruptcy Code, 2015 was introduced in Lok Sabha in December 2015. It was passed by Lok Sabha on 5 May 2016. The Code received the assent of the President of India on 28 May 2016. Certain provisions of the Act have come into force from 5 August and 19 August 2016. The bankruptcy code is a one - stop solution for resolving insolvencies, which previously was a long process that did not offer an economically viable arrangement. A strong insolvency framework, in which the cost and the time incurred in attaining liquidation are minimised, had long been overdue in India. The code is intended to protect the interests of small investors and make the process of doing business less cumbersome. The Insolvency and Bankruptcy Code, 2015 was introduced in the Lok Sabha on 21 December 2015 by Finance Minister Arun Jaitley. The Code was referred to a Joint Committee of Parliament on 23 December 2015, and recommended by the Committee on 28 April 2016. The Code was passed by the Lok Sabha on 5 May 2016 and by the Rajya Sabha on 11 May 2016. The Code received assent from President Pranab Mukherjee on 28 May, was notified in The Gazette of India on 28 May 2016, and became effective in December 2016. It aimed to repeal the Presidency Towns Insolvency Act, 1909 and the Sick Industrial Companies (Special Provisions) Repeal Act, 2003, among others. The first insolvency resolution order under this code was passed by the National Company Law Tribunal (NCLT) in the case of Synergies - Dooray Automotive Ltd on 14 August 2017, and the second resolution plan was submitted in the case of Prowess International Private Limited. The plea for insolvency in the Synergies - Dooray case was submitted by the company on 23 January 2017. The resolution plan was submitted to the NCLT within the period of 180 days required by the code, and approval was received from the tribunal on 2 August 2017. The final order was uploaded on the NCLT website on 14 August 2017. Insolvency Resolution: The Code outlines separate insolvency resolution processes for individuals, companies and partnership firms. The process may be initiated by either the debtor or the creditors. A maximum time limit for completion of the insolvency resolution process has been set for corporates and individuals. For companies, the process will have to be completed in 180 days, which may be extended by 90 days if a majority of the creditors agree. For start - ups (other than partnership firms), small companies and other companies (with assets less than Rs. 1 crore), the resolution process must be completed within 90 days of initiation of the request, which may be extended by 45 days. Insolvency regulator: The Code establishes the Insolvency and Bankruptcy Board of India, to oversee the insolvency proceedings in the country and regulate the entities registered under it. The Board will have 10 members, including representatives from the Ministries of Finance and Law, and the Reserve Bank of India. Insolvency professionals: The insolvency process will be managed by licensed professionals. These professionals will also control the assets of the debtor during the insolvency process.
Bankruptcy and Insolvency Adjudicator: The Code proposes two separate tribunals to oversee the process of insolvency resolution for individuals and companies: (i) the National Company Law Tribunal for companies and limited liability partnership firms; and (ii) the Debt Recovery Tribunal for individuals and partnerships. A plea for insolvency is submitted to the adjudicating authority (the NCLT in the case of corporate debtors) by financial or operational creditors or by the corporate debtor itself. The maximum time allowed to either accept or reject the plea is 14 days. If the plea is accepted, the tribunal has to appoint an insolvency resolution professional (IRP) to draft a resolution plan within 180 days (extendable by 90 days), following which the corporate insolvency resolution process (CIRP) is initiated by the court. For the said period, the board of directors of the company stands suspended, and the promoters do not have a say in the management of the company. The IRP, if required, can seek the support of the company 's management for day - to - day operations. If the CIRP fails to revive the company, the liquidation process is initiated. The Bill prohibits certain persons from submitting a resolution plan in case of defaults. These include: (i) wilful defaulters, (ii) promoters or management of the company if it has an outstanding non-performing debt for over a year, and (iii) disqualified directors, among others. Further, it bars the sale of property of a defaulter to such persons during liquidation. The Reserve Bank of India (RBI) referred a number of large non-performing asset (NPA) accounts to the NCLT for resolution.
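To make the statutory clock concrete, here is a small illustrative sketch in Python of the CIRP deadlines named above: 14 days to admit or reject a plea, a 180 - day resolution window, and one 90 - day extension. It is a toy calculator, not legal advice; treating the periods as plain calendar days running from the admission deadline is a simplifying assumption, as the Code 's actual computation of periods involves further rules not modelled here.

```python
from datetime import date, timedelta

def cirp_deadlines(plea_filed: date, extended: bool = False) -> dict:
    """Compute illustrative CIRP milestones from the plea filing date."""
    admission = plea_filed + timedelta(days=14)    # admit/reject deadline
    resolution = admission + timedelta(days=180)   # base CIRP window
    if extended:
        resolution += timedelta(days=90)           # one-time extension
    return {"admission_deadline": admission,
            "resolution_deadline": resolution}

# Example with the Synergies-Dooray dates mentioned above: a plea of
# 23 January 2017 gives a base resolution deadline of 5 August 2017,
# consistent with the tribunal approval received on 2 August 2017.
print(cirp_deadlines(date(2017, 1, 23)))
```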
which team has the most super bowls wins
List of Super Bowl champions - wikipedia The Super Bowl is the annual American football game that determines the champion of the National Football League (NFL). The game culminates a season that begins in the previous calendar year, and is the conclusion of the NFL playoffs. The contest is held in an American city, chosen three to four years beforehand, usually at warm - weather sites or domed stadiums. Since January 1971, the winner of the American Football Conference (AFC) Championship Game has faced the winner of the National Football Conference (NFC) Championship Game in the culmination of the NFL playoffs. Before the 1970 merger between the American Football League (AFL) and the National Football League (NFL), the two leagues met in four such contests. The first two were marketed as the "AFL -- NFL World Championship Game '', but were also casually referred to as "the Super Bowl game '' during the television broadcast. Super Bowl III in January 1969 was the first such game that carried the "Super Bowl '' moniker in official marketing; the names "Super Bowl I '' and "Super Bowl II '' were retroactively applied to the first two games. The NFC / NFL leads in Super Bowl wins with 27, while the AFC / AFL has won 25. Twenty franchises, including teams that have relocated to another city, have won the Super Bowl. The Pittsburgh Steelers (6 -- 2) have won the most Super Bowls with six championships, while the New England Patriots (5 -- 5), the Dallas Cowboys (5 -- 3), and the San Francisco 49ers (5 -- 1) have five wins. New England has the most Super Bowl appearances with ten, while the Buffalo Bills (0 -- 4) have the most consecutive appearances with four (all losses) from 1990 to 1993. The Miami Dolphins are the only other team to have at least three consecutive appearances: 1972 -- 1974. The Denver Broncos (3 -- 5) and Patriots have each lost a record five Super Bowls. The Minnesota Vikings (0 -- 4) and the Bills have lost four. The record for consecutive wins is two and is shared by seven franchises: the Green Bay Packers (1966 -- 1967), the Miami Dolphins (1972 -- 1973), the Pittsburgh Steelers (1974 -- 1975 and 1978 -- 1979, the only team to accomplish this feat twice), the San Francisco 49ers (1988 -- 1989), the Dallas Cowboys (1992 -- 1993), the Denver Broncos (1997 -- 1998), and the New England Patriots (2003 -- 2004). Among those, Dallas (1992 -- 1993; 1995) and New England (2001; 2003 -- 2004) are the only teams to win three out of four consecutive Super Bowls. The 1972 Dolphins capped off the only perfect season in NFL history with their victory in Super Bowl VII. The only team with multiple Super Bowl appearances and no losses is the Baltimore Ravens, who in winning Super Bowl XLVII defeated and replaced the 49ers in that position. Four current NFL teams have never appeared in a Super Bowl, including franchise relocations and renaming: the Cleveland Browns, Detroit Lions, Jacksonville Jaguars, and Houston Texans, though both the Browns (1964) and Lions (1957) had won NFL championship games prior to the creation of the Super Bowl. 
Numbers in parentheses after a team 's name give its Super Bowl appearances as of the date of that Super Bowl. Seven franchises have won consecutive Super Bowls, one of which (Pittsburgh) has accomplished the feat twice. No franchise has yet won three Super Bowls in a row, although several have come close, and three franchises have lost consecutive Super Bowls. Four current teams have never reached the Super Bowl; two of them, the Cleveland Browns (1964) and the Detroit Lions (1957), held NFL league championships prior to Super Bowl I in the 1966 NFL season. In addition, Detroit, Houston, and Jacksonville have hosted Super Bowls, making Cleveland the only current NFL city that has neither hosted nor had its team play in a Super Bowl. Although Jacksonville and Houston have never appeared in a Super Bowl, eight teams have most recent Super Bowl appearances that predate Jacksonville 's and Houston 's entry into the NFL (1995 and 2002, respectively), giving those eight teams even longer Super Bowl droughts. Two of these teams, the New York Jets and the Kansas City Chiefs, have not appeared in the Super Bowl since before the AFL -- NFL merger in 1970; they are also the only non-NFL teams to have won the Super Bowl, both being members of the now - defunct AFL at the time. The most recent Super Bowl appearance of each of the remaining teams came after the AFL -- NFL merger but prior to the 1995 regular season. Eight teams have appeared in the Super Bowl without ever winning, and several pairs of teams have faced each other more than once in the Super Bowl.
where does water for a well come from
Water well - Wikipedia A water well is an excavation or structure created in the ground by digging, driving, boring, or drilling to access groundwater in underground aquifers. The well water is drawn by a pump, or using containers, such as buckets, that are raised mechanically or by hand. Wells were first constructed at least eight thousand years ago, and historically they vary in construction from a simple scoop in the sediment of a dry watercourse to the stepwells of India, the qanats of Iran, and the shadoofs and sakiehs of Egypt. Placing a lining in the well shaft helps create stability, and linings of wood or wickerwork date back at least as far as the Iron Age. Wells have traditionally been sunk by hand digging, as is still the case in rural developing areas. These wells are inexpensive and low - tech as they use mostly manual labour, and the structure can be lined with brick or stone as the excavation proceeds. A more modern method called caissoning uses pre-cast reinforced concrete well rings that are lowered into the hole. Driven wells can be created in unconsolidated material with a well hole structure, which consists of a hardened drive point and a screen of perforated pipe, after which a pump is installed to collect the water. Deeper wells can be excavated by hand drilling methods or machine drilling, using a bit in a borehole. Drilled wells are usually cased with a factory - made pipe composed of steel or plastic. Drilled wells can access water at much greater depths than dug wells. Two broad classes of well are shallow or unconfined wells, completed within the uppermost saturated aquifer at that location, and deep or confined wells, sunk through an impermeable stratum into an aquifer beneath. A collector well can be constructed adjacent to a freshwater lake or stream, with water percolating through the intervening material. The site of a well can be selected by a hydrogeologist or groundwater surveyor. Water may be pumped or hand drawn. Impurities from the surface can easily reach shallow sources, and contamination of the supply by pathogens or chemical contaminants needs to be avoided. Well water typically contains more minerals in solution than surface water and may require treatment before being potable. Soil salination can occur as the water table falls and the surrounding soil begins to dry out. Another environmental problem is the potential for methane to seep into the water. Wood - lined wells are known from the early Neolithic Linear Pottery culture, for example at Kückhoven (an outlying centre of Erkelenz), dated 5090 BC, at Eythra, dated 5200 BC, and at Schletz (an outlying centre of Asparn an der Zaya) in Austria. Some of the earliest evidence of water wells is located in China. The neolithic Chinese discovered and made extensive use of deep drilled groundwater for drinking. The Chinese text The Book of Changes, originally a divination text of the Western Zhou dynasty (1046 - 771 BC), contains an entry describing how the ancient Chinese maintained their wells and protected their sources of water. Archaeological evidence and old Chinese documents reveal that the prehistoric and ancient Chinese had the aptitude and skills for digging deep water wells for drinking water as early as 6000 to 7000 years ago. A well excavated at the Hemudu excavation site was believed to have been built during the neolithic era. The well was cased by four rows of logs with a square frame attached to them at the top of the well.
Sixty additional tile wells southwest of Beijing are also believed to have been built around 600 BC for drinking and irrigation. In Egypt, shadoofs and sakiehs are used. Compared to each other, however, the sakieh is much more efficient, as it can bring up water from a depth of 10 metres (versus the 3 metres of the shadoof). The sakieh is the Egyptian version of the noria. Some of the world 's oldest known wells, located in Cyprus, date to 7000 - 8500 BC. Two wells from the Neolithic period, around 6500 BC, have been discovered in Israel. One is in Atlit, on the northern coast of Israel, and the other is in the Jezreel Valley. Until recent centuries, all artificial wells were pumpless hand - dug wells of varying degrees of sophistication, and they remain a very important source of potable water in some rural developing areas, where they are routinely dug and used today. Their indispensability has produced a number of literary references, literal and figurative, including the reference to the incident of Jesus meeting a woman at Jacob 's well (John 4: 6) in the Bible and the "Ding Dong Bell '' nursery rhyme about a cat in a well. Hand - dug wells are excavations with diameters large enough to accommodate one or more people with shovels digging down to below the water table. The excavation is braced horizontally to avoid landslide or erosion endangering the people digging. They can be lined with laid stones or brick; extending this lining upwards above the ground surface to form a wall around the well serves to reduce both contamination and injuries caused by falling into the well. A more modern method called caissoning uses reinforced concrete or plain concrete pre-cast well rings that are lowered into the hole. A well - digging team digs under a cutting ring and the well column slowly sinks into the aquifer, while protecting the team from collapse of the well bore. Hand - dug wells are inexpensive and low - tech (compared to drilling) as they use mostly manual labour to access groundwater in rural locations in developing countries. They may be built with a high degree of community participation, or by local entrepreneurs who specialize in hand - dug wells. They have been successfully excavated to 60 metres (200 ft). They have low operational and maintenance costs, in part because water can be extracted by hand bailing, without a pump. The water often comes from an aquifer or groundwater, and such a well can be easily deepened, which may be necessary if the ground water level drops, by telescoping the lining further down into the aquifer. The yield of existing hand - dug wells may be improved by deepening or by introducing vertical tunnels or perforated pipes. Drawbacks to hand - dug wells are numerous. It can be impractical to hand - dig wells in areas where hard rock is present, and they can be time - consuming to dig and line even in favourable areas. Because they exploit shallow aquifers, such wells may be susceptible to yield fluctuations and possible contamination from surface water, including sewage. Hand - dug well construction generally requires the use of a well - trained construction team, and the capital investment for equipment such as concrete ring moulds, heavy lifting equipment, well shaft formwork, motorized de-watering pumps, and fuel can be large for people in developing countries. Construction of hand - dug wells can be dangerous due to collapse of the well bore, falling objects and asphyxiation, including from dewatering pump exhaust fumes.
Woodingdean well, hand - dug between 1858 and 1862, is claimed to be the world 's deepest hand - dug well at 392 metres (1,285 ft). The Big Well in Greensburg, Kansas is billed as the world 's largest hand - dug well, at 109 feet (33 m) deep and 32 feet (9.8 m) in diameter. However, the Well of Joseph in the Cairo Citadel, at 280 feet (85 m) deep, and the Pozzo di S. Patrizio (St. Patrick 's Well) built in 1527 in Orvieto, Italy, at 61 metres (200 ft) deep by 13 metres (43 ft) wide, are both larger by volume. Driven wells may be very simply created in unconsolidated material with a well hole structure, which consists of a hardened drive point and a screen (perforated pipe). The point is simply hammered into the ground, usually with a tripod and driver, with pipe sections added as needed. A driver is a weighted pipe that slides over the pipe being driven and is repeatedly dropped on it. When groundwater is encountered, the well is washed of sediment and a pump installed. Drilled wells are typically created using either top - head rotary style, table rotary, or cable tool drilling machines, all of which use drilling stems that are turned to create a cutting action in the formation, hence the term drilling. Drilled wells can be excavated by simple hand drilling methods (augering, sludging, jetting, driving, hand percussion) or machine drilling (rotary, percussion, down - the - hole hammer). Deep rock rotary drilling is the most common method; rotary can be used in 90 % of formation types. Drilled wells can get water from a much deeper level than dug wells can -- often down to several hundred metres. Drilled wells with electric pumps are used throughout the world, typically in rural or sparsely populated areas, though many urban areas are supplied partly by municipal wells. Most shallow well drilling machines are mounted on large trucks, trailers, or tracked vehicle carriages. Water wells typically range from 3 to 18 metres (10 -- 60 ft) deep, but in some areas can go deeper than 900 metres (3,000 ft). Rotary drilling machines use a segmented steel drilling string, typically made up of 6 - metre (20 ft) sections of galvanized steel tubing that are threaded together, with a bit or other drilling device at the bottom end. Some rotary drilling machines are designed to install (by driving or drilling) a steel casing into the well in conjunction with the drilling of the actual bore hole. Air and / or water is used as a circulation fluid to displace cuttings and cool bits during the drilling. Another form of rotary - style drilling, termed mud rotary, makes use of a specially made mud, or drilling fluid, which is constantly being altered during the drilling so that it can consistently create enough hydraulic pressure to hold the side walls of the bore hole open, regardless of the presence of a casing in the well. Typically, boreholes drilled into solid rock are not cased until after the drilling process is completed, regardless of the machinery used. The oldest form of drilling machinery is the cable tool, still used today. Specifically designed to raise and lower a bit into the bore hole, the spudding of the drill causes the bit to be raised and dropped onto the bottom of the hole, and the design of the cable causes the bit to twist a fraction of a revolution per drop, thereby creating a drilling action. Unlike rotary drilling, cable tool drilling requires the drilling action to be stopped so that the bore hole can be bailed or emptied of drilled cuttings.
Drilled wells are usually cased with a factory - made pipe, typically steel (in air rotary or cable tool drilling) or plastic / PVC (in mud rotary wells, also present in wells drilled into solid rock). The casing is constructed by welding, either chemically or thermally, segments of casing together. If the casing is installed during the drilling, most drills will drive the casing into the ground as the bore hole advances, while some newer machines will actually allow for the casing to be rotated and drilled into the formation in a similar manner as the bit advancing just below. PVC or plastic casing is typically welded and then lowered into the drilled well, the sections vertically stacked with their ends nested and either glued or splined together. The sections of casing are usually 6 metres (20 ft) or more in length, and 6 to 12 in (15 to 30 cm) in diameter, depending on the intended use of the well and local groundwater conditions. Surface contamination of wells in the United States is typically controlled by the use of a surface seal. A large hole is drilled to a predetermined depth or to a confining formation (clay or bedrock, for example), and then a smaller hole for the well is completed from that point forward. The well is typically cased from the surface down into the smaller hole with a casing that is the same diameter as that hole. The annular space between the large bore hole and the smaller casing is filled with bentonite clay, concrete, or other sealant material. This creates an impermeable seal from the surface to the next confining layer that keeps contaminants from traveling down the outer sidewalls of the casing or borehole and into the aquifer. In addition, wells are typically capped with either an engineered well cap or seal that vents air through a screen into the well, but keeps insects, small animals, and unauthorized persons from accessing the well. At the bottom of wells, based on formation, a screening device, filter pack, slotted casing, or open bore hole is left to allow the flow of water into the well. Constructed screens are typically used in unconsolidated formations (sands, gravels, etc.), allowing water and a percentage of the formation to pass through the screen. Allowing some material to pass through creates a large - area filter out of the rest of the formation, as the amount of material that can pass into the well slowly decreases and is removed from the well. Rock wells are typically cased with a PVC liner / casing and a screen or slotted casing at the bottom; this is mostly present just to keep rocks from entering the pump assembly. Some wells utilize a filter pack method, where an undersized screen or slotted casing is placed inside the well and a filter medium is packed around the screen, between the screen and the borehole or casing. This allows the water to be filtered of unwanted materials before entering the well and pumping zone. (Accompanying images: an automated water well system powered by a jet pump; an automated water well system powered by a submersible pump; a water well system with a cistern; a water well system with a pressurized cistern; a section of a stainless steel screen well.) There are two broad classes of drilled - well types, based on the type of aquifer the well is in: shallow or unconfined wells, and deep or confined wells. A special type of water well may be constructed adjacent to freshwater lakes or streams.
Commonly called a collector well, but sometimes referred to by the trade name Ranney well or Ranney collector, this type of well involves sinking a caisson vertically below the top of the aquifer and then advancing lateral collectors out of the caisson and beneath the surface water body. Pumping from within the caisson induces infiltration of water from the surface water body into the aquifer, where it is collected by the collector well laterals and conveyed into the caisson, from which it can be pumped to the ground surface. Two additional broad classes of well types may be distinguished based on the use of the well: pumping (production) wells and monitoring wells. A well constructed for pumping groundwater can be used passively as a monitoring well, and a small - diameter monitoring well can be pumped, but this distinction by use is common. Before excavation, information about the geology, water table depth, seasonal fluctuations, and recharge area and rate must be found. This work is typically done by a hydrogeologist or a groundwater surveyor using a variety of tools including electro - seismic surveying, any available information from nearby wells, geologic maps, and sometimes geophysical imaging. Shallow pumping wells can often supply drinking water at a very low cost. However, impurities from the surface easily reach shallow sources, which leads to a greater risk of contamination for these wells compared to deeper wells. Contaminated wells can lead to the spread of various waterborne diseases. Dug and driven wells are relatively easy to contaminate; for instance, most dug wells are unreliable in the majority of the United States. Most of the bacteria, viruses, parasites, and fungi that contaminate well water come from fecal material from humans and other animals, for example from on - site sanitation systems (such as pit latrines and septic tanks). Common bacterial contaminants include E. coli, Salmonella, Shigella, and Campylobacter jejuni. Common viral contaminants include norovirus, sapovirus, rotavirus, enteroviruses, and hepatitis A and E. Parasites include Giardia lamblia, Cryptosporidium, Cyclospora cayetanensis, and microsporidia. Chemical contamination is a common problem with groundwater. Nitrates from sewage, sewage sludge or fertilizer are a particular problem for babies and young children. Pollutant chemicals include pesticides and volatile organic compounds from gasoline, dry - cleaning, the fuel additive methyl tert - butyl ether (MTBE), and perchlorate from rocket fuel, airbag inflators, and other artificial and natural sources. Several minerals are also contaminants, including lead leached from brass fittings or old lead pipes, chromium VI from electroplating and other sources, naturally occurring arsenic, radon, and uranium -- all of which can cause cancer -- and naturally occurring fluoride, which is desirable in low quantities to prevent tooth decay but can cause dental fluorosis in higher concentrations. Some chemicals are commonly present in water wells at levels that are not toxic but can cause other problems. Calcium and magnesium cause what is known as hard water, which can precipitate and clog pipes or burn out water heaters. Iron and manganese can appear as dark flecks that stain clothing and plumbing, and can promote the growth of iron and manganese bacteria that can form slimy black colonies that clog pipes.
The quality of well water can be significantly improved by lining the well, sealing the well head, fitting a self-priming hand pump, constructing an apron, keeping the area clean and free from stagnant water and animals, moving sources of contamination (pit latrines, garbage pits, on-site sewer systems), and carrying out hygiene education. The well should be cleaned with a 1% chlorine solution after construction and periodically every six months.

Well holes should be covered to prevent loose debris, animals, animal excrement, and wind-blown foreign matter from falling into the hole and decomposing. The cover should remain in place at all times, including when drawing water from the well. A suspended roof over an open hole helps to some degree, but ideally the cover should be tight-fitting and fully enclosing, with only a screened air vent.

Minimum distances and soil percolation requirements between sewage disposal sites and water wells need to be observed. Rules governing the design and installation of private and municipal septic systems take all these factors into account so that nearby drinking water sources are protected. Education of the general population also plays an important role in protecting drinking water.

Cleanup of contaminated groundwater tends to be very costly, and effective remediation is generally very difficult. Contamination of groundwater from surface and subsurface sources can usually be dramatically reduced by correctly centering the casing during construction and filling the casing annulus with an appropriate sealing material. The sealing material (grout) should be placed from immediately above the production zone back to the surface because, in the absence of a correctly constructed casing seal, contaminated fluid can travel into the well through the casing annulus. Centering devices are important (usually one per length of casing, or at maximum intervals of 9 m) to ensure that the grouted annular space is of even thickness.

Upon the construction of a new test well, it is considered best practice to invest in a complete battery of chemical and biological tests on the well water in question. Point-of-use treatment is available for individual properties, and treatment plants are often constructed for municipal water supplies that suffer from contamination. Most of these treatment methods involve filtering out the contaminants of concern, and additional protection may be gained by installing well-casing screens only at depths where contamination is not present. Well water for personal use is often filtered with reverse osmosis water processors; this process can remove very small particles. A simple, effective way of killing microorganisms is to bring the water to a full boil for one to three minutes, depending on location. A household well contaminated by microorganisms can initially be treated by shock chlorination using bleach, generating concentrations hundreds of times greater than those found in community water systems; however, this will not fix any structural problems that led to the contamination, and it generally requires some expertise and testing for effective application. After filtration, it is common to install an ultraviolet (UV) system to kill pathogens in the water: UV-C photons penetrate the cell wall and damage the pathogen's DNA. UV disinfection has gained popularity in recent decades as a chemical-free method of water treatment.
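As a rough illustration of the shock-chlorination arithmetic mentioned above (concentrations hundreds of times greater than municipal levels), the sketch below estimates the volume of household bleach needed to dose the standing water in a well casing to a target concentration. The 200 mg/L target, the 6% bleach strength, and the well dimensions are assumptions chosen only for illustration; actual dosing should follow local health-authority guidance.

```python
import math

def bleach_for_shock_chlorination(casing_d_m: float, water_depth_m: float,
                                  target_mg_per_l: float = 200.0,
                                  bleach_strength_pct: float = 6.0) -> float:
    """Litres of household bleach needed to raise the standing water in a
    well casing to the target free-chlorine concentration."""
    # Volume of standing water in the casing, in litres.
    water_volume_l = math.pi * (casing_d_m / 2) ** 2 * water_depth_m * 1000.0
    chlorine_needed_mg = target_mg_per_l * water_volume_l
    # 6 % bleach ~ 60,000 mg of available chlorine per litre,
    # assuming a density of roughly 1 kg/L (an approximation).
    mg_chlorine_per_l_bleach = bleach_strength_pct / 100.0 * 1_000_000.0
    return chlorine_needed_mg / mg_chlorine_per_l_bleach

# Hypothetical example: a 0.15 m (6 in) casing with 30 m of standing water.
litres = bleach_for_shock_chlorination(0.15, 30.0)
print(f"Approximate bleach required: {litres:.1f} L")  # ~1.8 L
```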
Two environmental risks are associated with the siting and drilling of water wells: soil salination and methane seepage. Soil salination occurs when the water table drops over time and salt begins to accumulate as the soil dries out. This is detrimental because the increased salt content can degrade the soil and is very harmful to vegetation.

Methane, an asphyxiant, is a chemical compound that is the main component of natural gas. When methane is introduced into a confined space, it displaces oxygen, reducing the oxygen concentration to a level low enough to pose a threat to humans and other aerobic organisms, yet still high enough to risk spontaneous or externally triggered explosion. This explosion hazard is what makes methane so dangerous in the drilling and placement of water wells. Low levels of methane in drinking water are not considered toxic. Methane seeping into a water supply is commonly referred to as "methane migration"; it can be caused by old natural gas wells near water well systems being abandoned and no longer monitored.

Traditional wells and pumps of the kinds described above are often no longer very efficient and can be replaced by handpumps or treadle pumps. Other alternatives are self-dug wells and electric deep-well pumps (for greater depths). Appropriate technology organizations such as Practical Action now supply information on how to build and set up (DIY) handpumps and treadle pumps in practice.

Springs and wells have had cultural significance since prehistoric times, leading to the foundation of towns such as Wells and Bath in Somerset. Interest in health benefits led to the growth of spa towns, including many with "wells" in their name, examples being Llandrindod Wells and Royal Tunbridge Wells. Eratosthenes first calculated the radius of the Earth in about 230 BC by comparing shadows in wells during the summer solstice. Many incidents in the Bible take place around wells, such as the finding of a wife for Isaac in Genesis and Jesus's talk with the Samaritan woman in the Gospels.
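The Eratosthenes measurement mentioned above is easy to reconstruct as a worked example. The figures below are the traditionally reported ones in modern units (the sun directly overhead at Syene, lighting the bottom of a well; a shadow angle of about 7.2 degrees at Alexandria; roughly 800 km between the cities) and are used here as assumptions, not precise historical data.

```python
import math

# Traditionally reported figures, in modern units (assumptions for illustration).
shadow_angle_deg = 7.2            # sun's angle off vertical at Alexandria
syene_to_alexandria_km = 800.0    # approximate distance between the two cities

# 7.2 degrees is 1/50 of a full circle, so the circumference is ~50x the distance.
circumference_km = syene_to_alexandria_km * (360.0 / shadow_angle_deg)
radius_km = circumference_km / (2 * math.pi)

print(f"Circumference ~ {circumference_km:,.0f} km")  # ~40,000 km
print(f"Radius        ~ {radius_km:,.0f} km")         # ~6,366 km (modern mean: ~6,371 km)
```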
has any scottish team won back to back trebles
List of Scottish football champions - wikipedia
The Scottish football champions are the winners of the highest league in Scottish football: the Scottish Football League (SFL) from 1890 until 1998, the Scottish Premier League (SPL) from 1998 until 2013, and the Scottish Premiership thereafter. The SFL was established in 1890, initially as an amateur league, until professionalism in Scottish football was legalised in 1893. At the end of the first season Dumbarton and Rangers finished level on points at the top of the table. The rules in force at the time required the teams to contest a play-off match for the championship, which finished in a 2 -- 2 draw, and the first ever championship was thus shared between two clubs, the only occasion on which this has happened. In 1893 a Second Division was formed, with the existing single division renamed the First Division. The league continued during the First World War but was suspended during the Second World War. Although there were several short spells when a third division existed, the two-division structure remained largely in place until 1975, when a major reorganisation of the league led to a new three-tier set-up and the creation of a new Premier Division at the highest level. In 1998, the teams then in the Premier Division broke away to form the SPL, which supplanted the Premier Division as the highest level of football in Scotland. The SPL and SFL merged in 2013 to form the Scottish Professional Football League (SPFL), which branded its top division as the Scottish Premiership.

Throughout its existence the championship of Scottish football has been dominated by two Glasgow clubs, Celtic and Rangers. The two rivals, collectively known as the "Old Firm", have claimed the majority of league titles. As of 2018, Rangers have won 54 and Celtic 49, while no other club has won the title on more than four occasions. No club outside the Old Firm has won the title since the 1984 -- 85 season, when the Aberdeen side managed by Alex Ferguson won the Premier Division. The current period of dominance by the Old Firm is a record; the previous longest streak was 27 years, between 1904 and 1931. Each of the Old Firm clubs has at one time managed a run of nine consecutive championships, Celtic from 1966 to 1974 and Rangers from 1989 to 1997. Each of the two clubs has also claimed the Double on many occasions, by winning the league and the Scottish Cup in the same season. As of the start of the 2017 -- 18 season Rangers have won the most Doubles with 18, more than any other club in the world apart from Northern Ireland's Linfield. Each club has also won a Double and added the Scottish League Cup to make it a Treble. In the 1966 -- 67 season Celtic took all three domestic trophies and also won the European Cup, completing the only Quadruple to date.
when does lexie grey come into the show
Lexie Grey - wikipedia
Alexandra Caroline "Lexie" Grey, M.D., is a fictional character from the medical drama television series Grey's Anatomy, which airs on ABC in the United States. The character was created by series producer Shonda Rhimes and was portrayed by actress Chyler Leigh from the third through eighth seasons. She was introduced as a surgical intern in season three. The half sister of Meredith Grey (Ellen Pompeo), she transferred to the fictional Seattle Grace Hospital after her mother's sudden death. Eventually named a surgical resident, the character was originally contracted to appear for a multi-episode story arc, but received star billing in the fourth season. The character's focal storyline involved a romantic relationship with plastic surgeon Mark Sloan (Eric Dane). She sustained life-threatening injuries in an aviation accident in the eighth-season finale, which ultimately ended in her death. The character's death received mixed critical feedback, and the reason given for the departure was Leigh's desire to spend more time with her family. Rhimes has characterized Lexie as a dork who has trouble saying how she feels. Leigh has been moderately well received by critics, and was among the cast to receive a Screen Actors Guild Award nomination for Outstanding Performance by an Ensemble in a Drama Series in 2007.

Lexie Grey is the daughter of Thatcher Grey (Jeff Perry) and his second wife, Susan Grey (Mare Winningham). She grew up with her younger sister, Molly Grey-Thompson (Mandy Siegfried), and throughout her childhood was kept unaware that she also had an older half sister, Meredith Grey (Ellen Pompeo), born to Thatcher and his first wife, Ellis Grey (Kate Burton). Lexie is first mentioned when Molly is admitted to Seattle Grace Hospital, with Susan informing Meredith that Lexie is a student at Harvard Medical School. At some subsequent point, Lexie learns of Meredith's existence. She graduates from medical school and is accepted to take her surgical internship at Mass General Hospital, but following her mother's sudden death from complications stemming from the hiccups, Lexie instead opts to move back to Seattle to be closer to her father, taking up an internship at Seattle Grace Hospital, one year behind Meredith, who is due to begin her residency. Lexie has an eidetic memory, which is often used as a valuable resource and earns her the nickname "Lexipedia". In contrast to Meredith, Lexie came from a loving home with a happy upbringing. Meredith once said of her: "She was raised right. With parents and rules and smiley face posters on her wall." Her idealistic upbringing leaves her unprepared to deal with the many hardships she faces during the course of the show, and she often has difficulty understanding and accepting the darker side of human emotions. While Lexie can be sympathetic, she is not empathetic, and tends to lash out and react childishly at perceived slights, or when faced with behavior she does not like or understand.

The night before her internship begins, Lexie meets Derek Shepherd (Patrick Dempsey) at Joe's Bar, mirroring his initial meeting with Meredith. The pair flirt, but Derek makes his excuses and leaves. On her arrival at the hospital, Lexie meets George O'Malley (T.R. Knight), who quickly realizes her identity. Lexie promises to keep George's secret that he is repeating his internship after failing his intern exam, and the pair strike up a friendship.
Lexie is assigned to be Cristina Yang's (Sandra Oh) intern, and is dubbed "three" by Cristina, who doesn't take the time to learn her name. Lexie is eager to get to know her half sister Meredith, but receives a hostile response when she introduces herself, and her later frequent attempts to bond with Meredith are similarly rebuffed. Lexie begins a brief sexual relationship with Alex Karev (Justin Chambers), who discovers that Thatcher has descended into alcoholism following Susan's death. When Meredith scolds Lexie for not looking after Thatcher, she finally retaliates and decides to stop pursuing a relationship with Meredith. Lexie bonds with patient Nick Hanscom (Seth Green), and is present when his exposed artery blows, causing him to suffer massive blood loss. She manages to stop the bleeding, but is distraught when Nick later dies anyway. Feeling sympathy for Lexie, Cristina invites her to join her and Meredith in drinking and dancing, causing the sisters' relationship to begin to thaw. The following morning, Meredith goes out of her way to make Lexie breakfast, which she politely eats despite an allergy to eggs, resulting in her having to be treated at the hospital. Lexie and George agree to move in together, but they can only afford a dilapidated apartment, which Lexie attempts to improve by stealing decorations from the hospital. She begins to develop romantic feelings for George, and discovers that he had failed his intern exam by only one point, encouraging him to try to convince the Chief that he deserves a second chance. Lexie continues to harbor romantic feelings for George, oblivious to the fact that he doesn't see her in the same way. Their relationship is a mirror foil of George's previous infatuation with the oblivious Meredith. She prioritizes helping George study over taking part in a surgery with Mark Sloan (Eric Dane), but feels betrayed when George doesn't request that she become one of his interns after passing his exam. Finally realizing that he doesn't feel the same way, Lexie gives up on her feelings for George, and the pair's friendship begins to fizzle out. She later begins a flirtation with Mark, and the pair begin an unlikely romantic relationship, though they are forced to keep it a secret when Meredith and Derek warn Mark away from Lexie. Lexie discovers that some of her fellow interns have secretly been performing simple procedures on each other and begins taking part to prove she's hardcore. Sadie Harris (Melissa George) joins the society and, seeking a more daring procedure, suggests removing her appendix. Though Lexie agrees, she quickly finds herself out of her depth, and Meredith and Cristina have to intervene to save Sadie's life. Lexie and the other interns are put on probation. Derek finds Lexie distraught at the day's events, and allows her to move into the attic at his and Meredith's house. Mark comes clean to Derek about his relationship with Lexie, resulting in the two men getting into a fist fight. The pair continue to feud, and Lexie begins to stress-eat, until they eventually reconcile. Lexie is delighted when Meredith asks her to be a bridesmaid at her and Derek's wedding, though they eventually give the ceremony to Alex and cancer-stricken Izzie Stevens (Katherine Heigl). Mark decides to purchase a house and invites Lexie to move in with him; however, she declines, concerned about how fast their relationship is progressing.
Lexie feels guilt following George's death, having abandoned their friendship after he failed to reciprocate her romantic feelings. Mark comforts her, and she eventually agrees to move into his new apartment with him. Thatcher is admitted to the hospital with liver failure stemming from his former alcoholism. When Lexie is found not to be a suitable transplant candidate, Meredith steps in and donates part of her liver, not to help Thatcher but to spare Lexie the grief of losing her father, and because Lexie inappropriately pulled Meredith's medical files and begged her to. Mark and Lexie receive a shock when a pregnant teenage girl named Sloan Riley shows up claiming to be Mark's daughter, and her parentage is soon confirmed. Mark quickly agrees to let her and the baby move in permanently without consulting Lexie; Lexie ends their relationship when Mark asks her not to force him to choose between her and his child and grandchild, because he would have to choose them instead of her. She engages in a one-night stand with Alex, but feels guilty and confesses to Mark when he attempts to restart their relationship once his daughter leaves. Mark is furious with her, officially ending their relationship. Despite secretly suppressing her feelings for Mark, Lexie and Alex assume a casual relationship, which Lexie begins to take too seriously and flaunts to make Mark jealous. Lexie is able to use her eidetic memory to help Alex with several of his cases. Meanwhile, Mark and Lexie never seem to get their timing right, with Lexie disinterested and angry at Mark when he wishes to reconcile, or Mark being caught sleeping with a flavor of the day when Lexie wants to rekindle their relationship. Lexie takes part in a surgery on a patient named Alison Clark (Caroline Williams) and informs her husband, Gary Clark (Michael O'Neill), that it was a success. However, moments later Alison suffers a stroke and falls into a coma from which she is deemed unlikely to wake. As Alison had signed a DNR form, Lexie is forced to turn off the machine keeping her alive, despite Gary begging her to stop. After failing in an attempt to sue the hospital, a grief-stricken Mr. Clark later returns to the hospital with a gun, seeking revenge on Lexie, Derek and Richard Webber (James Pickens, Jr.). After witnessing Mr. Clark shoot a nurse dead, Mark shields Lexie during the shooting, and the pair attempt to save a critically wounded Alex. Seeking supplies, Lexie heads out into the hospital and comes face to face with Mr. Clark, who attempts to shoot her, but she is saved by a SWAT team member who wounds Clark at the last moment, allowing her to escape. Lexie and Mark proceed to save Alex's life, but Lexie is disappointed and irritated when a delirious, nearly dying Alex calls for his ex-wife, Izzie, instead of Lexie. After a showdown with Richard, Gary Clark commits suicide, having killed eleven people and wounded another seven, as well as causing Meredith to suffer a miscarriage. Lexie has a psychotic breakdown after the shooting. She is admitted to the hospital's psychiatric facility and is sedated for over fifty hours, with Meredith remaining by her side. Alex, on the other hand, breaks off his and Lexie's fling because he cannot handle having to care for an insane person after his personal and family history.
Lexie retaliates with casual cruelty, mocking him for calling out for the wife who left him and snubbing him when he asks her to use her "Lexipedia" talents to help on cases. Lexie is paranoid and irritated that others may view her as fragile and incompetent because of her breakdown, and works hard to repair her image. After both realize they continue to have feelings for each other, Lexie and Mark resume their relationship, but their brief happiness is ruined when Lexie discovers that during their breakup Mark had impregnated his best friend Callie Torres (Sara Ramirez), and she again ends their relationship. Eager to win Lexie back, Mark recruits his protégé Jackson Avery (Jesse Williams) to try to bring him back into Lexie's favour, but his plan backfires when Jackson and Lexie develop romantic feelings for each other, resulting in them starting a relationship of their own. Mark is furious and tries to win Lexie back, but shortly afterwards Callie is involved in a car accident and Lexie consoles Mark. In the next episode, Lexie explains that she will go back to Mark if he continues to pursue her, but she believes they are ultimately not suited and would be unhappy. Mark agrees to let her go, giving her and Jackson his blessing. Although initially happy in her relationship with Jackson, Lexie is distraught when she learns that Mark has begun a relationship with a woman named Julia. At a charity softball match, her jealousy gets the better of her, and she throws a ball at Julia. Sensing that Lexie is still in love with Mark, Jackson calls off their relationship. Lexie begins working on Derek's service and becomes increasingly proficient in neurosurgery, helping Derek with a set of "hopeless cases": high-risk surgeries for patients who had otherwise run out of options. During one surgery, Derek is called away on an emergency, leaving Lexie and Meredith to carry out the procedure on their own. Though Derek had instructed them merely to reduce the patient's brain tumour, Meredith allows Lexie to remove it completely, despite not being authorized by either the patient or Derek to do so. The sisters celebrate the successful surgery, but when the patient wakes up, Lexie is devastated to discover that the patient has suffered severe brain damage, losing the ability to speak. Alex, Jackson and April Kepner (Sarah Drew) move out of Meredith's house without inviting Lexie to join them, and with Derek and Meredith settling down with baby Zola, Lexie begins to feel increasingly lonely and isolated. After being left babysitting Zola on Valentine's Day, she decides to try to salvage her relationship with Mark. However, after plucking up the courage to visit his apartment, she finds Mark studying with Jackson and loses her nerve, instead claiming that she was visiting to set up a play date for Zola and Sofia. When Mark confides in Derek that he and Julia have been discussing having a baby, Derek warns Lexie not to miss her chance again, and she professes her love to a shell-shocked Mark, who merely thanks her for her candour. He later confesses to Derek that he feels the same way, but is unsure about how to go about things. Days later, Lexie is named as part of a team of surgeons that will be sent to Boise to separate conjoined twins, along with Mark, Meredith, Derek, Cristina and Arizona Robbins (Jessica Capshaw); however, while flying to their destination, the doctors' plane crashes in the wilderness.
Lexie is crushed under debris from the plane, but manages to alert Mark and Cristina to help her. The pair try in vain to free her, and Lexie realises that she is suffering from a hemothorax and is unlikely to survive. While Cristina goes to find an oxygen tank and water to try to save Lexie, Mark takes Lexie by the hand and professes his love for her, telling her that they were "meant to be". While fantasizing about the future that she and Mark could have had together, Lexie succumbs to her injuries, dying moments before Meredith arrives. The remaining doctors are left in the woods waiting for rescue, with Mark refusing to let go of Lexie's hand. The surviving surgeons are rescued from the woods days later, and Lexie's body is returned to Seattle. Weeks later, a traumatized Cristina reminisces to Owen Hunt (Kevin McKidd) about her time in the woods, telling him that she had fought with wild animals to keep them away from Lexie's body. Weeks after the crash, Meredith comes across a young girl presenting with injuries identical to Lexie's and tirelessly works to save the girl's life, an experience that eventually allows her to come to terms with her sister's death. Despite making it home from the woods, Mark later dies from the injuries he sustained in the crash. When the survivors of the plane crash pool their compensation money to purchase the hospital, they agree to rename it "Grey Sloan Memorial Hospital" in tribute to Lexie and Mark.

Leigh first appeared on the show during the last two episodes of the third season as Meredith's half sister, Lexie Grey. Following the departure of Isaiah Washington, who portrayed Preston Burke, it was reported that the show's executives were planning on adding new cast members, such as Lexie. She was officially upgraded to a series regular on July 11, 2007, for the fourth season. On casting Chyler Leigh as Lexie, Grey's Anatomy creator Shonda Rhimes said: "We met with a lot of young actresses, but Chyler stood out -- she had a quality that felt right and real to me. It felt like she could be Meredith's sister, but she had a depth that was very interesting." In September 2011, Leigh requested an extended summer hiatus to spend more time with her family. This was granted by Rhimes, though the actress returned in mid-October. Leigh's character died in the eighth-season finale. In May 2012, Rhimes revealed why she decided to have Lexie die: "I love Chyler and I love the character of Lexie Grey. She was an important member of my Grey's family. This was not an easy decision. But it was a decision that Chyler and I came to together. We had a lot of thoughtful discussion about it and ultimately we both decided this was the right time for her character's journey to end. As far as I'm concerned, Chyler will always remain a part of the Shondaland family and I can't wait to work with her again in the future." Following the death of her character, Leigh released a statement of her own.

Leigh's character has been called "reliable, trustworthy, timid, and apprehensive" by Grey's Anatomy executives. In her early appearances, it was established that Lexie has a photographic memory, which she applied to her surgical career. This led to her being nicknamed "Lexipedia" by Alex Karev. The character has also been described as an "innocent young intern" by Alex Keen of The Trades.
Of the character, Leigh said: "She's a very vulnerable person from a very healthy background -- she knows how to make good relationships but at this point (season four), she's coming into so much opposition she's trying to adjust to it." Debbie Chang of BuddyTV commented on Lexie's early characterization, including her sexual relationship with Karev. Similarities have been established between Lexie and Meredith. Series writer Stacy McKee commented on this: "Lexie's struggling to be hardcore herself. I don't know if I'd go so far as to say, perhaps, this kind of struggle must run in the family, but... Okay. Fine. It must run in the family -- because Lexie, though she's very different from Meredith in many many ways, in this one way -- they seem to be exactly alike. Meredith and Lexie both want to succeed. They want to be strong. They want to feel normal. They want, so much, to be whole. But it's a struggle -- a genuine struggle for them. Being hardcore doesn't come naturally. Sometimes, they have to fake it."

Lexie had several relationships over her time on Grey's Anatomy. In her early appearances, she maintained a friendship with George O'Malley, until developing romantic feelings towards him. Rhimes offered this insight: "I love them as friends. They make good friends. We all have that friend we met in school or the gym or somewhere -- we just hit it off right away. And right away there was no pretense or airs. Just pure honesty. That's Lexie and George. They're really good friends and I can see the friendship evolving into something even greater. At least, that's what Lexie is hoping. She is my kind of girl. The girl who likes the guy because he is a GOOD guy and that's what George is. He is a good guy and that's something that Lexie could use now. She's going through her own challenges what with Meredith and losing her own mother and trying to keep things afloat. I'm rooting for Lexie. She's my kind of girl and I hope that she gets what she deserves: love. And more kisses. There should always be that." Lexie's most significant relationship was with Mark Sloan, a relationship Rhimes reflected on following the character's death.

Leigh has received mixed feedback for her role as Lexie Grey. People, less than impressed, criticized the way Leigh's character initially approached her sister, calling it "rude". Jennifer Armstrong of Entertainment Weekly was also critical of Leigh's early appearances, referring to her as "awkward". However, Armstrong later noted that the "sparkling" friendship development between Lexie and O'Malley "won her over". The character's transition from season four to five was positively reviewed, with Keen of The Trades writing: "Her presence and confidence have increased quite a bit since last season, and actress, Chyler Leigh, does a fantastic job of making this progression feel seamless. Since the series has defused the tension between Little Grey and Big Grey (aka Meredith), Lexie has clear sailing through the season and steals the show as one of the best current characters on the series." Lexie was strongly criticized by Laura Burrows from IGN, being called "awful". Burrows also wrote: "Everything she says and does is obnoxious and does harm to someone. Lexie is an idiot and should be shot or drowned or exploded." The character's relationship with Sloan has been well received, with Chris Monfette of IGN writing: "Sloan's honest relationship with Lexie helped to make both characters infinitely more interesting and mature."
Leigh served as a primary vocalist in "Song Beneath the Song", the Grey's Anatomy music event, and was well received, with the Boston Herald's Mark Perigard praising her performance. The character was listed in Wetpaint's "10 Hottest Female Doctors on TV". In 2007, at the 14th Screen Actors Guild Awards, Leigh and the rest of the cast of Grey's Anatomy received a nomination for Outstanding Performance by an Ensemble in a Drama Series.
what is the difference between mvno and mvne
Mobile virtual network enabler - wikipedia
A mobile virtual network enabler (MVNE) is a company that provides network infrastructure and related services, such as business support systems, administration, and operations support systems, to a mobile virtual network operator (MVNO). This enables MVNOs to offer services to their own customers under their own brands. The MVNE does not have a relationship with consumers; rather, it is a provider of network enablement platforms and services. MVNEs specialise in the planning, implementation, and management of mobile services. Typically this includes SIM provisioning and configuration, customer billing, customer relationship management, and value-added service platforms. In effect, they enable an MVNO to outsource both the initial integration with the mobile network operator (MNO) and the ongoing business and technical operations management. A related type of company is the mobile virtual network aggregator (MVNA); an MVNE offers a telecom solution, whereas an MVNA operates a business model that includes wholesaling an operator's airtime and routing traffic over the MVNE's own switches.

The benefits of using an MVNE include a reduction in the upfront capital expenses of an MVNO, financing arrangements offered by MVNEs to cover start-up costs, and reduced wholesale airtime costs achieved through the economies of scale of hosting multiple MVNOs on a single MVNE platform. Other benefits include reduced operational expenses from outsourcing the management of business and technical operations, smoother launch processes thanks to the MVNE's previous experience, and the MVNE acting as a negotiating channel for smaller MVNOs to reach a wholesale agreement with the MNO. However, not all MVNEs are the same: there is significant variation in their level of experience, technical capability, integration, and operational support. Furthermore, using an MVNE may not be appropriate for all MVNOs; the considerations for this decision are manifold, and several key reasons can weigh against using an MVNE. Adoption of the MVNE model will have a significant impact on the margins of an MVNO business, acting either to improve or to shrink these margins relative to direct or self-build models. As such, the decision to use an MVNE should not be taken lightly, as the impacts can range from customer experience to business efficiency.
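To see how the choice can cut either way on margins, consider a deliberately simplified per-subscriber sketch. Every figure below is invented purely for illustration; real ARPU, wholesale rates, MVNE fees, and fixed costs vary widely and are negotiated case by case.

```python
def monthly_margin_per_subscriber(arpu: float, wholesale_airtime: float,
                                  platform_opex: float, fixed_costs: float,
                                  subscribers: int) -> float:
    """Gross margin per subscriber per month under a toy MVNO cost model."""
    return arpu - wholesale_airtime - platform_opex - fixed_costs / subscribers

subs = 50_000  # hypothetical subscriber base

# Self-build: own BSS/OSS stack, so high fixed cost but a tiny per-sub fee.
self_build = monthly_margin_per_subscriber(
    arpu=20.0, wholesale_airtime=9.0, platform_opex=0.5,
    fixed_costs=400_000.0, subscribers=subs)

# MVNE-hosted: a per-subscriber platform fee, low fixed cost, and a better
# wholesale rate from the MVNE's aggregated buying power.
mvne_hosted = monthly_margin_per_subscriber(
    arpu=20.0, wholesale_airtime=8.0, platform_opex=2.5,
    fixed_costs=20_000.0, subscribers=subs)

print(f"self-build margin:  {self_build:.2f} per subscriber")   # 2.50
print(f"MVNE-hosted margin: {mvne_hosted:.2f} per subscriber")  # 9.10
```

Under these invented numbers the hosted model wins at 50,000 subscribers, because the shared platform and aggregated wholesale rate outweigh the per-subscriber fee; at a much larger subscriber base, the self-build model's amortized fixed costs could reverse the comparison.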
who will eliminated from bigg boss 3 kannada today
Bigg Boss Kannada 5 - Wikipedia
Bigg Boss Kannada 5 (BBK5) is the fifth season of the Kannada reality television series Bigg Boss Kannada, which premiered on 15 October 2017 with the finale set for 2018. Sudeep reprised his role as the host of the show. The grand opening ceremony aired on 15 October at 6 pm on the Colors Super channel, and the show will air Monday to Sunday from 8 pm to 9:30 pm on the same channel; contestants perform and then enter the house, where they have to stay for more than 100 days to be the winner. Sudeep had signed a ₹20 crore (US$3.1 million) deal with the channel Colors Kannada to host the next five seasons starting from the previous season. During the grand finale of Bigg Boss Kannada 4, it was announced that the next season would start with new contestants and a renovated Bigg Boss house, built for the previous season in the Innovative Film City at Bidadi, Bangalore. This will be the first season to allow non-celebrities into the Bigg Boss house, with online auditions taking place from July 2017. The application process for the non-celebrity auditions was carried out exclusively by Voot. Along with the usual celebrity contestants, the housemates of this season include contestants selected through the online audition process. The 17 housemates comprise 11 celebrities and 6 commoners.

Sunil, the lead actor of the upcoming serial Shani, entered the house in character to promote it. Another guest visited the Bigg Boss house to promote her new movie Upendra Matte Baa. Keerthi, an alumnus of Bigg Boss Kannada 4, entered the house as a Kannada teacher; incidentally, he had played the same character during a task in season 4. Shalini, an alumna of Bigg Boss Kannada 4, entered the house as an arts teacher. Sheetal, an alumna of Bigg Boss Kannada 4, entered the house as a PT teacher. Niranjan, an alumnus of Bigg Boss Kannada 4, entered the house as a drama teacher. Akul, an alumnus of Bigg Boss Kannada 2, stayed in the house for 4 days. Samyuktha, an actress who gained fame with Kirik Party, entered the house as a guest with her best friend, Lasya Nagraj. Chandan, Diwakar, Jagan, Sameer, Shruthi Sreenivasan.
is puerto rico part of the united state
Puerto Rico - Wikipedia
Coordinates: 18°12′N 66°30′W
Puerto Rico (Spanish for "Rich Port"), officially the Commonwealth of Puerto Rico (Spanish: Estado Libre Asociado de Puerto Rico, lit. "Free Associated State of Puerto Rico") and briefly called Porto Rico, is an unincorporated territory of the United States located in the northeast Caribbean Sea. An archipelago among the Greater Antilles, Puerto Rico includes the main island of Puerto Rico and a number of smaller ones, such as Mona, Culebra, and Vieques. The capital and most populous city is San Juan. Its official languages are Spanish and English, though Spanish predominates. The island's population is approximately 3.4 million. Puerto Rico's history, tropical climate, natural scenery, traditional cuisine, and tax incentives make it a destination for travelers from around the world.

Originally populated by the indigenous Taíno people, the island was claimed in 1493 by Christopher Columbus for Spain during his second voyage. It later endured invasion attempts from the French, Dutch, and British. Four centuries of Spanish colonial government influenced the island's cultural landscape with waves of African slaves and Canarian and Andalusian settlers. In the Spanish Empire, Puerto Rico played a secondary but strategic role when compared to wealthier colonies like Peru and the mainland parts of New Spain. Spain's distant administrative control continued up to the end of the 19th century, helping to produce a distinctive creole Hispanic culture and language that combined elements from the Native Americans, Africans, and Iberians. In 1898, following the Spanish -- American War, the United States acquired Puerto Rico under the terms of the Treaty of Paris. The treaty took effect on April 11, 1899.

Puerto Ricans are by law natural-born citizens of the United States and may move freely between the island and the mainland. As it is not a state, Puerto Rico does not have a vote in the United States Congress, which governs the territory with full jurisdiction under the Puerto Rico Federal Relations Act of 1950. However, Puerto Rico does have one non-voting member of the House called a Resident Commissioner. As residents of a U.S. territory, American citizens residing in Puerto Rico are disenfranchised at the national level: they do not vote for president and vice president of the United States, and they do not pay federal income tax on Puerto Rican income. Like other territories and the District of Columbia, Puerto Rico does not have U.S. senators. Congress approved a local constitution, allowing U.S. citizens on the territory to elect a governor. A 2012 referendum showed a majority (54% of those who voted) disagreed with "the present form of territorial status". A second question, asking about a preferred alternative model, showed full statehood as the favored option among those who voted for a change of status, although a significant number of people did not answer the second question of the referendum. A fifth status referendum was held on June 11, 2017, with "Statehood" and "Independence/Free Association" initially as the only available choices; at the recommendation of the Department of Justice, an option for the "current territorial status" was added. The referendum showed overwhelming support for statehood, with 97.18% voting for it, although turnout was historically low, with only 22.99% of registered voters casting ballots.
In early 2017, the Puerto Rican government-debt crisis posed serious problems for the government. The outstanding bond debt had climbed to $70 billion at a time when unemployment stood at 12.4%. The debt had been increasing during a decade-long recession. This was the second major financial crisis to affect the island, after the Great Depression, when the U.S. government, in 1935, provided relief efforts through the Puerto Rico Reconstruction Administration. On May 3, 2017, Puerto Rico's financial oversight board filed a debt restructuring petition in the U.S. District Court for Puerto Rico under Title III of PROMESA. By early August 2017, the debt was $72 billion and the poverty rate 45%. In late September 2017, Hurricane Maria hit Puerto Rico, causing devastating damage. The island's electrical grid was largely destroyed, with repairs expected to take months to complete, provoking the largest power outage in American history. Recovery efforts were somewhat slow in the first few months, and over 200,000 residents had moved to Florida alone by late November 2017.

Puerto Rico means "rich port" in Spanish. Puerto Ricans often call the island Borinquén, a derivation of Borikén, its indigenous Taíno name, which means "Land of the Valiant Lord". The terms boricua and borincano derive from Borikén and Borinquen respectively, and are commonly used to identify someone of Puerto Rican heritage. The island is also popularly known in Spanish as la isla del encanto, meaning "the island of enchantment". Columbus named the island San Juan Bautista, in honor of Saint John the Baptist, while the capital city was named Ciudad de Puerto Rico ("Rich Port City"). Eventually traders and other maritime visitors came to refer to the entire island as Puerto Rico, while San Juan became the name used for the main trading/shipping port and the capital city. The island's name was changed to "Porto Rico" by the United States after the Treaty of Paris of 1898. The anglicized name was used by the U.S. government and private enterprises. The name was changed back to Puerto Rico by a joint resolution in Congress introduced by Félix Córdova Dávila in 1931. The official name of the entity in Spanish is Estado Libre Asociado de Puerto Rico ("free associated state of Puerto Rico"), while its official English name is Commonwealth of Puerto Rico.

The ancient history of the archipelago that is now Puerto Rico is not well known. Unlike other indigenous cultures in the New World (Aztec, Maya and Inca), which left behind abundant archeological and physical evidence of their societies, scant artifacts and evidence remain of Puerto Rico's indigenous population. Scarce archaeological findings and early Spanish accounts from the colonial era constitute all that is known about them. The first comprehensive book on the history of Puerto Rico was written by Fray Íñigo Abbad y Lasierra in 1786, nearly three centuries after the first Spaniards landed on the island. The first known settlers were the Ortoiroid people, an Archaic Period culture of Amerindian hunters and fishermen who migrated from the South American mainland. Some scholars suggest their settlement dates back about 4,000 years. An archeological dig in 1990 on the island of Vieques found the remains of a man, designated the "Puerto Ferro Man", which were dated to around 2000 BC. The Ortoiroid were displaced by the Saladoid, a culture from the same region that arrived on the island between 430 and 250 BC.
The Igneri tribe migrated to Puerto Rico between 120 and 400 AD from the region of the Orinoco river in northern South America. The Arcaico and Igneri co-existed on the island between the 4th and 10th centuries. Between the 7th and 11th centuries, the Taíno culture developed on the island, and by approximately 1000 AD it had become dominant. At the time of Columbus's arrival, an estimated 30,000 to 60,000 Taíno Amerindians, led by the cacique (chief) Agüeybaná, inhabited the island. They called it Boriken, meaning "the great land of the valiant and noble Lord". The natives lived in small villages, each led by a cacique. They subsisted by hunting and fishing, done generally by men, as well as by the women's gathering and processing of indigenous cassava root and fruit. This way of life lasted until Columbus arrived in 1493.

When Columbus arrived in Puerto Rico during his second voyage on November 19, 1493, the island was inhabited by the Taíno, who called it Borikén (Borinquen in Spanish transliteration). Columbus named the island San Juan Bautista, in honor of St John the Baptist. Having reported the findings of his first voyage, Columbus brought with him this time a letter from King Ferdinand, empowered by a papal bull that authorized any course of action necessary for the expansion of the Spanish Empire and the Christian faith. Juan Ponce de León, a lieutenant under Columbus, founded the first Spanish settlement, Caparra, on August 8, 1508, and later served as the first governor of the island. Eventually, traders and other maritime visitors came to refer to the entire island as Puerto Rico, and San Juan became the name of the main trading/shipping port.

At the beginning of the 16th century, the Spanish began to colonize the island. Despite the Laws of Burgos of 1512 and other decrees for the protection of the indigenous population, some Taíno Indians were forced into an encomienda system of forced labor in the early years of colonization. The population suffered extremely high fatalities from epidemics of European infectious diseases. In 1520, King Charles I of Spain issued a royal decree collectively emancipating the remaining Taíno population. By that time, the Taíno people were few in number. Enslaved Africans had already begun to be imported to compensate for the loss of native labor, but their numbers were proportionate to the diminished commercial interest Spain soon began to demonstrate for the island colony. Other nearby islands, like Cuba, Saint-Domingue, and Guadeloupe, attracted more of the slave trade than Puerto Rico, probably because of greater agricultural interests in those islands, on which colonists had developed large sugar plantations and had the capital to invest in the Atlantic slave trade.

From the early years of the colony, the colonial administration relied heavily on the industry of enslaved Africans and creole blacks for public works and defenses, primarily in coastal ports and cities, where the tiny colonial population had hunkered down. With no significant industries or large-scale agricultural production as yet, enslaved and free communities lodged around the few littoral settlements, particularly around San Juan, forming lasting Afro-creole communities. Meanwhile, in the island's interior, there developed a mixed and independent peasantry that relied on a subsistence economy.
This mostly unsupervised population supplied villages and settlements with foodstuffs and, in relative isolation, set the pattern for what would later be known as the Puerto Rican Jíbaro culture. By the end of the 16th century, the Spanish Empire was diminishing and, in the face of increasing raids from European competitors, the colonial administration throughout the Americas fell into a "bunker mentality". Imperial strategists and urban planners redesigned port settlements into military posts with the objective of protecting Spanish territorial claims and ensuring the safe passage of the king's silver-laden Atlantic fleet to the Iberian Peninsula. San Juan served as an important port of call for ships driven across the Atlantic by its powerful trade winds. West Indies convoys linked Spain to the island, sailing between Cádiz and the Spanish West Indies. The colony's seat of government was on the forested islet of San Juan, which for a time became one of the most heavily fortified settlements in the Spanish Caribbean, earning the name of the "Walled City". The islet is still dotted with forts and walls, such as La Fortaleza, Castillo San Felipe del Morro, and Castillo San Cristóbal, designed to protect the population and the strategic Port of San Juan from raids by Spain's European competitors.

In 1625, in the Battle of San Juan, the Dutch commander Boudewijn Hendricksz tested the defenses' limits like no one before him. Learning from Francis Drake's previous failures there, he circumvented the cannons of the castle of San Felipe del Morro and quickly brought his 17 ships into San Juan Bay. He then occupied the port and attacked the city while the population hurried for shelter behind the Morro's moat and high battlements. Historians consider this event the worst attack on San Juan. Though the Dutch set the village on fire, they failed to conquer the Morro, whose batteries pounded their troops and ships until Hendricksz deemed the cause lost. Hendricksz's expedition eventually helped propel a fortification frenzy: construction of defenses for San Cristóbal Hill was soon ordered, so as to prevent the landing of invaders beyond the reach of the Morro's artillery. Urban planning responded to the needs of keeping the colony in Spanish hands.

During the late 16th and early 17th centuries, Spain concentrated its colonial efforts on the more prosperous mainland North, Central, and South American colonies. With the advent of the lively Bourbon Dynasty in Spain in the 1700s, the island of Puerto Rico began a gradual shift toward greater imperial attention. More roads began connecting previously isolated inland settlements to coastal cities, and coastal settlements like Arecibo, Mayaguez, and Ponce began acquiring importance of their own, separate from San Juan. By the end of the 18th century, merchant ships of an array of nationalities threatened the tight regulations of the mercantilist system, which turned each colony solely toward the European metropole and limited contact with other nations. U.S. ships came to surpass Spanish trade, and with this also came the exploitation of the island's natural resources. Slavers, which had made but few stops on the island before, began selling more enslaved Africans to growing sugar and coffee plantations.
The increasing number of Atlantic wars in which the Caribbean islands played major roles, like the War of Jenkins' Ear, the Seven Years' War, and the Atlantic Revolutions, ensured Puerto Rico's growing esteem in Madrid's eyes. On April 17, 1797, Sir Ralph Abercromby's fleet invaded the island with a force of 6,000 -- 13,000 men, including German soldiers and Royal Marines, aboard 60 to 64 ships. Fierce fighting with Spanish troops continued over the following days, and both sides suffered heavy losses. On Sunday, April 30, the British ceased their attack and began their retreat from San Juan. By the time independence movements in the larger Spanish colonies gained success, new waves of loyal creole immigrants had begun to arrive in Puerto Rico, helping to tilt the island's political balance toward the Crown.

In 1809, to secure its political bond with the island in the midst of the European Peninsular War, the Supreme Central Junta based in Cádiz recognized Puerto Rico as an overseas province of Spain. This gave the island's residents the right to elect representatives to the recently convened Spanish parliament (Cádiz Cortes), with representation equal to that of the mainland Iberian, Mediterranean (Balearic Islands), and Atlantic maritime (Canary Islands) Spanish provinces. Ramón Power y Giralt, the first Spanish parliamentary representative from the island of Puerto Rico, died after serving a three-year term in the Cortes. These parliamentary and constitutional reforms were in force from 1810 to 1814, and again from 1820 to 1823; they were twice reversed during the restoration of the traditional monarchy by Ferdinand VII. Immigration and commercial trade reforms in the 19th century increased the island's ethnic European population and economy and expanded the Spanish cultural and social imprint on the local character of the island.

Minor slave revolts had occurred on the island throughout the years; the revolt planned and organized by Marcos Xiorro in 1821 was the most important. Even though the conspiracy was unsuccessful, Xiorro achieved legendary status and is part of Puerto Rico's folklore. In the early 19th century, Puerto Rico spawned an independence movement that, owing to harsh persecution by the Spanish authorities, convened on the island of St. Thomas. The movement was largely inspired by the ideals of Simón Bolívar in establishing a United Provinces of New Granada and Venezuela that would include Puerto Rico and Cuba. Among the influential members of this movement were Brigadier General Antonio Valero de Bernabé and María de las Mercedes Barbudo. The movement was discovered, and Governor Miguel de la Torre had its members imprisoned or exiled.

With the increasingly rapid growth of independent former Spanish colonies in South and Central America in the first part of the 19th century, the Spanish Crown considered Puerto Rico and Cuba of strategic importance. To increase its hold on its last two New World colonies, the Spanish Crown revived the Royal Decree of Graces of 1815, as a result of which 450,000 immigrants, mainly Spaniards, settled on the island in the period up until the American conquest. Printed in three languages (Spanish, English, and French), the decree was intended also to attract non-Spanish Europeans, in the hope that the independence movements would lose their popularity if new settlers had stronger ties to the Crown. Hundreds of non-Spanish families, mainly from Corsica, France, Germany, Ireland, Italy and Scotland, also immigrated to the island.
Free land was offered as an incentive to those who wanted to populate the two islands, on the condition that they swear loyalty to the Spanish Crown and allegiance to the Roman Catholic Church. The offer was very successful, and European immigration continued even after 1898. Puerto Rico still receives Spanish and European immigration.

Poverty and political estrangement from Spain led to a small but significant uprising in 1868 known as the Grito de Lares. It began in the rural town of Lares but was subdued when rebels moved to the neighboring town of San Sebastián. Leaders of this independence movement included Ramón Emeterio Betances, considered the "father" of the Puerto Rican independence movement, and other political figures such as Segundo Ruiz Belvis. Slavery was abolished in Puerto Rico in 1873, "with provisions for periods of apprenticeship". Leaders of "El Grito de Lares" went into exile in New York City. Many joined the Puerto Rican Revolutionary Committee, founded on December 8, 1895, and continued their quest for Puerto Rican independence. In 1897, Antonio Mattei Lluberas and the local leaders of the independence movement in Yauco organized another uprising, which became known as the Intentona de Yauco. They raised what they called the Puerto Rican flag, which was adopted as the national flag. The local conservative political factions opposed independence. Rumors of the planned event spread to the local Spanish authorities, who acted swiftly and put an end to what would be the last major uprising on the island against Spanish colonial rule.

In 1897, Luis Muñoz Rivera and others persuaded the liberal Spanish government to agree to grant limited self-government to the island by royal decree in the Autonomic Charter, including a bicameral legislature. In 1898, Puerto Rico's first, but short-lived, quasi-autonomous government was organized as an "overseas province" of Spain. This bilaterally agreed-upon charter maintained a governor appointed by the King of Spain, who held the power to annul any legislative decision, and a partially elected parliamentary structure. In February, Governor-General Manuel Macías inaugurated the new government under the Autonomic Charter. General elections were held in March, and the new government began to function on July 17, 1898.

In 1890, Captain Alfred Thayer Mahan, a member of the Navy War Board and a leading U.S. strategic thinker, published a book titled The Influence of Sea Power upon History, in which he argued for the establishment of a large and powerful navy modeled after the British Royal Navy. Part of his strategy called for the acquisition of colonies in the Caribbean, which would serve as coaling and naval stations. They would also serve as strategic points of defense after the construction of a canal through the Isthmus of Panama, intended to allow easier passage of ships between the Atlantic and Pacific oceans. William H. Seward, the former Secretary of State under presidents Abraham Lincoln and Andrew Johnson, had also stressed the importance of building a canal in Honduras, Nicaragua or Panama. He suggested that the United States annex the Dominican Republic and purchase Puerto Rico and Cuba. The U.S. Senate did not approve his annexation proposal, and Spain rejected the U.S. offer of 160 million dollars for Puerto Rico and Cuba. Since 1894, the United States Naval War College had been developing contingency plans for a war with Spain. By 1896, the U.S.
Office of Naval Intelligence had prepared a plan that included military operations in Puerto Rican waters. Except for one 1895 plan, which recommended annexation of the island then named Isle of Pines (later renamed Isla de la Juventud), a recommendation dropped in later planning, the plans developed for attacks on Spanish territories were intended as support operations against Spain's forces in and around Cuba. Recent research suggests that the U.S. did consider Puerto Rico valuable as a naval station, and recognized that it and Cuba generated lucrative crops of sugar, a valuable commercial commodity which the United States lacked before the development of the sugar beet industry in the United States.

On July 25, 1898, during the Spanish -- American War, the U.S. invaded Puerto Rico with a landing at Guánica. As an outcome of the war, Spain ceded Puerto Rico, along with the Philippines and Guam, then under Spanish sovereignty, to the U.S. under the Treaty of Paris, which went into effect on April 11, 1899. Spain relinquished sovereignty over Cuba but did not cede it to the U.S. The United States and Puerto Rico thus began a long-standing metropolis-colony relationship.

In the early 20th century, Puerto Rico was ruled by the military, with officials, including the governor, appointed by the President of the United States. The Foraker Act of 1900 gave Puerto Rico a certain amount of civilian popular government, including a popularly elected House of Representatives. The upper house and the governor were appointed by the United States. Its judicial system was constructed to follow the American legal system; a Puerto Rico Supreme Court and a United States District Court for the territory were established. The Act provided for a non-voting member of Congress, with the title of "Resident Commissioner", who was appointed. In addition, the Act extended all U.S. laws "not locally inapplicable" to Puerto Rico, specifying, in particular, exemption from U.S. Internal Revenue laws. The Act empowered the civil government to legislate on "all matters of legislative character not locally inapplicable", including the power to modify and repeal any laws then in existence in Puerto Rico, though the U.S. Congress retained the power to annul acts of the Puerto Rico legislature.

During an address to the Puerto Rican legislature in 1906, President Theodore Roosevelt recommended that Puerto Ricans become U.S. citizens. In 1914, the Puerto Rican House of Delegates voted unanimously in favor of independence from the United States, but this was rejected by the U.S. Congress as "unconstitutional" and in violation of the 1900 Foraker Act. In 1917, the U.S. Congress passed the Jones -- Shafroth Act, popularly called the Jones Act, which granted U.S. citizenship to Puerto Ricans born on or after April 25, 1898. Opponents, who included all of the Puerto Rican House of Delegates (which voted unanimously against it), said that the U.S. imposed citizenship in order to draft Puerto Rican men into the army as American entry into World War I became likely. The same Act provided for a popularly elected Senate to complete a bicameral Legislative Assembly, as well as a bill of rights, and authorized the popular election of the Resident Commissioner to a four-year term.

Natural disasters, including a major earthquake and tsunami in 1918 and several hurricanes, as well as the Great Depression, impoverished the island during the first few decades under U.S. rule.
Some political leaders, such as Pedro Albizu Campos, who led the Puerto Rican Nationalist Party, demanded change in relations with the United States. He organized a protest at the University of Puerto Rico in 1935, in which four were killed by police. In 1936, U.S. Senator Millard Tydings introduced a bill supporting independence for Puerto Rico. (Tydings had co-sponsored the Tydings-McDuffie Act, which provided independence to the Philippines after a 10-year transition under limited autonomy.) All the Puerto Rican parties supported the bill except the Liberal Party of Puerto Rico, whose leader, Luis Muñoz Marín, opposed it, and Tydings did not gain passage of the bill. In 1937, Albizu Campos' party organized a protest in Ponce in which numerous people were killed by police. The Insular Police, a force resembling the National Guard, opened fire upon unarmed cadets and bystanders alike. The attack on unarmed protesters was reported by U.S. Congressman Vito Marcantonio and confirmed by the report of the Hays Commission, which investigated the events. The commission was led by Arthur Garfield Hays, counsel to the American Civil Liberties Union. Nineteen people were killed and over 200 were badly wounded, many shot in the back while running away. The Hays Commission declared it a massacre and police mob action, and it has since been known as the Ponce massacre. In the aftermath, on April 2, 1943, Tydings introduced another bill in Congress calling for independence for Puerto Rico. This bill was ultimately defeated. During the latter years of the Roosevelt and Truman administrations, the internal governance was changed in a compromise reached with Luis Muñoz Marín and other Puerto Rican leaders. In 1946, President Truman appointed the first Puerto Rican-born governor, Jesús T. Piñero. Since 2007, the Puerto Rico State Department has developed a protocol to issue certificates of Puerto Rican citizenship to Puerto Ricans. In order to be eligible, applicants must have been born in Puerto Rico; born outside of Puerto Rico to a Puerto Rican-born parent; or be an American citizen with at least one year of residence in Puerto Rico. In 1947, the U.S. granted Puerto Ricans the right to democratically elect their own governor, and in 1948, Luis Muñoz Marín became the first popularly elected governor of Puerto Rico. That year, a bill was introduced before the Puerto Rican Senate which would restrain the rights of the independence and nationalist movements on the island. The Senate at the time was controlled by the Popular Democratic Party (PPD) and presided over by Luis Muñoz Marín. The bill, also known as the Gag Law (Spanish: Ley de la Mordaza), was approved by the legislature on May 21, 1948. It made it illegal to display a Puerto Rican flag, to sing a pro-independence tune, to talk of independence, or to campaign for independence. The bill, which resembled the Smith Act passed in the United States, was signed into law on June 10, 1948, by the U.S.-appointed governor of Puerto Rico, Jesús T. Piñero, and became known as "Law 53" (Spanish: Ley 53). Under this law, it was a crime to print, publish, sell, or exhibit material intended to paralyze or destroy the insular government, or to organize or help anyone organize any society, group or assembly of people with that intent. Anyone accused and found guilty of disobeying the law could be sentenced to ten years in prison, a fine of $10,000, or both. According to Dr.
Leopoldo Figueroa, a member of the Puerto Rico House of Representatives, the law was repressive and in violation of the First Amendment of the U.S. Constitution, which guarantees freedom of speech. He asserted that the law as such was a violation of the civil rights of the people of Puerto Rico. The law was repealed in 1957. In 1950, the U.S. Congress granted Puerto Ricans the right to organize a constitutional convention via a referendum in which they could vote their preference, "yes" or "no", on a proposed U.S. law that would organize Puerto Rico as a "commonwealth" under continued United States sovereignty over Puerto Rico and its people. Puerto Rico's electorate expressed its support for this measure in 1951, with a second referendum held to ratify the constitution. The Constitution of Puerto Rico was formally adopted on July 3, 1952. The Constitutional Convention specified the name by which the body politic would be known. On February 4, 1952, the convention approved Resolution 22, which chose in English the word Commonwealth, meaning a "politically organized community" or "state", which is simultaneously connected by a compact or treaty to another political system. Puerto Rico officially designates itself with the term "Commonwealth of Puerto Rico" in its constitution, as the English translation of the Spanish term "Estado Libre Asociado" (ELA). In 1967, Puerto Rico's Legislative Assembly polled the political preferences of the Puerto Rican electorate by passing a plebiscite act that provided for a vote on the status of Puerto Rico. This constituted the first plebiscite by the Legislature for a choice among three status options (commonwealth, statehood, and independence). In subsequent plebiscites organized by Puerto Rico and held in 1993 and 1998 (without any formal commitment on the part of the U.S. Government to honor the results), the current political status failed to receive majority support. In 1993, Commonwealth status won by a plurality of votes (48.6% versus 46.3% for statehood), while the "none of the above" option, which was the Popular Democratic Party-sponsored choice, won in 1998 with 50.3% of the votes (versus 46.5% for statehood). Disputes arose as to the definition of each of the ballot alternatives, and Commonwealth advocates, among others, reportedly urged a vote for "none of the above". In 1950, the U.S. Congress approved Public Law 600 (P.L. 81-600), which allowed for a democratic referendum in Puerto Rico to determine whether Puerto Ricans desired to draft their own local constitution. This Act was meant to be adopted in the "nature of a compact". It required congressional approval of the Puerto Rico Constitution before it could go into effect, and repealed certain sections of the Organic Act of 1917. The sections of this statute left in force were entitled the Puerto Rican Federal Relations Act. U.S. Secretary of the Interior Oscar L. Chapman, under whose department resided responsibility for Puerto Rican affairs, issued a statement clarifying the new commonwealth status. On October 30, 1950, Pedro Albizu Campos and other nationalists led a three-day revolt against the United States in various cities and towns of Puerto Rico, in what is known as the Puerto Rican Nationalist Party Revolts of the 1950s. The most notable occurred in Jayuya and Utuado.
During the Jayuya revolt, known as the Jayuya Uprising, the governor declared martial law and attacked the insurgents in Jayuya with infantry, artillery and bombers under the control of the Puerto Rican commander. The Utuado uprising culminated in what is known as the Utuado massacre. On November 1, 1950, Puerto Rican nationalists from New York City, Griselio Torresola and Oscar Collazo, attempted to assassinate President Harry S. Truman at his temporary residence, Blair House. Torresola was killed during the attack, but Collazo was wounded and captured. He was convicted of murder and sentenced to death, but President Truman commuted his sentence to life imprisonment. After Collazo served 29 years in a federal prison, President Jimmy Carter commuted his sentence to time served, and he was released in 1979. Pedro Albizu Campos served many years in a federal prison in Atlanta for seditious conspiracy to overthrow the U.S. government in Puerto Rico. The Constitution of Puerto Rico was approved by a Constitutional Convention on February 6, 1952, and by 82% of the voters in a March referendum. It was modified and ratified by the U.S. Congress, approved by President Truman on July 3 of that year, and proclaimed by Governor Muñoz Marín on July 25, 1952, the anniversary of the July 25, 1898, landing of U.S. troops in the Puerto Rican Campaign of the Spanish-American War, a date until then celebrated as an annual Puerto Rico holiday. Puerto Rico adopted the name Estado Libre Asociado de Puerto Rico (literally "Associated Free State of Puerto Rico"), officially translated into English as Commonwealth, for its body politic. "The United States Congress legislates over many fundamental aspects of Puerto Rican life, including citizenship, the currency, the postal service, foreign policy, military defense, communications, labor relations, the environment, commerce, finance, health and welfare, and many others." During the 1950s and 1960s, Puerto Rico experienced rapid industrialization, due in large part to Operación Manos a la Obra ("Operation Bootstrap"), an offshoot of FDR's New Deal. It was intended to transform Puerto Rico's economy from agriculture-based to manufacturing-based to provide more jobs. Puerto Rico has become a major tourist destination, as well as a global center for pharmaceutical manufacturing. Four referendums have been held since the late 20th century to resolve the political status. The 2012 referendum showed a majority (54% of the voters) in favor of a change in status, with full statehood the preferred option of those who wanted a change. Because there were almost 500,000 blank ballots in the 2012 referendum, creating confusion as to the voters' true desire, Congress decided to ignore the vote. The first three plebiscites provided voters with three options: statehood, free association, and independence. The status referendum held in June 2017 was originally going to offer only two options: statehood and independence/free association. However, a letter from the Donald Trump administration recommended adding the current Commonwealth status to the ballot. That option had been removed in response to the 2012 plebiscite, in which voters had been asked whether to remain in the current status and "No" had won; the Trump administration cited demographic changes during the intervening five years in recommending that the option be restored.
Amendments to the plebiscite bill were adopted, making ballot wording changes requested by the Department of Justice as well as adding a "current territorial status" option. While 97 percent voted in favor of statehood, turnout was low: only some 23 percent voted. After the ballots were counted, the Justice Department was noncommittal. The Justice Department had asked for the 2017 plebiscite to be postponed, but the Rosselló government chose not to do so. After the outcome was announced, the department told the Associated Press that it had "not reviewed or approved the ballot's language". Former Governor Aníbal Acevedo Vilá (2005-2009) is convinced that statehood is not the solution for either the U.S. or Puerto Rico "for economic, identity and cultural reasons". He pointed out that voter turnout for the 2017 referendum was extremely low, and suggested that a different type of mutually beneficial relationship should be found. If the federal government agrees to discuss an association agreement, the conditions would be negotiated between the two entities. The agreement might cover topics such as the role of the U.S. military in Puerto Rico, the use of the U.S. currency, free trade between the two entities, and whether Puerto Ricans would be U.S. citizens. The three current freely associated states (the Marshall Islands, Micronesia and Palau) use the American dollar and receive financial support and the promise of military defense, on the condition that they refuse military access to any other country. Their citizens are allowed to work in the U.S. and serve in its military. Governor Ricardo Rosselló is strongly in favor of statehood to help develop the economy and to "solve our 500-year-old colonial dilemma... Colonialism is not an option... It's a civil rights issue... 3.5 million citizens seeking an absolute democracy," he told the news media. Benefits of statehood include an additional $10 billion per year in federal funds, the right to vote in presidential elections, higher Social Security and Medicare benefits, and a right for its government agencies and municipalities to file for bankruptcy; the latter is currently prohibited. Statehood might be useful as a means of dealing with the financial crisis, since it would allow for bankruptcy and the relevant protection. According to the Government Development Bank, this might be the only solution to the debt crisis. Congress has the power to vote to allow Chapter 9 protection without the need for statehood, but in late 2015 there was very little support in the House for this concept. Other benefits of statehood would include increased disability benefits and Medicaid funding and the higher federal minimum wage. Subsequent to the 2017 referendum, Puerto Rico's legislators were also expected to vote on a bill that would allow the governor to draft a state constitution and hold elections to choose senators and representatives to the federal Congress. In spite of the outcome of the referendum and the so-called Tennessee Plan (above), action by the United States Congress would be necessary to implement changes to the status of Puerto Rico under the Territorial Clause of the United States Constitution. Since 1953, the UN has been considering the political status of Puerto Rico and how to assist it in achieving "independence" or "decolonization". In 1978, the Special Committee determined that a "colonial relationship" existed between the U.S. and Puerto Rico.
The UN's Special Committee on Decolonization has often referred to Puerto Rico as a "nation" in its reports, because, internationally, the people of Puerto Rico are often considered to be a Caribbean nation with their own national identity. Most recently, in a June 2016 report, the Special Committee called for the United States to expedite the process to allow self-determination in Puerto Rico. More specifically, the group called on the United States to expedite a process "that would allow the people of Puerto Rico to exercise fully their right to self-determination and independence... allow the Puerto Rican people to take decisions in a sovereign manner, and to address their urgent economic and social needs, including unemployment, marginalization, insolvency and poverty". On November 27, 1953, shortly after the establishment of the Commonwealth, the General Assembly of the United Nations approved Resolution 748, removing Puerto Rico's classification as a non-self-governing territory. The General Assembly did not apply the full list of criteria that was enunciated in 1960 when it took favorable note of the cessation of transmission of information regarding the non-self-governing status of Puerto Rico. According to the White House Task Force on Puerto Rico's Political Status in its December 21, 2007 report, the U.S., in its written submission to the UN in 1953, never represented that Congress could not change its relationship with Puerto Rico without the territory's consent. It stated that the U.S. Justice Department in 1959 reiterated that Congress held power over Puerto Rico pursuant to the Territorial Clause of the U.S. Constitution. In 1993, the United States Court of Appeals for the Eleventh Circuit stated that Congress may unilaterally repeal the Puerto Rican Constitution or the Puerto Rican Federal Relations Act and replace them with any rules or regulations of its choice. In a 1996 report on a Puerto Rico status political bill, the U.S. House Committee on Resources stated that "Puerto Rico's current status does not meet the criteria for any of the options for full self-government under Resolution 1541" (the three established forms of full self-government being stated in the report as (1) national independence, (2) free association based on separate sovereignty, or (3) full integration with another nation on the basis of equality). The report concluded that Puerto Rico "... remains an unincorporated colony and does not have the status of 'free association' with the United States as that status is defined under United States law or international practice", that the establishment of local self-government with the consent of the people can be unilaterally revoked by the U.S. Congress, and that the U.S. Congress can also withdraw the U.S. citizenship of Puerto Rican residents of Puerto Rico at any time, for a legitimate federal purpose. The application of the U.S. Constitution to Puerto Rico is limited by the Insular Cases. In 2006, 2007, 2009, 2010, and 2011, the United Nations Special Committee on Decolonization passed resolutions calling on the United States to expedite a process "that would allow Puerto Ricans to fully exercise their inalienable right to self-determination and independence", to release all Puerto Rican political prisoners in U.S. prisons, to clean up, decontaminate and return the lands of the islands of Vieques and Culebra to the people of Puerto Rico, and to perform a probe into U.S.
human rights violations on the island and a probe into the killing by the FBI of pro-independence leader Filiberto Ojeda Ríos. On July 15, 2009, the United Nations Special Committee on Decolonization approved a draft resolution calling on the Government of the United States to expedite a process that would allow the Puerto Rican people to exercise fully their inalienable right to self-determination and independence. On April 29, 2010, the U.S. House voted 223-169 to approve a measure for a federally sanctioned process for Puerto Rico's self-determination, allowing Puerto Rico to set a new referendum on whether to continue its present form of commonwealth or to have a different political status. If Puerto Ricans voted to continue as a commonwealth, the Government of Puerto Rico was authorized to conduct additional plebiscites at intervals of every eight years from the date on which the results of the prior plebiscite were certified; if Puerto Ricans voted to have a different political status, a second referendum would determine whether Puerto Rico would become a U.S. state, an independent country, or a sovereign nation associated with the U.S. that would not be subject to the Territorial Clause of the United States Constitution. During the House debate, a fourth option, to retain the present form of commonwealth (sometimes referred to as "the status quo") political status, was added as an option in the second plebiscite. Immediately following U.S. House passage, H.R. 2499 was sent to the U.S. Senate, where it was given two formal readings and referred to the Senate Committee on Energy and Natural Resources. On December 22, 2010, the 111th United States Congress adjourned without any Senate vote on H.R. 2499, killing the bill. The latest Task Force report was released on March 11, 2011. It suggested a two-plebiscite process, including a "first plebiscite that requires the people of Puerto Rico to choose whether they wish to be part of the United States (either via Statehood or Commonwealth) or wish to be independent (via Independence or Free Association). If continuing to be part of the United States were chosen in the first plebiscite, a second vote would be taken between Statehood and Commonwealth." On June 14, 2011, President Barack Obama promised to support "a clear decision" by the people of Puerto Rico on statehood. That same month, the United Nations Special Committee on Decolonization passed a resolution and adopted a consensus text, introduced by Cuba's delegate on June 20, 2011, calling on the United States to expedite a process "that would allow Puerto Ricans to fully exercise their inalienable right to self-determination and independence". On November 6, 2012, a two-question referendum took place, simultaneous with the general elections. The first question asked voters whether they wanted to maintain the current status under the territorial clause of the U.S. Constitution. The second question posed three alternate status options if the first question was approved: statehood, independence or free association. For the first question, 54 percent voted against the current Commonwealth status. For the second question, 61.16% voted for statehood, 33.34% for a sovereign free associated state, and 5.49% for independence. There were also 515,348 blank and invalidated ballots, which are not reflected in the final tally, as they are not considered cast votes under Puerto Rico law.
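Those excluded ballots are the source of the dispute noted earlier: the statehood option's 61.16% is a share of valid votes only, and its share of all ballots deposited is considerably lower once the 515,348 blank and invalidated ballots are counted in the denominator. A minimal sketch of that arithmetic follows, using the figures quoted above together with a deliberately round, hypothetical valid-vote total (the certified totals are not quoted in this article):

```python
# Illustrative only. The 61.16% share and the 515,348 blank/invalidated
# ballots come from the text above; VALID_VOTES is a hypothetical round
# figure assumed for illustration, not the certified referendum total.
STATEHOOD_SHARE_OF_VALID = 0.6116
BLANK_AND_INVALIDATED = 515_348
VALID_VOTES = 1_300_000  # assumption, not the official count

statehood_votes = STATEHOOD_SHARE_OF_VALID * VALID_VOTES
all_ballots = VALID_VOTES + BLANK_AND_INVALIDATED
print(f"Share of valid votes: {STATEHOOD_SHARE_OF_VALID:.1%}")      # 61.2%
print(f"Share of all ballots: {statehood_votes / all_ballots:.1%}") # ~43.8%
```

Under any valid-vote total of roughly this size, counting the blanks pulls statehood below an absolute majority of ballots deposited, which is why Congress and commentators disputed what the vote had actually shown.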
On December 11, 2012, Puerto Rico's Legislature passed a concurrent resolution requesting that the President and the U.S. Congress act on the November 6, 2012 plebiscite results. On April 10, 2013, with the issue still being widely debated, the White House announced that it would seek $2.5 million to hold another referendum, the first Puerto Rican status referendum to be financed by the U.S. federal government. In December 2015, the U.S. Government submitted a brief as amicus curiae to the U.S. Supreme Court in the case Commonwealth of Puerto Rico v. Sanchez Valle. The U.S. Government's official position is that the U.S. Constitution does not contemplate "sovereign territories": the Court has consistently recognized that "there is no sovereignty in a Territory of the United States but that of the United States itself" and that a U.S. territory has "no independent sovereignty comparable to that of a state", because "the Government of a territory owes its existence wholly to the United States". Congress's plenary authority over federal territories includes the authority to permit self-government, whereby local officials administer a territory's internal affairs. On June 9, 2016, in Commonwealth of Puerto Rico v. Sanchez Valle, a 6-2 majority of the Supreme Court of the United States determined that Puerto Rico is a territory and lacks sovereignty. On June 30, 2016, the President of the United States signed a new law approved by the U.S. Congress, H.R. 5278 (PROMESA), establishing a control board over the Puerto Rico government. The board involves a significant degree of federal control in its establishment and operations: the authority to establish it derives from the federal government's constitutional power to "make all needful rules and regulations" regarding U.S. territories; the President appoints all seven voting members of the board; and the board has broad sovereign powers to effectively overrule decisions by Puerto Rico's legislature, governor, and other public authorities. The November 6, 2012 referendum on statehood, independence, or an associated republic was the first in which the people of Puerto Rico requested the conclusion of the island's current territorial status: almost 78% of registered voters participated, a slim but clear majority (54%) disagreed with Puerto Rico maintaining its present territorial status, and, among the possible alternatives, sixty-one percent (61%) of voters chose the statehood option, while one third of the ballots were submitted blank. In its December 11, 2012 concurrent resolution, the Legislative Assembly of Puerto Rico requested that the President and the Congress of the United States respond to the referendum, end the current form of territorial status, and begin the process to admit Puerto Rico as a state; the initiative has not made Puerto Rico into a state. In May 2017, the Natural Resources Defense Council reported that Puerto Rico's water system was the worst in the nation as measured by the Clean Water Act, with 70% of the population drinking water that violated U.S. law. In late September 2017, Hurricane Maria hit the island as a Category 4 storm, causing severe damage to homes, other buildings and infrastructure.
As of late November, recovery was slow, though progress had been made. Electricity had been restored to two-thirds of the island, although there was some doubt as to the number of residents actually getting reliable power. In January 2018, it was reported that close to 40 percent of the island's customers still did not have electricity. The vast majority had access to water but were still required to boil it. The number still living in shelters had dropped to 982, with thousands of others living with relatives. The official death toll at the time was 58, but some sources indicated that the actual number was much higher. A dam on the island was close to failure, and officials were concerned about additional flooding from this source. Thousands had left Puerto Rico, with close to 200,000 having arrived in Florida alone. Those who were then living on the mainland experienced difficulty in getting health care benefits. A New York Times report on November 27 said it was understandable that Puerto Ricans wanted to leave the island: "Basic essentials are hard to find and electricity and other utilities are unreliable or entirely inaccessible. Much of the population has been unable to return to jobs or to school and access to health care has been severely limited." The Center for Puerto Rican Studies at New York's Hunter College estimated that some half million people, about 14% of the population, might permanently leave by 2019. The total damage on the island was estimated at up to $95 billion. By the end of November, FEMA had received over a million applications for aid and had approved about a quarter of them. The U.S. government had agreed in October to provide funding to rebuild, plus up to $4.9 billion in loans to help the island's government. FEMA had $464 million earmarked to help local governments rebuild public buildings and infrastructure. Bills for other funding were being considered in Washington, but little progress had been made on those. A November 28, 2017 report by the Sierra Club included this comment: "It will take years to rebuild Puerto Rico, not just from the worst hurricane to make landfall since 1932, but to sustainably overcome environmental injustices which made Maria's devastation even more catastrophic". Puerto Rico consists of the main island of Puerto Rico and various smaller islands, including Vieques, Culebra, Mona, Desecheo, and Caja de Muertos. Of these five, only Culebra and Vieques are inhabited year-round. Mona, which has played a key role in maritime history, is uninhabited most of the year except for employees of the Puerto Rico Department of Natural Resources. There are many other even smaller islets, like Monito, near Mona, and Isla de Cabras and La Isleta de San Juan, both located in San Juan Bay. The latter is the only inhabited islet, with communities such as Old San Juan and Puerta de Tierra, and is connected to the main island by bridges. The Commonwealth of Puerto Rico has an area of 13,790 square kilometers (5,320 sq mi), of which 8,870 km² (3,420 sq mi) is land and 4,921 km² (1,900 sq mi) is water. Puerto Rico is larger than two U.S. states, Delaware and Rhode Island. The maximum length of the main island from east to west is 180 km (110 mi), and the maximum width from north to south is 65 km (40 mi). Puerto Rico is the smallest of the Greater Antilles: it is 80% of the size of Jamaica, just over 18% of the size of Hispaniola and 8% of the size of Cuba, the largest of the Greater Antilles.
The island is mostly mountainous, with large coastal areas in the north and south. The main mountain range is called "La Cordillera Central" (The Central Range). The highest elevation in Puerto Rico, Cerro de Punta at 1,338 meters (4,390 ft), is located in this range. Another important peak is El Yunque, one of the highest in the Sierra de Luquillo in the El Yunque National Forest, with an elevation of 1,065 m (3,494 ft). Puerto Rico has 17 lakes, all man-made, and more than 50 rivers, most originating in the Cordillera Central. Rivers in the northern region of the island are typically longer and carry more water than those in the south, since the south receives less rain than the central and northern regions. Puerto Rico is composed of Cretaceous to Eocene volcanic and plutonic rocks, overlain by younger Oligocene and more recent carbonates and other sedimentary rocks. Most of the caverns and karst topography on the island occur in the northern region, in the carbonates. The oldest rocks are approximately 190 million years old (Jurassic) and are located at Sierra Bermeja in the southwest part of the island. They may represent part of the oceanic crust and are believed to come from the Pacific Ocean realm. Puerto Rico lies at the boundary between the Caribbean and North American plates and is being deformed by the tectonic stresses caused by their interaction. These stresses may cause earthquakes and tsunamis. These seismic events, along with landslides, represent some of the most dangerous geologic hazards on the island and in the northeastern Caribbean. The most recent major earthquake occurred on October 11, 1918, and had an estimated magnitude of 7.5 on the Richter scale. It originated off the coast of Aguadilla, several kilometers off the northern coast, and was accompanied by a tsunami. It caused extensive property damage and widespread losses, damaging infrastructure, especially bridges, and resulted in an estimated 116 deaths and $4 million in property damage. The failure of the government to move rapidly to provide for the general welfare contributed to political activism by opponents and eventually to the rise of the Puerto Rican Nationalist Party. The Puerto Rico Trench, the largest and deepest trench in the Atlantic, is located about 115 km (71 mi) north of Puerto Rico at the boundary between the Caribbean and North American plates. It is 280 km (170 mi) long. At its deepest point, named the Milwaukee Deep, it is almost 8,400 m (27,600 ft) deep. The climate of Puerto Rico in the Köppen climate classification is tropical rainforest. Temperatures are warm to hot year round, averaging near 85 °F (29 °C) in lower elevations and 70 °F (21 °C) in the mountains. Easterly trade winds pass across the island year round. Puerto Rico has a rainy season which stretches from April into November. The mountains of the Cordillera Central are the main cause of the variations in temperature and rainfall that occur over very short distances. The mountains can also cause wide variation in local wind speed and direction due to their sheltering and channeling effects, adding to the climatic variation. The island has an average temperature of 82.4 °F (28 °C) throughout the year, with an average minimum temperature of 66.9 °F (19 °C) and maximum of 85.4 °F (30 °C). Seasonal variation in daily temperature is quite small in the lowlands and coastal areas.
The temperature in the south is usually a few degrees higher than in the north, and temperatures in the central interior mountains are always cooler than those on the rest of the island. Between the dry and wet seasons, there is a temperature change of around 6 °F (3.3 °C), mainly due to the warm waters of the tropical Atlantic Ocean, which significantly modify cooler air moving in from the north and northwest. Coastal water temperatures range from about 75 °F (24 °C) in February to 85 °F (29 °C) in August. The highest temperature ever recorded was 99 °F (37 °C) at Arecibo, while the lowest ever recorded was 40 °F (4 °C) in the mountains at Adjuntas, Aibonito, and Corozal. The average yearly precipitation is 1,687 mm (66 in). Puerto Rico experiences the Atlantic hurricane season, similar to the remainder of the Caribbean Sea and North Atlantic oceans. On average, a quarter of its annual rainfall comes from tropical cyclones, which are more prevalent during periods of La Niña than El Niño. A cyclone of tropical storm strength passes near Puerto Rico, on average, every five years; a hurricane passes in the vicinity of the island, on average, every seven years. Since 1851, the Lake Okeechobee Hurricane of September 1928 is the only hurricane to have made landfall in Puerto Rico as a Category 5 storm. In the busy 2017 Atlantic hurricane season, Puerto Rico avoided a direct hit by the Category 5 Hurricane Irma on September 8, 2017, but high winds caused a loss of electrical power to some one million residents, and almost 50% of hospitals were operating with power provided by generators. The Category 4 Hurricane Jose, as expected, veered away from Puerto Rico. A short time later, the devastating Hurricane Maria made landfall in Puerto Rico as a Category 4 hurricane, with sustained winds of 155 mph (249 km/h), powerful rains and widespread flooding causing tremendous destruction, including to the electrical grid, which was expected to remain out for four to six months. With such widespread destruction and a great need for supplies, everything from drinking water, food, medicine and personal care items to fuel for generators and construction materials for rebuilding the island, Governor Rosselló and several members of Congress called on the federal government to waive the WWI-era Jones Act, whose protectionist provisions (ships must be made and owned in the U.S. and crewed by U.S. citizens) essentially double Puerto Rico's costs for shipped goods relative to neighboring islands. On September 28, U.S. President Donald Trump waived the Act for ten days. Species endemic to the archipelago number 239 plants, 16 birds and 39 amphibians/reptiles, recognized as of 1998. Most of these (234, 12 and 33, respectively) are found on the main island. The most recognizable endemic species and a symbol of Puerto Rican pride is the coquí, a small frog easily identified by the sound of its call, from which it gets its name. Most coquí species (13 of 17) live in the El Yunque National Forest, a tropical rainforest in the northeast of the island previously known as the Caribbean National Forest. El Yunque is home to more than 240 plant species, 26 of which are endemic to the island. It is also home to 50 bird species, including the critically endangered Puerto Rican amazon. Across the island in the southwest, the 40 km² (15 sq mi) of dry land at the Guánica Commonwealth Forest Reserve contain over 600 uncommon species of plants and animals, including 48 endangered species and 16 endemic to Puerto Rico.
The population of Puerto Rico has been shaped by Amerindian settlement, European colonization, slavery, economic migration, and Puerto Rico's status as an unincorporated territory of the United States. The estimated population of Puerto Rico as of July 1, 2015 was 3,474,182, a 6.75% decrease since the 2010 United States Census. From 2000 to 2010, the population decreased, the first such decrease in census history for Puerto Rico: it went from the 3,808,610 residents registered in the 2000 Census to 3,725,789 in the 2010 Census. A declining and aging population presents additional problems for the society. The U.S. Census Bureau's estimate for July 1, 2016 was 3,411,307 people, down substantially from the 2010 count. Continuous European immigration and high natural increase helped the population of Puerto Rico grow from 155,426 in 1800 to almost a million by the close of the 19th century. A census conducted by royal decree on September 30, 1858, gave the following totals for the Puerto Rican population at that time: 341,015 free colored; 300,430 who identified as white; and 41,736 slaves. During the 19th century, hundreds of families arrived in Puerto Rico, primarily from the Canary Islands and Andalusia, but also from other parts of Spain such as Catalonia, Asturias, Galicia and the Balearic Islands, along with numerous Spanish loyalists from Spain's former colonies in South America. Settlers from outside Spain also arrived in the islands, including from Corsica, France, Lebanon, China, Portugal, Ireland, Scotland, Germany and Italy. This immigration from non-Hispanic countries was the result of the Real Cédula de Gracias de 1815 ("Royal Decree of Graces of 1815"), which allowed European Catholics to settle in the island with land allotments in the interior, provided they paid taxes and continued to support the Catholic Church. Between 1960 and 1990, the census questionnaire in Puerto Rico did not ask about race or ethnicity. The 2000 United States Census included a racial self-identification question in Puerto Rico. According to the census, most Puerto Ricans identified as White and Hispanic; few identified as Black or some other race. A recent population genetics study conducted in Puerto Rico suggests that between 52.6% and 84% of the population possess some degree of Amerindian mitochondrial DNA (mtDNA) in their maternal ancestry, usually in combination with other ancestries, such as the aboriginal Guanche (North-West African) ancestry brought by Spanish settlers from the Canary Islands; these DNA studies also show Amerindian ancestry in addition to that of the Taíno. One genetic study on the racial makeup of Puerto Ricans (including all races) found them to be roughly 61% West Eurasian/North African (overwhelmingly of Spanish provenance), 27% Sub-Saharan African and 11% Native American. Another genetic study, from 2007, claimed that "the average genomewide individual (i.e., Puerto Rican) ancestry proportions have been estimated as 66%, 18%, and 16%, for European, West African, and Native American, respectively." Yet another study estimates 63.7% European, 21.2% (Sub-Saharan) African, and 15.2% Native American; European ancestry is more prevalent in the west and center of Puerto Rico, African in the east, and Native American in the north. A Pew Research survey indicated an adult literacy rate of 90.4% in 2012, based on data from the United Nations, and a life expectancy of 79.3 years.
Puerto Rico has recently become the permanent home of over 100,000 legal residents. The vast majority of recent immigrants, both legal and illegal, come from the Dominican Republic and Haiti. Other places of origin sending significant numbers of recent immigrants include Cuba, Mexico, Colombia, Panama, Jamaica, Venezuela, Spain, and Nigeria. In addition, many non-Puerto Rican U.S. citizens have settled in Puerto Rico from the mainland United States and the U.S. Virgin Islands, as well as Nuyoricans (stateside Puerto Ricans) coming back to the island. Most recent immigrants settle in and around San Juan. Emigration is a major part of contemporary Puerto Rican history. Starting soon after World War II, poverty, cheap airfares, and promotion by the island government caused waves of Puerto Ricans to move to the United States, particularly to the Northeastern states and Florida. This trend continued even as Puerto Rico's economy improved and its birth rate declined. Puerto Ricans continue to follow a pattern of "circular migration", with some migrants returning to the island. In recent years, the population has declined markedly, falling nearly 1% in 2012 and an additional 1% (36,000 people) in 2013 due to a falling birthrate and emigration. According to the 2010 Census, the number of Puerto Ricans living in the United States outside of Puerto Rico far exceeds the number living in Puerto Rico. Emigration exceeds immigration, and as those who leave tend to be better educated than those who remain, this accentuates the drain on Puerto Rico's economy. Based on the July 1, 2016 estimate by the U.S. Census Bureau, the population of the Commonwealth had declined by 314,482 people since the 2010 Census data had been tabulated. The most populous city is the capital, San Juan, with approximately 371,400 people based on a 2015 estimate by the Census Bureau. Other major cities include Bayamón, Carolina, Ponce, and Caguas. Of the ten most populous cities on the island, eight are located within what is considered San Juan's metropolitan area, while the other two are located in the south (Ponce) and west (Mayagüez) of the island. The official languages of the executive branch of government of Puerto Rico are Spanish and English, with Spanish being the primary language. Spanish is, and has been, the only official language of the entire Commonwealth judiciary system, despite a 1902 English-only language law. However, all official business of the U.S. District Court for the District of Puerto Rico is conducted in English. English is the primary language of less than 10% of the population, while Spanish is the dominant language of business, education and daily life on the island, spoken by nearly 95% of the population. The U.S. Census Bureau's 2015 update provides the following figures: 94.1% of adults speak Spanish, 5.8% speak only English, and 78.3% do not speak English "very well". In Puerto Rico, public school instruction is conducted almost entirely in Spanish. There have been pilot programs in about a dozen of the over 1,400 public schools aimed at conducting instruction in English only. Objections from teaching staff are common, perhaps because many of them are not fully fluent in English. English is taught as a second language and is a compulsory subject from elementary levels to high school. The languages of the deaf community are American Sign Language and its local variant, Puerto Rican Sign Language.
The Spanish of Puerto Rico has evolved many idiosyncrasies in vocabulary and syntax that differentiate it from the Spanish spoken elsewhere. As a product of Puerto Rican history, the island possesses a unique Spanish dialect. Puerto Rican Spanish utilizes many Taíno words, as well as English words. The largest influence on the Spanish spoken in Puerto Rico is that of the Canary Islands. Taíno loanwords are typically used in the context of vegetation, natural phenomena or primitive musical instruments. Similarly, words attributed primarily to West African languages were adopted in the contexts of foods, music or dances, particularly in coastal towns with concentrations of descendants of Sub-Saharan Africans. [Chart: Religious Affiliation (2014)] The Roman Catholic Church was brought by Spanish colonists and gradually became the dominant religion in Puerto Rico. The first dioceses in the Americas, including that of Puerto Rico, were authorized by Pope Julius II in 1511. Pope John Paul II visited Puerto Rico in October 1984. All municipalities in Puerto Rico have at least one Catholic church, most of which are located at the town center, or "plaza". African slaves brought and maintained various ethnic African religious practices associated with different peoples, in particular the Yoruba beliefs of Santería and/or Ifá, and the Kongo-derived Palo Mayombe. Some aspects were absorbed into syncretic Christianity. Protestantism, which was suppressed under the Spanish Catholic regime, has slightly reemerged under United States rule, making contemporary Puerto Rico more interconfessional than in previous centuries, although Catholicism continues to be the dominant religion. The first Protestant church, Iglesia de la Santísima Trinidad, was established in Ponce by the Anglican Diocese of Antigua in 1872. It was the first non-Roman Catholic church in the entire Spanish Empire in the Americas. Pollster Pablo Ramos stated in 1998 that the population was 38% Roman Catholic, 28% Pentecostal, and 18% members of independent churches, which would give a Protestant percentage of 46% if the last two groups are combined; Protestants collectively added up to almost two million people. Another researcher gave a more conservative assessment of the proportion of Protestants: Puerto Rico, by virtue of its long political association with the United States, is the most Protestant of Latin American countries, with a Protestant population of approximately 33 to 38 percent, the majority of whom are Pentecostal. David Stoll calculates that if we extrapolate the growth rates of evangelical churches from 1960-1985 for another twenty-five years, Puerto Rico will become 75 percent evangelical. (Ana Adams: "Brincando el Charco..." in Power, Politics and Pentecostals in Latin America, Edward Cleary, ed., 1997, p. 164). The data provided for 2014 by the Pew Research Center are summarized in the accompanying chart. An Associated Press article in March 2014 stated that "more than 70 percent" of Puerto Ricans "identify themselves as Catholic", but provided no source for this information. The CIA World Factbook reports that 85% of the population of Puerto Rico identifies as Roman Catholic, while 15% identify as Protestant and other; neither a date nor a source for that information is provided, and it may not be recent.
A 2013 Pew Research survey found that only about 45% of Puerto Rican adults identified themselves as Catholic, 29% as Protestant and 20% as unaffiliated with a religion. The people surveyed by Pew were Puerto Ricans living in the 50 states and D.C., and may not be representative of those living in the Commonwealth. By 2014, a Pew Research report, subtitled Widespread Change in a Historically Catholic Region, indicated that only 56% of Puerto Ricans were Catholic and that 33% were Protestant; this survey was completed between October 2013 and February 2014. An Eastern Orthodox parish, the Dormition of the Most Holy Theotokos/St. Spyridon's Church in Trujillo Alto, serves the small Orthodox community; this affiliation accounted for under 1% of the population in 2010, according to the Pew Research report. In 1940, Juanita García Peraza founded the Mita Congregation, the first religion of Puerto Rican origin. Taíno religious practices have been rediscovered and reinvented to a degree by a handful of advocates. Similarly, some aspects of African religious traditions have been kept by some adherents. In 1952, a handful of American Jews established the island's first synagogue; Judaism accounted for under 1% of the population in 2010, according to the Pew Research report. The synagogue, called Sha'are Zedeck, hired its first rabbi in 1954. Puerto Rico has the largest Jewish community in the Caribbean, numbering some 3,000 people (date not stated), and is the only Caribbean island in which the Conservative, Reform and Orthodox Jewish movements are all represented. In 2007, there were about 5,000 Muslims in Puerto Rico, representing about 0.13% of the population. Eight mosques are located throughout the island, with most Muslims living in Río Piedras and Caguas; most are of Palestinian and Jordanian descent. In 2015, the 25,832 Jehovah's Witnesses represented about 0.70% of the population, with 324 congregations. The Padmasambhava Buddhist Center, whose followers practice Tibetan Buddhism, has a branch in Puerto Rico. [Images: the Roman Catholic Cathedral of San Juan Bautista; the Anglican Iglesia Santísima Trinidad in Ponce; the Islamic Center at Ponce; the interior of Sha'are Zedeck in San Juan.] Puerto Rico has 8 senatorial districts, 40 representative districts and 78 municipalities. It has a republican form of government with separation of powers, subject to the jurisdiction and sovereignty of the United States. Its current powers are all delegated by the United States Congress, and it lacks full protection under the United States Constitution. Puerto Rico's head of state is the President of the United States. The government of Puerto Rico, based on the formal republican system, is composed of three branches: the executive, legislative, and judicial. The executive branch is headed by the governor, currently Ricardo Rosselló. The legislative branch consists of a bicameral legislature called the Legislative Assembly, made up of a Senate as its upper chamber and a House of Representatives as its lower chamber. The Senate is headed by the President of the Senate, currently Thomas Rivera Schatz, while the House of Representatives is headed by the Speaker of the House, currently Johnny Méndez. The governor and legislators are elected by popular vote every four years, with the last election held in November 2016. The judicial branch is headed by the Chief Justice of the Supreme Court of Puerto Rico, currently Maite Oronoz Rodríguez.
Members of the judicial branch are appointed by the governor with the advice and consent of the Senate. Puerto Rico is represented in the United States Congress by a nonvoting delegate, the Resident Commissioner, currently Jenniffer González. Current congressional rules have removed the Commissioner's power to vote in the Committee of the Whole, but the Commissioner can vote in committee. Puerto Rican elections are governed by the Federal Election Commission and the State Elections Commission of Puerto Rico. While residing in Puerto Rico, Puerto Ricans cannot vote in U.S. presidential elections, but they can vote in primaries; Puerto Ricans who become residents of a U.S. state can vote in presidential elections. Puerto Rico hosts consulates from 41 countries, mainly from the Americas and Europe, with most located in San Juan. As an unincorporated territory of the United States, Puerto Rico does not have any first-order administrative divisions as defined by the U.S. government, but it has 78 municipalities at the second level. Mona Island is not a municipality but part of the municipality of Mayagüez. Municipalities are subdivided into wards, or barrios, and those into sectors. Each municipality has a mayor and a municipal legislature elected for a four-year term. The municipality of San Juan (previously called a "town") was founded first, in 1521, followed by San Germán in 1570, Coamo in 1579, Arecibo in 1614, Aguada in 1692 and Ponce in 1692. An increase in settlement saw the founding of 30 municipalities in the 18th century and 34 in the 19th. Six were founded in the 20th century; the last was Florida, in 1971. Since 1952, Puerto Rico has had three main political parties: the Popular Democratic Party (PPD in Spanish), the New Progressive Party (PNP in Spanish) and the Puerto Rican Independence Party (PIP). The three parties stand for different positions on political status: the PPD seeks to maintain the island's status with the U.S. as a commonwealth, the PNP seeks to make Puerto Rico a state of the United States, and the PIP seeks a complete separation from the United States by making Puerto Rico a sovereign nation. In terms of party strength, the PPD and PNP usually hold about 47% of the vote each, while the PIP holds only about 5%. After 2007, other parties emerged on the island. The first, the Puerto Ricans for Puerto Rico Party (PPR in Spanish), was registered that same year. The party claims that it seeks to address the islands' problems from a status-neutral platform, but it ceased to be a registered party when it failed to obtain the required number of votes in the 2008 general election. Four years later, the 2012 election saw the emergence of the Movimiento Unión Soberanista (MUS; English: Sovereign Union Movement) and the Partido del Pueblo Trabajador (PPT; English: Working People's Party), but neither obtained more than 1% of the vote. Other non-registered parties include the Puerto Rican Nationalist Party, the Socialist Workers Movement, and the Hostosian National Independence Movement. The insular legal system is a blend of the civil law and common law systems. Puerto Rico is the only current U.S. possession whose legal system operates primarily in a language other than American English: namely, Spanish. Because the U.S. federal government operates primarily in English, all Puerto Rican attorneys must be bilingual in order to litigate in English in U.S.
federal courts and to litigate federal preemption issues in Puerto Rican courts. Title 48 of the United States Code outlines the application of the United States Code to United States territories and insular areas such as Puerto Rico. After the U.S. government assumed control of Puerto Rico in 1901, it initiated legal reforms resulting in the adoption of codes of criminal law, criminal procedure, and civil procedure modeled after those then in effect in California. Although Puerto Rico has since followed the federal example of transferring criminal and civil procedure from statutory law to rules promulgated by the judiciary, several portions of its criminal law still reflect the influence of the California Penal Code. The judicial branch is headed by the Chief Justice of the Puerto Rico Supreme Court, which is the only appellate court required by the Constitution. All other courts are created by the Legislative Assembly of Puerto Rico. There is also a Federal District Court for Puerto Rico. Unlike in a state, someone accused of a criminal act at the federal level may not be accused of the same act in a Commonwealth court, since Puerto Rico, as a territory, lacks sovereignty separate from Congress in the way a state does; such a parallel accusation would constitute double jeopardy. The nature of Puerto Rico's political relationship with the U.S. is the subject of ongoing debate in Puerto Rico, the United States Congress, and the United Nations. Specifically, the basic question is whether Puerto Rico should remain a U.S. territory, become a U.S. state, or become an independent country. Constitutionally, Puerto Rico is subject to the plenary powers of the United States Congress under the territorial clause of Article IV of the U.S. Constitution. Laws enacted at the federal level in the United States apply to Puerto Rico as well, regardless of its political status, yet its residents do not have voting representation in the U.S. Congress. Like the states of the United States, Puerto Rico lacks "the full sovereignty of an independent nation", for example, the power to manage its "external relations with other nations", which is held by the U.S. federal government. The Supreme Court of the United States has indicated that once the U.S. Constitution has been extended to an area (by Congress or the courts), its coverage is irrevocable: to hold that the political branches may switch the Constitution on or off at will would lead to a regime in which they, not the Court, say "what the law is". Puerto Ricans "were collectively made U.S. citizens" in 1917 as a result of the Jones-Shafroth Act. U.S. citizens residing in Puerto Rico cannot vote for the U.S. president, though both major parties, Republican and Democratic, run primary elections in Puerto Rico to send delegates to vote on a presidential candidate. Since Puerto Rico is an unincorporated territory (see above) and not a U.S. state, the United States Constitution does not fully enfranchise U.S. citizens residing in Puerto Rico. Only fundamental rights under the federal constitution, as determined by adjudication, are applied to Puerto Ricans; various U.S. Supreme Court decisions have held which rights apply in Puerto Rico and which do not. Puerto Ricans have a long history of service in the U.S. Armed Forces and, since 1917, they have been included in the U.S. compulsory draft whenever it has been in effect. Though the Commonwealth government has its own tax laws, Puerto Ricans are also required to pay many kinds of U.S.
federal taxes; Puerto Rico-sourced income, however, is exempt from the federal personal income tax except under certain circumstances. In 2009, Puerto Rico paid $3.742 billion into the U.S. Treasury. Residents of Puerto Rico pay into Social Security and are thus eligible for Social Security benefits upon retirement, but they are excluded from Supplemental Security Income (SSI), and the island actually receives a smaller fraction of the Medicaid funding it would receive if it were a U.S. state. Also, Medicare providers receive less than the full, state-like reimbursements for services rendered to beneficiaries in Puerto Rico, even though the latter paid fully into the system. While a state may try an individual for the same crime for which he or she was tried in federal court, this is not the case in Puerto Rico. As a U.S. territory, Puerto Rico derives its authority to enact a criminal code from Congress and not from local sovereignty, as the states do; thus, such a parallel accusation would constitute double jeopardy and is constitutionally impermissible. In 1992, President George H.W. Bush issued a memorandum to heads of executive departments and agencies establishing the current administrative relationship between the federal government and the Commonwealth of Puerto Rico. This memorandum directs all federal departments, agencies, and officials to treat Puerto Rico administratively as if it were a state, insofar as doing so would not disrupt federal programs or operations. Many federal executive branch agencies have a significant presence in Puerto Rico, just as in any state, including the Federal Bureau of Investigation, the Federal Emergency Management Agency, the Transportation Security Administration, and the Social Security Administration, among others. While Puerto Rico has its own Commonwealth judicial system similar to that of a U.S. state, there is also a U.S. federal district court in Puerto Rico, and Puerto Ricans have served as judges in that court and in other federal courts on the U.S. mainland, regardless of their residency status at the time of their appointment. Sonia Sotomayor, a New Yorker of Puerto Rican descent, serves as an Associate Justice of the Supreme Court of the United States. Puerto Ricans have also been frequently appointed to high-level federal positions, including serving as United States ambassadors to other nations. Puerto Rico is subject to the Commerce and Territorial Clauses of the Constitution of the United States and, therefore, is restricted in how it can engage with other nations, sharing the opportunities and limitations of state governments despite not being a state. Regardless, as is the case with state governments, it has established several trade agreements with other nations, particularly with Hispanic American countries such as Colombia and Panama. It has also established trade promotion offices in many foreign countries, all Spanish-speaking, and within the United States itself; these now include Spain, the Dominican Republic, Panama, Colombia, Washington, D.C., New York City and Florida, and have in the past included Chile, Costa Rica, and Mexico. Such agreements require permission from the U.S. Department of State; most are simply allowed by existing laws or trade treaties between the United States and other nations, which supersede trade agreements pursued by Puerto Rico and different U.S. states.
At the local level, Puerto Rico established by law that the international relations in which states and territories are allowed to engage must be handled by the Department of State of Puerto Rico, an executive department, headed by the Secretary of State of Puerto Rico, who also serves as the territory 's lieutenant governor. It is also charged with liaising with general consuls and honorary consuls based in Puerto Rico. The Puerto Rico Federal Affairs Administration, along with the Office of the Resident Commissioner, manage all its intergovernmental affairs before entities of or in the United States (including the federal government of the United States, local and state governments of the United States, and public or private entities in the United States). Both entities frequently assist the Department of State of Puerto Rico in engaging with Washington, D.C. - based ambassadors and federal agencies that handle Puerto Rico 's foreign affairs, such as the U.S. Department of State, the Agency for International Development, and others. The current Secretary of State is Víctor Suárez Meléndez, from the Popular Democratic Party and a member of the Democratic Party of the United States, while the current Director of the Puerto Rico Federal Affairs Administration is Juan Eugenio Hernández Mayoral, also from the Popular Democratic Party and a member of the Democratic Party. The Resident Commissioner of Puerto Rico, the delegate elected by Puerto Ricans to represent them before the federal government, including the U.S. Congress, sits in the United States House of Representatives, serves and votes on congressional committees, and functions in every respect as a legislator, except that he is denied a vote on the final disposition of legislation on the House floor. He also engages in foreign affairs to the same extent as other members of Congress. The current Resident Commissioner is Pedro Pierluisi, from the New Progressive Party and a member of the Democratic Party of the United States. Many Puerto Ricans have served as United States ambassadors to different nations and international organizations, such as the Organization of American States, mostly but not exclusively in Latin America. For example, Maricarmen Aponte, a Puerto Rican and now an Acting Assistant Secretary of State, previously served as U.S. ambassador to El Salvador. As a territory of the United States, Puerto Rico 's defense is provided by the United States under the Treaty of Paris, with the President of the United States as commander - in - chief. Puerto Rico has its own Puerto Rico National Guard, and its own state defense force, the Puerto Rico State Guard, which by local law is under the authority of the Puerto Rico National Guard. The commander - in - chief of both local forces is the governor of Puerto Rico, who delegates his authority to the Puerto Rico Adjutant General, currently Colonel Marta Carcana. The Adjutant General, in turn, delegates the authority over the State Guard to another officer but retains the authority over the Puerto Rico National Guard as a whole. U.S. military installations in Puerto Rico were part of the U.S. Atlantic Command (LANTCOM; after 1993, USACOM), which had authority over all U.S. military operations that took place throughout the Atlantic. Puerto Rico had been seen as crucial in supporting LANTCOM 's mission until 1999, when U.S. Atlantic Command was renamed and given a new mission as United States Joint Forces Command.
Puerto Rico is currently under the responsibility of United States Northern Command. Both the Naval Forces Caribbean (NFC) and the Fleet Air Caribbean (FAIR) were formerly based at the Roosevelt Roads Naval Station. The NFC had authority over all U.S. Naval activity in the waters of the Caribbean while FAIR had authority over all U.S. military flights and air operations over the Caribbean. With the closing of the Roosevelt Roads and Vieques Island training facilities, the U.S. Navy has essentially withdrawn from Puerto Rico, apart from ships in transit, and the only significant military presence on the island is the U.S. Army at Ft. Buchanan, the Puerto Rico Army and Air National Guards, and the U.S. Coast Guard. Protests over the noise of bombing practice forced the closure of the naval base. This resulted in a loss of 6,000 jobs and an annual decrease in local income of $300 million. A branch of the U.S. Army National Guard is stationed in Puerto Rico -- known as the Puerto Rico Army National Guard -- which performs missions equivalent to those of the Army National Guards of the different states of the United States, including ground defense, disaster relief, and control of civil unrest. The local National Guard also incorporates a branch of the U.S. Air National Guard -- known as the Puerto Rico Air National Guard -- which performs missions equivalent to those of the Air National Guards of each one of the U.S. states. At different times in the 20th century, the U.S. had about 25 military or naval installations in Puerto Rico, ranging from very small posts to major installations. The largest of these installations were the former Roosevelt Roads Naval Station in Ceiba, the Atlantic Fleet Weapons Training Facility (AFWTF) on Vieques, the National Guard training facility at Camp Santiago in Salinas, Fort Allen in Juana Diaz, the Army 's Fort Buchanan in San Juan, the former U.S. Air Force Ramey Air Force Base in Aguadilla, and the Puerto Rico Air National Guard at Muñiz Air Force base in San Juan. The former U.S. Navy facilities at Roosevelt Roads, Vieques, and Sabana Seca have been deactivated and partially turned over to the local government. Other than U.S. Coast Guard and Puerto Rico National Guard facilities, there are only two remaining military installations in Puerto Rico: the U.S. Army 's small Ft. Buchanan (supporting local veterans and reserve units) and the PRANG (Puerto Rico Air National Guard) Muñiz Air Base (the C - 130 Fleet). In recent years, the U.S. Congress has considered their deactivation, but this has been opposed by diverse public and private entities in Puerto Rico -- such as retired military who rely on Ft. Buchanan for the services available there. Puerto Ricans have participated in many of the military conflicts in which the United States has been involved. For example, they participated in the American Revolution, when volunteers from Puerto Rico, Cuba, and Mexico fought the British in 1779 under the command of General Bernardo de Gálvez (1746 -- 1786), and have continued to participate up to the present - day conflicts in Iraq and Afghanistan. A significant number of Puerto Ricans serve in or work for the U.S. Armed Services, largely as National Guard members and civilian employees. The size of the overall military - related community in Puerto Rico, including retired personnel, is estimated to be 100,000 individuals. Fort Buchanan has about 4,000 military and civilian personnel.
In addition, approximately 17,000 people are members of the Puerto Rico Army and Air National Guards, or the U.S. Reserve forces. Puerto Rican soldiers have served in every U.S. military conflict from World War I to the current military engagement known by the United States and its allies as the War against Terrorism. The 65th Infantry Regiment, nicknamed "The Borinqueneers '' from the original Taíno name of the island (Borinquen), is a Puerto Rican regiment of the United States Army. The regiment 's motto is Honor et Fidelitas, Latin for Honor and Fidelity. The 65th Infantry Regiment participated in World War I, World War II, the Korean War, and the War on Terror and in 2014 was awarded the Congressional Gold Medal, presented by President Barack Obama, for its heroism during the Korean War. There are no counties, as there are in 48 of the 50 United States; instead, there are 78 municipalities. Municipalities are subdivided into barrios, and those into sectors. Each municipality has a mayor and a municipal legislature elected to four - year terms. The economy of Puerto Rico is classified as a high - income economy by the World Bank and as the most competitive economy in Latin America by the World Economic Forum, but Puerto Rico currently has a public debt of $72.204 billion (equivalent to 103 % of GNP) and a government deficit of $2.5 billion. According to the World Bank, Puerto Rico 's gross national income per capita in 2013 was $23,830 (PPP, international dollars), ranking 63rd among all sovereign and dependent entities in the world. Its economy is mainly driven by manufacturing (primarily pharmaceuticals, textiles, petrochemicals and electronics) followed by the service industry (primarily finance, insurance, real estate and tourism). In recent years, the territory has also become a popular destination for MICE (meetings, incentives, conferencing, exhibitions), with a modern convention centre district overlooking the Port of San Juan. The geography of Puerto Rico and its political status are both determining factors on its economic prosperity, primarily due to its relatively small size as an island; its lack of natural resources used to produce raw materials, and, consequently, its dependence on imports; as well as its territorial status with the United States, which controls its foreign policy while exerting trading restrictions, particularly in its shipping industry. Puerto Rico experienced a recession from 2006 to 2011, interrupted by four quarters of economic growth, and entered into recession again in 2013, following growing fiscal imbalance and the expiration of the IRS Section 936 corporate incentives that the U.S. Internal Revenue Code had applied to Puerto Rico. This IRS section was critical to the economy, as it established tax exemptions for U.S. corporations that settled in Puerto Rico, and allowed their insular subsidiaries to send their earnings to the parent corporation at any time, without paying federal tax on corporate income. Puerto Rico has nonetheless been able to maintain relatively low inflation in the past decade while maintaining a purchasing power parity per capita higher than that of 80 % of the rest of the world.
According to academic analyses, most of Puerto Rico 's economic woes stem from federal regulations that expired, have been repealed, or no longer apply to Puerto Rico; its inability to become self - sufficient and self - sustainable throughout history; its highly politicized public policy, which tends to change whenever a political party gains power; as well as its highly inefficient local government, which has accrued a public debt equal to 68 % of its gross domestic product over time. In comparison to the different states of the United States, Puerto Rico is poorer than Mississippi, the poorest state of the U.S., with 41 % of its population below the poverty line. When compared to Latin America, Puerto Rico has the highest GDP per capita in the region. Its main trading partners are the United States itself, Ireland, and Japan, with most products coming from East Asia, mainly from China, Hong Kong, and Taiwan. At a global scale, Puerto Rico 's dependency on oil for transportation and electricity generation, as well as its dependency on food imports and raw materials, makes Puerto Rico volatile and highly reactive to changes in the world economy and climate. Puerto Rico 's agricultural sector represents less than 1 % of GNP. In early 2017, the Puerto Rican government - debt crisis posed serious problems for the government, which was saddled with outstanding bond debt that had climbed to $70 billion at a time of a 45 % poverty rate and 12.4 % unemployment, more than twice the mainland U.S. average. The debt had been increasing during a decade - long recession. The Commonwealth had been defaulting on many debts, including bonds, since 2015. With debt payments due, the Governor was facing the risk of a government shutdown and failure to fund the managed health care system. "Without action before April, Puerto Rico 's ability to execute contracts for Fiscal Year 2018 with its managed care organizations will be threatened, thereby putting at risk beginning July 1, 2017 the health care of up to 900,000 poor U.S. citizens living in Puerto Rico '', according to a letter sent to Congress by the Secretary of the Treasury and the Secretary of Health and Human Services. They also said that "Congress must enact measures recommended by both Republicans and Democrats that fix Puerto Rico 's inequitable health care financing structure and promote sustained economic growth. '' Initially, the oversight board created under PROMESA called for Puerto Rico 's governor Ricardo Rosselló to deliver a fiscal turnaround plan by January 28. Just before that deadline, the control board gave the Commonwealth government until February 28 to present a fiscal plan (including negotiations with creditors for restructuring debt) to solve the problems. A moratorium on creditor lawsuits was extended to May 31. It is essential for Puerto Rico to reach restructuring deals to avoid a bankruptcy - like process under PROMESA. An internal survey conducted by the Puerto Rican Economists Association revealed that the majority of Puerto Rican economists reject the policy recommendations of the Board and the Rosselló government, with more than 80 % of economists arguing in favor of auditing the debt. In early August 2017, the island 's financial oversight board (created by PROMESA) planned to institute two days off without pay per month for government employees, down from the original plan of four days per month; the latter had been expected to achieve $218 million in savings.
Governor Rosselló rejected this plan as unjustified and unnecessary. Pension reforms were also discussed, including a proposal for a 10 % reduction in benefits to begin addressing the $50 billion in unfunded pension liabilities. Puerto Rico has an operating budget of about U.S. $9.8 billion with expenses at about $10.4 billion, creating a structural deficit of $775 million (about 7.9 % of the budget). Budgets with a structural deficit have been approved for 18 consecutive years, starting in 2000. Throughout those years, including the present, all budgets have contemplated issuing bonds to cover these projected deficits rather than making structural adjustments. This practice increased Puerto Rico 's cumulative debt, as the government had already been issuing bonds to balance its actual budget for four decades, since 1973. Projected deficits added substantial burdens to an already indebted government, which had accrued a public debt of $71 B, or about 70 % of Puerto Rico 's gross domestic product. This sparked an ongoing government - debt crisis after Puerto Rico 's general obligation bonds were downgraded to speculative non-investment grade ("junk status '') by three credit rating agencies. In terms of financial control, almost 9.6 % -- or about $1.5 billion -- of Puerto Rico 's central government budget expenses for FY2014 was expected to be spent on debt service. Harsher budget cuts are expected as Puerto Rico must now repay larger chunks of debts in the following years. For practical reasons the budget is divided into two aspects: a "general budget '' which comprises the assignments funded exclusively by the Department of Treasury of Puerto Rico, and the "consolidated budget '' which comprises the assignments funded by the general budget, by Puerto Rico 's government - owned corporations, by revenue expected from loans, by the sale of government bonds, by subsidies extended by the federal government of the United States, and by other funds. The two budgets contrast drastically, with the consolidated budget usually about three times the size of the general budget; they currently stand at $29 B and $9.0 B, respectively. Almost one out of every four dollars in the consolidated budget comes from U.S. federal subsidies, while government - owned corporations compose more than 31 % of the consolidated budget. The critical portion comes from the sale of bonds, which comprises 7 % of the consolidated budget, a ratio that has increased annually as the government has proved unable to prepare a balanced budget or to generate enough income to cover all its expenses. In particular, the government - owned corporations add a heavy burden to the overall budget and public debt, as none is self - sufficient. For example, in FY2011 the government - owned corporations reported aggregated losses of more than $1.3 B, with the Puerto Rico Highways and Transportation Authority (PRHTA) reporting losses of $409 M, the Puerto Rico Electric Power Authority (PREPA; the government monopoly that controls all electricity on the island) reporting losses of $272 M, and the Puerto Rico Aqueducts and Sewers Authority (PRASA; the government monopoly that controls all water utilities on the island) reporting losses of $112 M. Losses by government - owned corporations have been defrayed through the issuance of bonds, which now make up more than 40 % of Puerto Rico 's entire public debt. Overall, from FY2000 to FY2010, Puerto Rico 's debt grew at a compound annual growth rate (CAGR) of 9 % while GDP remained stagnant. Issuing bonds has not provided a long - term solution: in early July 2017, for example, the PREPA power authority was effectively bankrupt after defaulting on a plan to restructure $9 billion in bond debt, and the agency planned to seek court protection.
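Two of the figures above can be reproduced with short back - of - the - envelope calculations; the following is only a worked illustration using the numbers already cited in the text, with rounding of our own. The structural deficit as a share of the operating budget is

\[ \frac{\$0.775\ \text{billion}}{\$9.8\ \text{billion}} \approx 0.079 \approx 7.9\,\%, \]

matching the "about 7.9 % of the budget '' figure, and a debt stock compounding at the cited 9 % CAGR over the ten fiscal years FY2000 -- FY2010 grows by a factor of

\[ (1 + 0.09)^{10} \approx 2.37. \]

That is, the debt ended the decade at roughly 2.4 times its starting level while GDP stayed flat, which is why the debt burden grew so quickly relative to the size of the economy.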
In terms of protocol, the governor, together with the Puerto Rico Office of Management and Budget (OGP in Spanish), formulates the budget he believes is required to operate all government branches for the ensuing fiscal year. He then submits this formulation as a budget request to the Puerto Rican legislature before July 1, the date established by law as the beginning of Puerto Rico 's fiscal year. While the constitution establishes that the request must be submitted "at the beginning of each regular session '', the request is typically submitted during the first week of May, as the regular sessions of the legislature begin in January and it would be impractical to submit a request so far in advance. Once submitted, the budget is then approved by the legislature, typically with amendments, through a joint resolution and is referred back to the governor for his approval. The governor then either approves it or vetoes it. If it is vetoed, the legislature can either refer it back with amendments for the governor 's approval, or approve it without the governor 's consent by a two - thirds majority of each chamber. Once the budget is approved, the Department of Treasury disburses funds to the Office of Management and Budget, which in turn disburses the funds to the respective agencies, while the Puerto Rico Government Development Bank (the government 's intergovernmental bank) manages all related banking affairs, including those related to the government - owned corporations. The cost of living in Puerto Rico is high and has increased over the past decade. San Juan 's in particular is higher than that of Atlanta, Dallas, and Seattle, but lower than that of Boston, Chicago, and New York City. One factor is housing prices, which are comparable to those of Miami and Los Angeles, although property taxes are considerably lower than in most places in the United States. Statistics used for cost of living sometimes do not take into account certain costs, such as the high cost of electricity, which has hovered in the 24 ¢ to 30 ¢ range per kilowatt - hour (two to three times the national average); increased travel costs for longer flights; additional shipping fees; and the loss of promotional participation opportunities for customers "outside the continental United States ''. While some online stores do offer free shipping on orders to Puerto Rico, many merchants exclude Hawaii, Alaska, Puerto Rico and other United States territories. The household median income is stated as $19,350 and the mean income as $30,463 in the U.S. Census Bureau 's 2015 update. The report also indicates that 45.5 % of individuals are below the poverty level. The median home value in Puerto Rico ranges from U.S. $100,000 to U.S. $214,000, while the national median home value sits at $119,600. One of the most cited contributors to the high cost of living in Puerto Rico is the Merchant Marine Act of 1920, also known as the Jones Act, which prevents foreign - flagged ships from carrying cargo between two American ports, a practice known as cabotage.
Because of the Jones Act, foreign ships inbound with goods from Central and South America, Western Europe, and Africa can not stop in Puerto Rico, offload Puerto Rico - bound goods, load mainland - bound Puerto Rico - manufactured goods, and continue to U.S. ports. Instead, they must proceed directly to U.S. ports, where distributors break bulk and send Puerto Rico - bound manufactured goods to Puerto Rico across the ocean by U.S. - flagged ships. The local government of Puerto Rico has asked the U.S. Congress several times to exclude Puerto Rico from the Jones Act restrictions, without success. The most recent measure has been taken by the 17th Legislative Assembly of Puerto Rico through R. Conc. del S. 21. These measures have always received support from all the major local political parties. In 2013 the Government Accountability Office published a report which concluded that "repealing or amending the Jones Act cabotage law might cut Puerto Rico shipping costs '' and that "shippers believed that opening the trade to non-U.S. - flag competition could lower costs ''. However, the same GAO report also found that "(shippers) doing business in Puerto Rico that GAO contacted reported that the freight rates are often -- although not always -- lower for foreign carriers going to and from Puerto Rico and foreign locations than the rates shippers pay to ship similar cargo to and from the United States, despite longer distances. Data were not available to allow us to validate the examples given or verify the extent to which this difference occurred. '' Ultimately, the report concluded that "(the) effects of modifying the application of the Jones Act for Puerto Rico are highly uncertain '' for both Puerto Rico and the United States, particularly for the U.S. shipping industry and the military preparedness of the United States. The first school in Puerto Rico was the Escuela de Gramática (Grammar School). It was established by Bishop Alonso Manso in 1513, in the area where the Cathedral of San Juan was to be constructed. The school was free of charge and the courses taught were Latin language, literature, history, science, art, philosophy and theology. Education in Puerto Rico is divided into three levels -- Primary (elementary school, grades 1 -- 6), Secondary (intermediate and high school, grades 7 -- 12), and Higher Level (undergraduate and graduate studies). As of 2002, the literacy rate of the Puerto Rican population was 94.1 %; by gender, it was 93.9 % for males and 94.4 % for females. According to the 2000 Census, 60.0 % of the population attained a high school degree or higher level of education, and 18.3 % have a bachelor 's degree or higher. School attendance is compulsory between the ages of 5 and 18. As of 2010, there were 1,539 public schools and 806 private schools. The largest and oldest university system is the public University of Puerto Rico (UPR), with 11 campuses. The largest private university system on the island is the Sistema Universitario Ana G. Méndez, which operates the Universidad del Turabo, the Metropolitan University, and the Universidad del Este. Other private universities include the multi-campus Inter American University, the Pontifical Catholic University, the Universidad Politécnica de Puerto Rico, and the Universidad del Sagrado Corazón. Puerto Rico has four schools of medicine and three ABA - approved law schools.
As of 2015, medical care in Puerto Rico had been heavily impacted by emigration of doctors to the mainland and by underfunding of the Medicare and Medicaid programs, which serve 60 % of the island 's population. Affordable medical insurance under the Affordable Care Act is not available in Puerto Rico: because Puerto Ricans generally pay no federal income tax, no subsidies are available. The city of San Juan has a system of triage, hospital, and preventive care health services. The municipal government sponsors regular health fairs in different areas of the city focusing on health care for the elderly and the disabled. In 2017, there were 69 hospitals in Puerto Rico. There are twenty hospitals in San Juan, half of which are operated by the government. The largest hospital is the Centro Médico de Río Piedras (the Río Piedras Medical Center). Founded in 1956, it is operated by the Medical Services Administration of the Department of Health of Puerto Rico, and is actually a network of eight hospitals. The city of San Juan operates nine other hospitals; of these, eight are Diagnostic and Treatment Centers located in communities throughout San Juan. There are also ten private hospitals in San Juan. The city of Ponce is served by several clinics and hospitals. There are four comprehensive care hospitals: Hospital Dr. Pila, Hospital San Cristobal, Hospital San Lucas, and Hospital de Damas. In addition, Hospital Oncológico Andrés Grillasca specializes in the treatment of cancer, and Hospital Siquiátrico specializes in mental disorders. There is also a U.S. Department of Veterans Affairs Outpatient Clinic that provides health services to U.S. veterans. The U.S. Veterans Administration will build a new hospital in the city to satisfy regional needs. Hospital de Damas is listed by U.S. News & World Report as one of the best hospitals under the U.S. flag. Ponce has the highest concentration of medical infrastructure per inhabitant of any municipality in Puerto Rico. On the island of Culebra, there is a small hospital called Hospital de Culebra. It also offers pharmacy services to residents and visitors. For emergencies, patients are transported by plane to Fajardo on the main island. The town of Caguas has three hospitals: Hospital Hima San Pablo, Menonita Caguas Regional Hospital, and the San Juan Bautista Medical Center. The town of Cayey is served by the Hospital Menonita de Cayey and the Hospital Municipal de Cayey. Reforma de Salud de Puerto Rico (Puerto Rico Health Reform) -- locally referred to as La Reforma (The Reform) -- is a government - run program which provides medical and health care services to the indigent and impoverished by means of contracts with private health insurance companies, rather than through government - owned hospitals and emergency centers. The Reform is administered by the Puerto Rico Health Insurance Administration. The overall rate of crime in Puerto Rico is low; the territory nevertheless has a high firearm homicide rate. The homicide rate of 19.2 per 100,000 inhabitants was significantly higher than that of any U.S. state in 2014. Most homicide victims are gang members and drug traffickers, with about 80 % of homicides in Puerto Rico being drug - related. Modern Puerto Rican culture is a unique mix of cultural antecedents, including European (predominantly Spanish, Italian, French, German and Irish), African, and, more recently, North American and South American influences.
A large number of Cubans and Dominicans have relocated to the island in the past few decades. From the Spanish, Puerto Rico received the Spanish language, the Catholic religion and the vast majority of its cultural and moral values and traditions. The United States added English - language influence, the university system and the adoption of some holidays and practices. On March 12, 1903, the University of Puerto Rico was officially founded, branching out from the "Escuela Normal Industrial '', a smaller organization that was founded in Fajardo three years before. Much of Puerto Rican culture centers on the influence of music and has been shaped by other cultures combining with local and traditional rhythms. Early in the history of Puerto Rican music, the influences of Spanish and African traditions were most noticeable. The cultural movements across the Caribbean and North America have played a vital role in the more recent musical influences which have reached Puerto Rico. The official symbols of Puerto Rico are the reinita mora or Puerto Rican spindalis (a type of bird), the flor de maga (a type of flower), and the ceiba or kapok (a type of tree). The unofficial animal and a symbol of Puerto Rican pride is the coquí, a small frog. Other popular symbols of Puerto Rico are the jíbaro (the "countryman ''), and the carite. The architecture of Puerto Rico demonstrates a broad variety of traditions, styles and national influences accumulated over four centuries of Spanish rule, and a century of American rule. Spanish colonial architecture, Ibero - Islamic, art deco, post-modern, and many other architectural forms are visible throughout the island. From town to town, there are also many regional distinctions. Old San Juan is one of the two barrios, in addition to Santurce, that made up the municipality of San Juan from 1864 to 1951, at which time the former independent municipality of Río Piedras was annexed. With its abundance of shops, historic places, museums, open air cafés, restaurants, gracious homes, tree - shaded plazas, and its old - world beauty and architectural distinctiveness, Old San Juan is a main destination for local and international tourism. The district is also characterized by numerous public plazas and churches, including San José Church and the Cathedral of San Juan Bautista, which contains the tomb of the Spanish explorer Juan Ponce de León. It also houses the oldest Catholic school for elementary education in Puerto Rico, the Colegio de Párvulos, built in 1865. The oldest parts of the district of Old San Juan remain partly enclosed by massive walls. Several defensive structures and notable forts, such as the emblematic Fort San Felipe del Morro, Fort San Cristóbal, and El Palacio de Santa Catalina, also known as La Fortaleza, acted as the primary defenses of the settlement, which was subjected to numerous attacks. La Fortaleza also continues to serve as the executive mansion for the Governor of Puerto Rico. Many of the historic fortifications are part of San Juan National Historic Site. During the 1940s, sections of Old San Juan fell into disrepair, and many renovation plans were suggested. There was even a strong push to develop Old San Juan as a "small Manhattan ''. Strict remodeling codes were implemented to prevent new construction from affecting the common colonial Spanish architectural themes of the old city.
When a project proposal suggested that the old Carmelite Convent in San Juan be demolished to erect a new hotel, the Institute of Puerto Rican Culture had the building declared a historic building, and then asked that it be converted into a hotel in a renewed facility; this became the Hotel El Convento in Old San Juan. The paradigm of reconstructing, renovating, and revitalizing the old city has been followed by other cities in the Americas, particularly Havana, Lima and Cartagena de Indias. Ponce Creole is an architectural style created in Ponce, Puerto Rico, in the late 19th and early 20th centuries. This style of Puerto Rican building is found predominantly in residential homes in Ponce that developed between 1895 and 1920. Ponce Creole architecture borrows heavily from the traditions of the French, the Spaniards, and the Caribbean to create houses that were especially built to withstand the hot and dry climate of the region, and to take advantage of the sun and sea breezes characteristic of southern Puerto Rico 's Caribbean Sea coast. It is a blend of wood and masonry, incorporating architectural elements of other styles, from Classical Revival and Spanish Revival to Victorian. Puerto Rican art reflects many influences, much of it from its ethnically diverse background. A form of folk art called santos evolved from the Catholic Church 's use of sculptures to convert indigenous Puerto Ricans to Christianity. Santos depict figures of saints and other religious icons and are made from native wood, clay, and stone. Shaped simply, they are often finished by being painted in vivid colors. Santos vary in size, with the smallest examples around eight inches tall and the largest about twenty inches tall. Traditionally, santos were seen as messengers between the earth and Heaven. As such, they occupied a special place on household altars, where people prayed to them, asked for help, or tried to summon their protection. Also popular, caretas or vejigantes are masks worn during carnivals. Similar masks signifying evil spirits were used in both Spain and Africa, though for different purposes. The Spanish used their masks to frighten lapsed Christians into returning to the church, while tribal Africans used them as protection from the evil spirits they represented. True to their historic origins, Puerto Rican caretas always bear at least several horns and fangs. While caretas are usually constructed of papier - mâché, coconut shells and fine metal screening are sometimes used as well. Red and black were the typical colors for caretas, but their palette has expanded to include a wide variety of bright hues and patterns. Puerto Rican literature evolved from the art of oral storytelling to its present - day status. Written works by the native islanders of Puerto Rico were prohibited and repressed by the Spanish colonial government. Only those who were commissioned by the Spanish Crown to document the chronological history of the island were allowed to write. Diego de Torres Vargas was allowed to circumvent this strict prohibition for three reasons: he was a priest, he came from a prosperous Spanish family, and his father was a Sergeant Major in the Spanish Army who died while defending Puerto Rico from an invasion by the Dutch armada. In 1647, Torres Vargas wrote Descripción de la Ciudad e Isla de Puerto Rico ("Description of the Island and City of Puerto Rico ''). This historical book was the first to make a detailed geographic description of the island.
The book described all the fruits and commercial establishments of the time, mostly centered in the towns of San Juan and Ponce. The book also listed and described every mine, church, and hospital in the island at the time. The book contained notices on the State and Capital, plus an extensive and erudite bibliography. Descripción de la Ciudad e Isla de Puerto Rico was the first successful attempt at writing a comprehensive history of Puerto Rico. Some of Puerto Rico 's earliest writers were influenced by the teachings of Rafael Cordero. Among these was Dr. Manuel A. Alonso, the first Puerto Rican writer of notable importance. In 1849 he published El Gíbaro, a collection of verses whose main theme was the poor Puerto Rican country farmer. Eugenio María de Hostos wrote La peregrinación de Bayoán in 1863, which used Bartolomé de las Casas as a springboard to reflect on Caribbean identity. After this first novel, Hostos abandoned fiction in favor of the essay, which he saw as offering greater possibilities for inspiring social change. In the late 19th century, with the arrival of the first printing press and the founding of the Royal Academy of Belles Letters, Puerto Rican literature began to flourish. The first writers to express their political views in regard to Spanish colonial rule of the island were journalists. After the United States invaded Puerto Rico during the Spanish -- American War and the island was ceded to the Americans as a condition of the Treaty of Paris of 1898, writers and poets began to express their opposition to the new colonial rule by writing about patriotic themes. Alejandro Tapia y Rivera, also known as the Father of Puerto Rican Literature, ushered in a new age of historiography with the publication of The Historical Library of Puerto Rico. Cayetano Coll y Toste was another Puerto Rican historian and writer. His work The Indo - Antillano Vocabulary is valuable in understanding the way the Taínos lived. In 1894, Dr. Manuel Zeno Gandía wrote La Charca, which tells of the harsh life in the remote, mountainous coffee regions of Puerto Rico. Dr. Antonio S. Pedreira described in his work Insularismo the cultural survival of the Puerto Rican identity after the American invasion. With the Puerto Rican diaspora of the 1940s, Puerto Rican literature was greatly influenced by a phenomenon known as the Nuyorican Movement. Puerto Rican literature continued to flourish and many Puerto Ricans have since distinguished themselves as authors, journalists, poets, novelists, playwrights, screenwriters, essayists and have also stood out in other literary fields. The influence of Puerto Rican literature has transcended the boundaries of the island to the United States and the rest of the world. Over the past fifty years, significant writers include Ed Vega, Luis Rafael Sánchez, Piri Thomas, Giannina Braschi, and Miguel Piñero. Esmeralda Santiago has written an autobiographical trilogy about growing up in modern Puerto Rico as well as a historical novel, Conquistadora, about life on a sugar plantation during the mid-19th century. The media in Puerto Rico includes local radio stations, television stations and newspapers, the majority of which are in Spanish. There are also three stations of the U.S. Armed Forces Radio and Television Service. Newspapers with daily distribution are El Nuevo Día, El Vocero, Índice, Metro, and Primera Hora. El Vocero, Índice, and Metro are distributed free of charge.
Newspapers distributed on a weekly or regional basis include Claridad, La Perla del Sur, La Opinion, Vision, and La Estrella del Norte, among others. Several television channels provide local content on the island. These include WIPR - TV, Telemundo, Univision Puerto Rico, WAPA - TV, and WKAQ - TV. The music of Puerto Rico has evolved as a heterogeneous and dynamic product of diverse cultural resources. The most conspicuous musical sources have been Spain and West Africa, although many aspects of Puerto Rican music reflect origins elsewhere in Europe and the Caribbean and, over the last century, from the U.S. Puerto Rican music culture today comprises a wide and rich variety of genres, ranging from indigenous genres like bomba, plena, aguinaldo, danza and salsa to recent hybrids like reggaeton. Puerto Rico has some national instruments, like the cuatro (Spanish for "four ''). The cuatro is a local instrument that was made by the jíbaro, or people from the mountains. Originally, the cuatro consisted of four steel strings, hence its name, but it currently consists of five doubled courses of steel strings. It is easily confused with a guitar, even by locals. When held upright, from right to left, the strings are G, D, A, E, B. In the realm of classical music, the island hosts two main orchestras, the Orquesta Sinfónica de Puerto Rico and the Orquesta Filarmónica de Puerto Rico. The Casals Festival takes place annually in San Juan, drawing in classical musicians from around the world. With respect to opera, the legendary Puerto Rican tenor Antonio Paoli was so celebrated that he performed private recitals for Pope Pius X and Czar Nicholas II of Russia. In 1907, Paoli became the first operatic artist in world history to record an entire opera, when he participated in a performance of Pagliacci by Ruggiero Leoncavallo in Milan, Italy. Over the past fifty years, Puerto Rican artists such as Jorge Emmanuelli, Yomo Toro, Ramito, Jose Feliciano, Bobby Capo, Rafael Cortijo, Ismael Rivera, Chayanne, Tito Puente, Eddie Palmieri, Ray Barreto, Dave Valentin, Omar Rodríguez - López, Hector Lavoe, Ricky Martin, Marc Anthony and Luis Fonsi have thrilled audiences around the world. Puerto Rican cuisine has its roots in the cooking traditions and practices of Europe (Spain), Africa and the native Taínos. In the latter part of the 19th century, the cuisine of Puerto Rico was greatly influenced by the United States in the ingredients used in its preparation. Puerto Rican cuisine has transcended the boundaries of the island, and can be found in several countries outside the archipelago. Basic ingredients include grains and legumes, herbs and spices, starchy tropical tubers, vegetables, meat and poultry, seafood and shellfish, and fruits. Main dishes include mofongo, arroz con gandules, pasteles, alcapurrias and pig roast (or lechón). Beverages include maví and piña colada. Desserts include flan, arroz con dulce (sweet rice pudding), piraguas, brazo gitanos, tembleque, polvorones, and dulce de leche. Locals call their cuisine cocina criolla. The traditional Puerto Rican cuisine was well established by the end of the 19th century. By 1848, the first restaurant, La Mallorquina, had opened in Old San Juan. El Cocinero Puertorriqueño, the island 's first cookbook, was published in 1849. From the diet of the Taíno people come many tropical roots and tubers like yautía (taro) and especially yuca (cassava), from which thin cracker - like casabe bread is made.
Ajicito or cachucha pepper, a slightly hot habanero pepper, recao / culantro (spiny leaf), achiote (annatto), peppers, ají caballero (the hottest pepper native to Puerto Rico), peanuts, guavas, pineapples, jicacos (cocoplum), quenepas (mamoncillo), lerenes (Guinea arrowroot), calabazas (tropical pumpkins), and guanabanas (soursops) are all Taíno foods. The Taínos also grew varieties of beans and some maize / corn, but maize was not as dominant in their cooking as it was for the peoples living on the mainland of Mesoamerica. This is due to the frequent hurricanes that Puerto Rico experiences, which destroy crops of maize, leaving more safeguarded plants like conucos (hills of yuca grown together). Spanish / European influence is also seen in Puerto Rican cuisine. Wheat, chickpeas (garbanzos), capers, olives, olive oil, black pepper, onions, garlic, cilantrillo (cilantro), oregano, basil, sugarcane, citrus fruit, eggplant, ham, lard, chicken, beef, pork, and cheese all came to Borikén (Puerto Rico 's native Taíno name) from Spain. The tradition of cooking complex stews and rice dishes in pots, such as rice and beans, is also thought to be originally European (much like Italians, Spaniards, and the British). Early Dutch, French, Italian, and Chinese immigrants influenced not only the culture but Puerto Rican cooking as well. This great variety of traditions came together to form La Cocina Criolla. Coconuts, coffee (brought by Arabs and Corsicans to Yauco from Kaffa, Ethiopia), okra, yams, sesame seeds, gandules (pigeon peas), sweet bananas, plantains, other root vegetables and Guinea hen all came to Puerto Rico from Africa. Puerto Rico has been commemorated on four U.S. postage stamps, and four personalities have been featured. Insular territories were commemorated in 1937; the third stamp honored Puerto Rico, featuring ' La Fortaleza ', the Spanish Governor 's Palace. The first free election for governor of the U.S. colony of Puerto Rico was honored on April 27, 1949, at San Juan, Puerto Rico. ' Inauguration ' on the 3 - cent stamp refers to the election of Luis Muñoz Marín, the first democratically elected governor of Puerto Rico. San Juan, Puerto Rico was commemorated on its 450th anniversary with an 8 - cent stamp issued September 12, 1971, featuring a sentry box from Castillo San Felipe del Morro. In the "Flags of Our Nation '' series (2008 -- 2012), five of the fifty - five flags featured were territorial flags; a Forever stamp with the Puerto Rico flag, illustrated with a bird, was issued in 2011. Four Puerto Rican personalities have been featured on U.S. postage stamps. These include Roberto Clemente, in 1984 as an individual and in the Legends of Baseball series issued in 2000; Luis Muñoz Marín in the Great Americans series, on February 18, 1990; Julia de Burgos in the Literary Arts series, issued in 2010; and José Ferrer in the Distinguished Americans series, issued in 2012. Baseball was one of the first sports to gain widespread popularity in Puerto Rico. The Puerto Rico Baseball League serves as the only active professional league, operating as a winter league. No Major League Baseball franchise or affiliate plays in Puerto Rico; however, San Juan hosted the Montreal Expos for several series in 2003 and 2004 before they moved to Washington, D.C., and became the Washington Nationals.
The Puerto Rico national baseball team has participated in the World Cup of Baseball, winning one gold (1951), four silver and four bronze medals, the Caribbean Series (winning fourteen times) and the World Baseball Classic. In March 2006, San Juan 's Hiram Bithorn Stadium hosted the opening round as well as the second round of the newly formed World Baseball Classic. Puerto Rican baseball players include Hall of Famers Roberto Clemente, Orlando Cepeda and Roberto Alomar, enshrined in 1973, 1999, and 2011, respectively. Boxing, basketball, and volleyball are considered popular sports as well. Wilfredo Gómez and McWilliams Arroyo have won their respective divisions at the World Amateur Boxing Championships. Other medalists include José Pedraza, who holds a silver medal, and three boxers who finished in third place, José Luis Vellón, Nelson Dieppa and McJoe Arroyo. In the professional circuit, Puerto Rico has the third - most boxing world champions and it is the global leader in champions per capita. These include Miguel Cotto, Félix Trinidad, Wilfred Benítez and Gómez among others. The Puerto Rico national basketball team joined the International Basketball Federation in 1957. Since then, it has won more than 30 medals in international competitions, including gold in three FIBA Americas Championships and the 1994 Goodwill Games. August 8, 2004, became a landmark date for the team when it became the first team to defeat the United States in an Olympic tournament since the integration of National Basketball Association players, winning the opening game 92 -- 73 at the 2004 Summer Olympics in Athens, Greece. Baloncesto Superior Nacional acts as the top - level professional basketball league in Puerto Rico, and has experienced success since its beginning in 1930. Puerto Rico is also a member of FIFA and CONCACAF. In 2008, the archipelago 's first unified league, the Puerto Rico Soccer League, was established. Other sports include professional wrestling and road running. The World Wrestling Council and International Wrestling Association are the largest wrestling promotions in the main island. The World 's Best 10K, held annually in San Juan, has been ranked among the 20 most competitive races globally. The "Puerto Rico All Stars '' team has won twelve world championships in unicycle basketball. Organized streetball has gained some exposure, with teams like "Puerto Rico Street Ball '' competing against established organizations including the Capitanes de Arecibo and AND1 's Mixtape Tour Team. Six years after the first visit, AND1 returned as part of their renamed Live Tour, losing to the Puerto Rico Streetballers. Consequently, practitioners of this style have earned participation in international teams, including Orlando "El Gato '' Meléndez, who became the first Puerto Rican - born athlete to play for the Harlem Globetrotters. Orlando Antigua, whose mother is Puerto Rican, in 1995 became the first Hispanic and the first non-black player in 52 years to play for the Harlem Globetrotters. Puerto Rico has representation in all international competitions including the Summer and Winter Olympics, the Pan American Games, the Caribbean World Series, and the Central American and Caribbean Games. Puerto Rico hosted the Pan Am Games in 1979 (officially in San Juan), and the Central American and Caribbean Games were hosted in 1993 in Ponce and in 2010 in Mayagüez.
Puerto Rican athletes have won nine medals in Olympic competition (one gold, two silver, six bronze), the first one in 1948 by boxer Juan Evangelista Venegas. Monica Puig won the first gold medal for Puerto Rico in the Olympic Games by winning the Women 's Tennis singles title in Rio 2016. Cities and towns in Puerto Rico are interconnected by a system of roads, freeways, expressways, and highways maintained by the Highways and Transportation Authority under the jurisdiction of the U.S. Department of Transportation, and patrolled by the Puerto Rico Police Department. The island 's metropolitan area is served by a public bus transit system and a metro system called Tren Urbano (in English: Urban Train). Other forms of public transportation include seaborne ferries (which serve Puerto Rico 's archipelago) as well as Carros Públicos (private mini buses). Puerto Rico has three international airports: the Luis Muñoz Marín International Airport in Carolina, the Mercedita Airport in Ponce, and the Rafael Hernández Airport in Aguadilla, as well as 27 local airports. The Luis Muñoz Marín International Airport is the largest aerial transportation hub in the Caribbean. Puerto Rico has nine ports in different cities across the main island. The San Juan Port is the largest in Puerto Rico, and the busiest port in the Caribbean and the 10th busiest in the United States in terms of commercial activity and cargo movement, respectively. The second largest port is the Port of the Americas in Ponce, currently under expansion to increase cargo capacity to 1.5 million twenty - foot equivalent units (TEUs) per year. The Puerto Rico Electric Power Authority (PREPA) -- Spanish: Autoridad de Energía Eléctrica (AEE) -- is an electric power company and the government - owned corporation of Puerto Rico responsible for electricity generation, power transmission, and power distribution in Puerto Rico. PREPA is the only entity authorized to conduct such business in Puerto Rico, effectively making it a government monopoly. The Authority is governed by a Governing Board appointed by the Governor with the advice and consent of the Senate of Puerto Rico, and is run by an Executive Director. Telecommunications in Puerto Rico includes radio, television, fixed and mobile telephones, and the Internet. Broadcasting in Puerto Rico is regulated by the U.S. Federal Communications Commission (FCC). As of 2007, there were 30 TV stations, 125 radio stations and roughly 1 million TV sets on the island. Cable TV subscription services are available and the U.S. Armed Forces Radio and Television Service also broadcast on the island.
who plays josie on nicky ricky dicky and dawn
Nicky, Ricky, Dicky & Dawn - wikipedia Nicky, Ricky, Dicky & Dawn is an American sitcom developed by Michael Feldman and created by Matt Fleckenstein that premiered on Nickelodeon on September 13, 2014. The series stars Brian Stepanek, Allison Munn, Aidan Gallagher, Casey Simpson, Mace Coronel, Lizzy Greene, Gabrielle Elyse, and Kyla - Drew Simmons. The series focuses on quadruplets Nicky, Ricky, Dicky, and Dawn Harper, 10 years old at the start of the series, who have nothing in common and often fight, but must work together to solve everyday situations. The series was originally picked up for 13 episodes on March 13, 2014, but the order was later increased to 20 episodes. The series premiered on September 13, 2014. On November 18, 2014, the series was renewed for a second season. The second season premiered on May 23, 2015. On February 9, 2016, Nickelodeon renewed Nicky, Ricky, Dicky & Dawn for a third season of 14 episodes. It was also confirmed that Matt Fleckenstein would step down as showrunner. Actress Lizzy Greene announced on her Twitter account that production for season three started on April 26, 2016. The third season premiered on January 7, 2017. The series was renewed for a fourth season and had its episode order for the third season increased from 14 to 24 by Nickelodeon on March 20, 2017. Nickelodeon cancelled the series on September 26, 2017.
in the book the boy in the striped pajamas where does bruno move to
The Boy in the Striped Pyjamas - wikipedia The Boy in the Striped Pyjamas is a 2006 Holocaust novel by Irish novelist John Boyne. Unlike the months of planning Boyne devoted to his other books, he said that he wrote the entire first draft of The Boy in the Striped Pyjamas in two and a half days, barely sleeping until he got to the end. As of March 2010, the novel had sold more than five million copies around the world. In both 2007 and 2008, it was the best - selling book of the year in Spain, and it has also reached number one on the New York Times bestseller list, as well as in the UK, Ireland, and Australia. The book was adapted in 2008 as a film of the same name. Bruno is a 9 - year - old boy growing up during World War II in Berlin. He lives with his parents, his 12 - year - old sister Gretel and maids, one of whom is called Maria. After a visit by Adolf Hitler, Bruno 's father is promoted to Commandant, and the family has to move to ' Out - With ' because of the orders of "The Fury '' (Bruno 's naïve interpretation of the word ' Führer '). Bruno is initially upset about moving to Out - With (never identified, but cf. Auschwitz) and leaving his friends, Daniel, Karl, and Martin. From the house at Out - With, Bruno sees a camp in which the prisoners wear striped pyjamas. One day, Bruno decides to explore the strange wire fence. As he walks along the fence, he meets a Jewish boy named Shmuel, who he learns shares his birthday. Shmuel says that his father, grandfather, and brother are with him on his side of the fence, but he is separated from his mother. Bruno and Shmuel talk and become very good friends, although Bruno still does not understand very much about Shmuel and his side of the fence. Nearly every day, unless it 's raining, Bruno goes to see Shmuel and sneaks him food. As the meetings go on and Shmuel grows thinner and thinner, Bruno 's naïveté keeps him from realizing that he is living beside a death camp. When lice eggs are discovered in Bruno 's hair, his head is shaved. Bruno comments that he looks like Shmuel, and Shmuel agrees, except that Bruno is fatter. Bruno 's mother eventually persuades his father to send her and the children back to Berlin while he stays at Out - With. The next day, Bruno concocts a plan with Shmuel to sneak into the camp to look for Shmuel 's father. Shmuel brings a set of prison clothes (which look to Bruno like striped pyjamas), and Bruno leaves his own clothes outside the fence. As they search the camp, both children are rounded up along with a group of prisoners on a "march ''. In the gas chamber, Bruno apologizes to Shmuel for not finding his father and tells Shmuel that he is Bruno 's best friend for life. Shmuel does not answer, as at that moment the door of the gas chamber is closed, it becomes dark, and all is chaos. Some critics have called the premise of the book and subsequent film -- that there would be a child of Shmuel 's age in the camp -- erroneous. Reviewing the original book, Rabbi Benjamin Blech wrote: "Note to the reader: There were no 9 - year - old Jewish boys in Auschwitz -- the Nazis immediately gassed those not old enough to work. '' Rabbi Blech affirmed the opinion of a Holocaust survivor friend that the book is "not just a lie and not just a fairytale, but a profanation ''. Blech acknowledges the objection that a "fable '' need not be factually accurate; he counters that the book trivializes the conditions in and around the death camps and perpetuates the "myth that those (...)
not directly involved can claim innocence '', and thus undermines its moral authority. Students who read it, he warns, may believe the camps "were n't that bad '' if a boy could conduct a clandestine friendship with a Jewish captive of the same age, unaware of "the constant presence of death ''. Kathryn Hughes, whilst agreeing about the implausibility of the plot, argues that "Bruno 's innocence comes to stand for the willful refusal of all adult Germans to see what was going on under their noses ''.
which countries are not members of united nations
Member states of the United Nations - wikipedia The United Nations member states are the 193 sovereign states that are members of the United Nations (UN) and have equal representation in the UN General Assembly. The UN is the world 's largest intergovernmental organization, ahead of the Organisation of Islamic Cooperation. The criteria for admission of new members to the UN are set out in Chapter II, Article 4 of the UN Charter: A recommendation for admission from the Security Council requires affirmative votes from at least nine of the council 's fifteen members, with none of the five permanent members using their veto power. The Security Council 's recommendation must then be approved in the General Assembly by a two - thirds majority vote. In principle, only sovereign states can become UN members, and currently all UN members are sovereign states. Although five members were not sovereign when they joined the UN, all subsequently became fully independent between 1946 and 1991. Because a state can only be admitted to membership in the UN by the approval of the Security Council and the General Assembly, a number of states that are considered sovereign according to the Montevideo Convention are not members of the UN. This is because the UN does not consider them to possess sovereignty, mainly due to the lack of international recognition or due to opposition from one of the permanent members. In addition to the member states, the UN also invites non-member states to become observers at the UN General Assembly (currently two: the Holy See and Palestine), allowing them to participate and speak in General Assembly meetings, but not vote. Observers are generally intergovernmental organizations and international organizations and entities whose statehood or sovereignty is not precisely defined. The UN officially came into existence on 24 October 1945, after ratification of the United Nations Charter by the five permanent members of the United Nations Security Council (the Republic of China, France, the Soviet Union, the United Kingdom, and the United States) and a majority of the other signatories. A total of 51 original members (or founding members) joined that year; 50 of them signed the Charter at the United Nations Conference on International Organization in San Francisco on 26 June 1945, while Poland, which was not represented at the conference, signed it on 15 October 1945. The original members of the United Nations were: France, the Republic of China, the Soviet Union, the United Kingdom, the United States, Argentina, Australia, Belgium, Bolivia, Brazil, Byelorussia, Canada, Chile, Colombia, Costa Rica, Cuba, Czechoslovakia, Denmark, the Dominican Republic, Ecuador, Egypt, El Salvador, Ethiopia, Greece, Guatemala, Haiti, Honduras, India, Iran, Iraq, Lebanon, Liberia, Luxembourg, Mexico, the Netherlands, New Zealand, Nicaragua, Norway, Panama, Paraguay, Peru, the Philippines, Poland, Saudi Arabia, South Africa, Syria, Turkey, Ukraine, Uruguay, Venezuela and Yugoslavia. Among the original members, 49 are either still UN members or had their memberships in the UN continued by a successor state (see table below); for example, the membership of the Soviet Union was continued by the Russian Federation after its dissolution (see the section Former members: Union of Soviet Socialist Republics). 
The other two original members, Czechoslovakia and Yugoslavia (i.e., the Socialist Federal Republic of Yugoslavia), were dissolved, and from 1992 their memberships in the UN were not continued by any single successor state (see the sections Former members: Czechoslovakia and Former members: Yugoslavia). At the time of the UN's founding, the seat of China in the UN was held by the Republic of China, but as a result of United Nations General Assembly Resolution 2758 in 1971, it is now held by the People's Republic of China (see the section Former members: Republic of China (Taiwan)). A number of the original members were not sovereign when they joined the UN, and only gained full independence later. The current members and their dates of admission are listed below with their official designations used by the United Nations. The alphabetical order of the member states' official designations is used to determine the seating arrangement of General Assembly sessions, where a draw is held each year to select a member state as the starting point. Several members use their full official names in their official designations and thus are sorted out of order from their common names: the Democratic People's Republic of Korea, the Democratic Republic of the Congo, the Republic of Korea, the Republic of Moldova, The former Yugoslav Republic of Macedonia (a provisional reference used for all purposes within the UN, and listed under T), and the United Republic of Tanzania. The Republic of China (ROC) joined the UN as an original member on 24 October 1945, and as set out by the United Nations Charter, Chapter V, Article 23, became one of the five permanent members of the United Nations Security Council. In 1949, as a result of the Chinese Civil War, the Kuomintang-led ROC government lost effective control of mainland China and relocated to the island of Taiwan, and the Communist Party-led government of the People's Republic of China (PRC), declared on 1 October 1949, took control of mainland China. The UN was notified on 18 November 1949 of the formation of the Central People's Government of the People's Republic of China; however, the Government of the Republic of China continued to represent China at the UN, despite the small size of the ROC's jurisdiction (Taiwan and a number of smaller islands) compared to the PRC's jurisdiction over mainland China. As both governments claimed to be the sole legitimate representative of China, proposals to effect a change in the representation of China in the UN were discussed but rejected for the next two decades, as the ROC was still recognized as the sole legitimate representative of China by a majority of UN members. Both sides rejected compromise proposals to allow both states to participate in the UN, based on the One-China policy. By the 1970s, a shift had occurred in international diplomatic circles and the PRC had gained the upper hand in international diplomatic relations and recognition count.
On 25 October 1971, on the 21st occasion that the United Nations General Assembly debated the PRC's admission to the UN, United Nations General Assembly Resolution 2758 was adopted, by which the Assembly recognized that "the representatives of the Government of the People's Republic of China are the only lawful representatives of China to the United Nations and that the People's Republic of China is one of the five permanent members of the Security Council," and decided "to restore all its rights to the People's Republic of China and to recognize the representatives of its Government as the only legitimate representatives of China to the United Nations, and to expel forthwith the representatives of Chiang Kai-shek from the place which they unlawfully occupy at the United Nations and in all the organizations related to it." This effectively transferred the seat of China in the UN, including its permanent seat on the Security Council, from the ROC to the PRC, and expelled the ROC from the UN. From the United Nations' perspective, the "Republic of China" is not a former member: no UN member was expelled in 1971. Rather, the credentials of one Chinese delegation (from Taipei) were rejected and the credentials of another Chinese delegation (from Beijing) were accepted. Beyond the loss of the seat, the UN Secretary-General concluded from the resolution that the General Assembly considered Taiwan to be a province of China, and consequently decided that the ROC could not become a party to treaties deposited with the Secretary-General. In 1993 the ROC began campaigning to rejoin the UN separately from the People's Republic of China. A number of options were considered, including seeking membership in the specialized agencies, applying for observer status, applying for full membership, or having Resolution 2758 revoked to reclaim the seat of China in the UN. Every year from 1993 to 2006, UN member states submitted a memorandum to the UN Secretary-General requesting that the UN General Assembly consider allowing the ROC to resume participating in the United Nations. This approach was chosen, rather than a formal application for membership, because it could be enacted by the General Assembly, while a membership application would need Security Council approval, where the PRC held a veto. Early proposals recommended admitting the ROC with parallel representation of China, alongside the People's Republic of China, pending eventual reunification, citing examples of other divided countries that had become separate UN member states, such as East and West Germany and North and South Korea. Later proposals emphasized that the ROC was a separate state, over which the PRC had no effective sovereignty. These proposed resolutions referred to the ROC under a variety of names: "Republic of China in Taiwan" (1993-94), "Republic of China on Taiwan" (1995-97, 1999-2002), "Republic of China" (1998), "Republic of China (Taiwan)" (2003) and "Taiwan" (2004-06). However, all fourteen attempts were unsuccessful, as the General Assembly's General Committee, under strong opposition from the PRC, declined to put the issue on the Assembly's agenda for debate. While all these proposals were vague, requesting that the ROC be allowed to participate in UN activities without specifying any legal mechanism, in 2007 the ROC submitted a formal application under the name "Taiwan" for full membership in the UN.
However, the application was rejected by the United Nations Office of Legal Affairs, citing General Assembly Resolution 2758, without being forwarded to the Security Council. Secretary-General of the United Nations Ban Ki-moon stated that: The position of the United Nations is that the People's Republic of China is representing the whole of China as the sole and legitimate representative Government of China. The decision until now about the wish of the people in Taiwan to join the United Nations has been decided on that basis. The resolution (General Assembly Resolution 2758) that you just mentioned is clearly mentioning that the Government of China is the sole and legitimate Government and the position of the United Nations is that Taiwan is part of China. Responding to the UN's rejection of its application, the ROC government has stated that Taiwan is not now, nor has it ever been, under the jurisdiction of the PRC, and that since General Assembly Resolution 2758 did not clarify the issue of Taiwan's representation in the UN, it does not prevent Taiwan's participation in the UN as an independent sovereign nation. The ROC government also criticized Ban for asserting that Taiwan is part of China and for returning the application without passing it to the Security Council or the General Assembly, contrary to the UN's standard procedure (Provisional Rules of Procedure of the Security Council, Chapter X, Rule 59). On the other hand, the PRC government, which has stated that Taiwan is part of China and firmly opposes the application of any Taiwan authorities to join the UN either as a member or an observer, praised the UN's decision, saying it "was made in accordance with the UN Charter and Resolution 2758 of the UN General Assembly, and showed the UN and its member states' universal adherence to the one-China principle". A group of UN member states put forward a draft resolution for that fall's UN General Assembly calling on the Security Council to consider the application. The following year, two referendums in Taiwan on the government's attempts to regain participation at the UN did not pass due to low turnout. That fall the ROC took a new approach, with its allies submitting a resolution requesting that the "Republic of China (Taiwan)" be allowed to have "meaningful participation" in the UN specialized agencies. Again the issue was not put on the Assembly's agenda. In 2009, the ROC chose not to bring the issue of its participation in the UN up for debate at the General Assembly, for the first time since it began the campaign in 1993. In May 2009, the Department of Health of the Republic of China was invited by the World Health Organization to attend the 62nd World Health Assembly as an observer under the name "Chinese Taipei". This was the ROC's first participation in an event organized by a UN-affiliated agency since 1971, a result of the improved cross-strait relations since Ma Ying-jeou became President of the Republic of China a year before. The Republic of China is officially recognized by 19 UN member states and the Holy See. It maintains unofficial relations with around 100 nations, including the United States and Japan. Czechoslovakia joined the UN as an original member on 24 October 1945, with its name changed to the Czech and Slovak Federative Republic on 20 April 1990.
Upon the imminent dissolution of Czechoslovakia, in a letter dated 10 December 1992, its Permanent Representative informed the United Nations Secretary-General that the Czech and Slovak Federative Republic would cease to exist on 31 December 1992 and that the Czech Republic and Slovakia, as successor states, would apply for membership in the UN. Neither state sought sole successor state status. Both states were admitted to the UN on 19 January 1993. Both the Federal Republic of Germany (West Germany) and the German Democratic Republic (East Germany) were admitted to the UN on 18 September 1973. Through the accession of the East German federal states to the Federal Republic of Germany, effective from 3 October 1990, the territory of the German Democratic Republic became part of the Federal Republic of Germany, today simply known as Germany. Consequently, the Federal Republic of Germany continued being a member of the UN while the German Democratic Republic ceased to exist. The Federation of Malaya joined the United Nations on 17 September 1957. On 16 September 1963, its name was changed to Malaysia, following the formation of Malaysia from Singapore, North Borneo (now Sabah), Sarawak and the Federation of Malaya. Singapore became an independent State on 9 August 1965 and a Member of the United Nations on 21 September 1965. Tanganyika was admitted to the UN on 14 December 1961, and Zanzibar was admitted to the UN on 16 December 1963. Following the ratification on 26 April 1964 of the Articles of Union between Tanganyika and Zanzibar, the two states merged to form the single member "United Republic of Tanganyika and Zanzibar", with its name changed to the United Republic of Tanzania on 1 November 1964. The Union of Soviet Socialist Republics (USSR) joined the UN as an original member on 24 October 1945, and as set out by the United Nations Charter, Chapter V, Article 23, became one of the five permanent members of the United Nations Security Council. Upon the imminent dissolution of the USSR, in a letter dated 24 December 1991, Boris Yeltsin, the President of the Russian Federation, informed the United Nations Secretary-General that the membership of the USSR in the Security Council and all other UN organs was being continued by the Russian Federation, with the support of the 11 member countries of the Commonwealth of Independent States. The other fourteen independent states established from the former Soviet republics were all admitted to the UN. Both Egypt and Syria joined the UN as original members on 24 October 1945. Following a plebiscite on 21 February 1958, the United Arab Republic was established by a union of Egypt and Syria and continued as a single member. On 13 October 1961, Syria, having resumed its status as an independent state, resumed its separate membership in the UN. Egypt continued as a UN member under the name of the United Arab Republic until it reverted to its original name on 2 September 1971. Syria changed its name to the Syrian Arab Republic on 14 September 1971. Yemen (i.e., North Yemen) was admitted to the UN on 30 September 1947; Southern Yemen (i.e., South Yemen) was admitted to the UN on 14 December 1967, with its name changed to the People's Democratic Republic of Yemen on 30 November 1970; it was later referred to as Democratic Yemen. On 22 May 1990, the two states merged to form the Republic of Yemen, which continued as a single member under the name Yemen.
The Socialist Federal Republic of Yugoslavia, referred to as Yugoslavia, joined the UN as an original member on 24 October 1945. By 1992, it had effectively dissolved into five independent states, which were all subsequently admitted to the UN. Due to the dispute over its legal successor states, the member state "Yugoslavia", referring to the former Socialist Federal Republic of Yugoslavia, remained on the official roster of UN members for many years after its effective dissolution. Following the admission of all five states as new UN members, "Yugoslavia" was removed from the official roster. The government of the Federal Republic of Yugoslavia, established on 28 April 1992 by the remaining Yugoslav republics of Montenegro and Serbia, claimed to be the legal successor state of the former Socialist Federal Republic of Yugoslavia. However, on 30 May 1992, United Nations Security Council Resolution 757 was adopted, imposing international sanctions on the Federal Republic of Yugoslavia for its role in the Yugoslav Wars and noting that "the claim by the Federal Republic of Yugoslavia (Serbia and Montenegro) to continue automatically the membership of the former Socialist Federal Republic of Yugoslavia in the United Nations has not been generally accepted". On 22 September 1992, United Nations General Assembly Resolution A/RES/47/1 was adopted, by which the Assembly considered that "the Federal Republic of Yugoslavia (Serbia and Montenegro) cannot continue automatically the membership of the former Socialist Federal Republic of Yugoslavia in the United Nations", and therefore decided that "the Federal Republic of Yugoslavia (Serbia and Montenegro) should apply for membership in the United Nations and that it shall not participate in the work of the General Assembly". The Federal Republic of Yugoslavia refused to comply with the resolution for many years, but following the ousting of President Slobodan Milošević from office, it applied for membership and was admitted to the UN on 1 November 2000. On 4 February 2003, the Federal Republic of Yugoslavia was officially renamed Serbia and Montenegro, following the adoption and promulgation of the Constitutional Charter of Serbia and Montenegro by the Assembly of the Federal Republic of Yugoslavia. On the basis of a referendum held on 21 May 2006, Montenegro declared independence from Serbia and Montenegro on 3 June 2006. In a letter dated the same day, the President of Serbia informed the United Nations Secretary-General that the membership of Serbia and Montenegro in the UN was being continued by Serbia, following Montenegro's declaration of independence, in accordance with the Constitutional Charter of Serbia and Montenegro. Montenegro was admitted to the UN on 28 June 2006. In the aftermath of the Kosovo War, the territory of Kosovo, then an autonomous province of the Federal Republic of Yugoslavia, was put under the interim administration of the United Nations Mission in Kosovo on 10 June 1999. On 17 February 2008, it declared independence, but this has not been recognised by Serbia. The Republic of Kosovo is not a member of the UN, but it is a member of the International Monetary Fund and the World Bank Group, both specialized agencies in the United Nations System.
The Republic of Kosovo is recognised by 111 UN member states, including three of the five permanent members of the United Nations Security Council (France, the United Kingdom, and the United States), while the other two -- China and Russia -- do not recognise Kosovo. On 22 July 2010, the International Court of Justice, the primary judicial organ of the UN, issued an advisory opinion ruling that Kosovo's declaration of independence was not in violation of international law. A member state may be suspended or expelled from the UN, according to the United Nations Charter. From Chapter II, Article 5: "A Member of the United Nations against which preventive or enforcement action has been taken by the Security Council may be suspended from the exercise of the rights and privileges of membership by the General Assembly upon the recommendation of the Security Council. The exercise of these rights and privileges may be restored by the Security Council." From Article 6: "A Member of the United Nations which has persistently violated the Principles contained in the present Charter may be expelled from the Organization by the General Assembly upon the recommendation of the Security Council." Since its inception, no member state has been suspended or expelled from the UN under Articles 5 and 6. However, in a few cases, states have been suspended or expelled from participating in UN activities by means other than Articles 5 and 6. Since the inception of the UN, only one member state (excluding those that dissolved or merged with other member states) has unilaterally withdrawn from the UN. During the Indonesia-Malaysia confrontation, and in response to the election of Malaysia as a non-permanent member of the United Nations Security Council, in a letter dated 20 January 1965, Indonesia informed the United Nations Secretary-General that it had decided "at this stage and under the present circumstances" to withdraw from the UN. However, following the overthrow of President Sukarno, in a telegram dated 19 September 1966, Indonesia notified the Secretary-General of its decision "to resume full cooperation with the United Nations and to resume participation in its activities starting with the twenty-first session of the General Assembly". On 28 September 1966, the United Nations General Assembly took note of the decision of the Government of Indonesia, and the President invited the representatives of that country to take their seats in the Assembly. Unlike suspension and expulsion, no express provision is made in the United Nations Charter as to whether or how a member can legally withdraw from the UN (largely to prevent the threat of withdrawal from being used as a form of political blackmail, or to evade obligations under the Charter, similar to the withdrawals that weakened the UN's predecessor, the League of Nations), or as to whether a request for readmission by a withdrawn member should be treated the same as an application for membership, i.e., requiring Security Council as well as General Assembly approval. Indonesia's return to the UN would suggest that this is not required; however, scholars have argued that the course of action taken by the General Assembly was not in accordance with the Charter from a legal point of view. In addition to the member states, there are two non-member permanent observer states: the Holy See and the State of Palestine.
A number of states were also granted observer status before being admitted to the UN as full members (see United Nations General Assembly observers for the full list). The most recent case of an observer state becoming a member state was Switzerland, which was admitted in 2002. A European Union institution, the European Commission, was granted observer status at the UNGA through Resolution 3208 in 1974; following the Treaty of Lisbon in 2009, the delegates were accredited directly to the EU. The EU was accorded full rights in the General Assembly, bar the right to vote and to put forward candidates, via UNGA Resolution A/RES/65/276 on 10 May 2011. It is the only non-state party to over 50 multilateral conventions, and it has participated as a full member in every way except for having a vote in a number of UN conferences. The sovereignty status of Western Sahara is in dispute between Morocco and the Polisario Front. Most of the territory is controlled by Morocco, the remainder (the Free Zone) by the Sahrawi Arab Democratic Republic, proclaimed by the Polisario Front. Western Sahara is listed by the UN as a "non-self-governing territory". The Cook Islands and Niue, both associated states of New Zealand, are not members of the UN, but they are members of specialized agencies of the UN such as WHO and UNESCO, and they had their "full treaty-making capacity" recognized by the United Nations Secretariat in 1992 and 1994 respectively. They have since become parties to a number of international treaties for which the UN Secretariat acts as depositary, such as the United Nations Framework Convention on Climate Change and the United Nations Convention on the Law of the Sea, and they are treated as non-member states. Both the Cook Islands and Niue have expressed a desire to become UN member states, but New Zealand has said that it would not support an application without a change in their constitutional relationship, in particular the islanders' right to New Zealand citizenship.
how to be a judge in the supreme court
List of sitting judges of the Supreme Court of India - wikipedia This is a list of judges of the Supreme Court of India, the highest court in the Republic of India. The list is ordered according to seniority. There are currently 22 judges (including the Chief Justice of India) against a maximum possible strength of 31. As per the Constitution of India, judges of the Supreme Court retire at age 65. Justice Dipak Misra is the current Chief Justice of India, the 45th person to hold the office.
who played dana pruitt on boy meets world
Larisa Oleynik - wikipedia Larisa Romanovna Oleynik (/ləˈrɪsə oʊˈleɪnɪk/; born June 7, 1981) is an American actress. She is known for starring in the title role of the children's television series The Secret World of Alex Mack during the mid-1990s. She has also appeared in theatrical films, including The Baby-Sitters Club and 10 Things I Hate About You. During her period as a teen idol, she was described as "one of America's favorite 15-year-olds" and "the proverbial girl next door". Oleynik was born in Santa Clara County, California. Her mother, Lorraine (née Allen), is a former nurse, and her late father, Roman Oleynik (1936-2003), was an anesthesiologist. Her father was of Ukrainian ancestry, and she was raised in the Eastern Orthodox Church. Oleynik grew up in the San Francisco Bay Area. She attended Pinewood School in Los Altos, California. As her acting career flourished, she would "divide her time between normal childhood experiences in Northern California and auditions in Los Angeles." After the success of her role as Alex Mack, Oleynik decided to attend college. She attended Sarah Lawrence College and described her time there as "the best decision I've made". Oleynik began acting in a San Francisco production of Les Misérables in 1989, after seeing an audition ad in a newspaper when she was eight years old. She obtained two parts in the production (young Cosette and young Eponine), both with singing roles. After appearing in the musical, she contacted an agent and began taking acting lessons. Her onscreen acting career began at age 12, in a 1993 episode of the television series Dr. Quinn, Medicine Woman; the same year she also appeared in the made-for-television film River of Rage: The Taking of Maggie Keene, and in 1996 she made a cameo on The Adventures of Pete & Pete as a nurse at the beginning of the episode "Dance Fever". Later in 1993, she was cast in the lead role of the series The Secret World of Alex Mack, in which she portrayed a teenage girl who receives telekinetic powers as the result of an accident. She won the role of Alex Mack over 400 other aspirants. The series ran on Nickelodeon from 1994 to 1998 and was one of the network's top three most watched shows, becoming a favorite among child and teen audiences and turning Oleynik into a teen idol. During the show's heyday, children who met Oleynik (and were too young to understand special effects) would often ask her to "morph" for them. Rather than try to explain things, she would quickly glance around, then tell them "Not here -- everybody would see!". Oleynik reprised the role in an All That sketch, although the name was changed to "Alex Sax". She later made an appearance in the 100th episode of the show. Also during her time on The Secret World of Alex Mack, she played one of the lead characters in the 1995 feature film The Baby-Sitters Club (opposite Rachael Leigh Cook and Schuyler Fisk), appeared in several episodes of Boy Meets World, wrote an advice column for Tiger Beat magazine, and was involved in Nickelodeon's The Big Help charity, Hands Across Communication, the Surfrider Foundation and the Starlight Children's Foundation. She has also hosted the CableACE Awards, the Daytime Emmy Awards, the YTV Achievement Awards, The Nickelodeon Kids' Choice Awards and The Big Help. She has commented that she stayed "grounded" during her period as a teen star, mainly through the help of a "strong network of people" that she is close to.
After The Secret World of Alex Mack ended its run, Oleynik had a starring role in the film 10 Things I Hate About You as Bianca. The film was released in April 1999 and did fairly well at the box office, grossing a total of $38 million domestically. From 1998 to 2000, Oleynik appeared in twenty-one episodes of the NBC series 3rd Rock from the Sun as Alissa Strudwick. During 2000, she also appeared in two independent films: 100 Girls (opposite Emmanuelle Chriqui, Katherine Heigl and Jonathan Tucker) and A Time for Dancing (opposite Shiri Appleby); neither film received a theatrical release in the United States. She appeared in Malcolm in the Middle as Reese Wilkerson's lesbian army buddy, who develops a crush on Lois Wilkerson. Oleynik had a supporting role in the film An American Rhapsody, which received a limited release in August 2001, and appeared in Bringing Rain, a low-budget film. Oleynik was cast in a supporting role in the series Pepper Dennis, which began airing on The WB in April 2006 but was not picked up by The WB's successor, The CW. In March 2008, Oleynik guest-starred in episode 13 of Aliens in America. In 2009, she provided audio commentary for the 10 Things I Hate About You 10th Anniversary Edition Blu-ray. In March 2011, Oleynik began a recurring role on Hawaii Five-0 as CIA analyst Jenna Kaye, which lasted until her character was killed off. Oleynik appears as Ken Cosgrove's girlfriend (and later wife) Cynthia Baxter in several episodes of the AMC television show Mad Men. In January 2013, according to TMZ, Oleynik was granted a restraining order against one of her fans, who she claims was so obsessed that he changed his last name to hers and left gifts for her at her mother's apartment. She resides in Venice, California.
who wanted to change the anglican church from within
English Reformation - wikipedia The English Reformation was a series of events in 16th-century England by which the Church of England broke away from the authority of the Pope and the Roman Catholic Church. These events were, in part, associated with the wider process of the European Protestant Reformation, a religious and political movement that affected the practice of Christianity across western and central Europe during this period. Many factors contributed to the process: the decline of feudalism and the rise of nationalism, the rise of the common law, the invention of the printing press and increased circulation of the Bible, and the transmission of new knowledge and ideas among scholars, the upper and middle classes and readers in general. However, the various phases of the English Reformation, which also covered Wales and Ireland, were largely driven by changes in government policy, to which public opinion gradually accommodated itself. Arising from Henry VIII's desire for an annulment of his marriage (first requested of Pope Clement VII in 1527), the English Reformation was at the outset more a political affair than a theological dispute. The reality of political differences between Rome and England allowed growing theological disputes to come to the fore. Until the break with Rome, it was the Pope and general councils of the Church that decided doctrine. Church law was governed by canon law, with final jurisdiction in Rome. Church taxes were paid directly to Rome, and the Pope had the final word in the appointment of bishops. The break with Rome was effected by a series of acts of Parliament passed between 1532 and 1534, among them the 1534 Act of Supremacy, which declared that Henry was the "Supreme Head on earth of the Church of England". (This title was renounced by Mary I in 1553 in the process of restoring papal jurisdiction; when Elizabeth I reasserted the royal supremacy in 1559, her title was Supreme Governor.) Final authority in doctrinal and legal disputes now rested with the monarch, and the papacy was deprived of revenue and of the final say in the appointment of bishops. The theology and liturgy of the Church of England became markedly Protestant during the reign of Henry's son Edward VI, largely along lines laid down by Archbishop Thomas Cranmer. Under Mary, the whole process was reversed and the Church of England was again placed under papal jurisdiction. Soon after, Elizabeth reintroduced the Protestant faith, but in a more moderate manner. The structure and theology of the church were a matter of fierce dispute for generations. The violent aspect of these disputes, manifested in the English Civil Wars, ended when the last Roman Catholic monarch, James II, was deposed and Parliament asked William III and Mary II to rule jointly in conjunction with the English Bill of Rights in 1688 (the "Glorious Revolution"), from which emerged a church polity with an established church and a number of non-conformist churches whose members at first suffered various civil disabilities that were removed over time. The legacy of the former Roman Catholic establishment remained an issue for some time, and still exists today. A substantial minority in England remained Roman Catholic, and, in an effort to exclude their church from British institutions, its organisation remained illegal until the 19th century. Henry VIII acceded to the English throne in 1509 at the age of 17.
He made a dynastic marriage with Catherine of Aragon, widow of his brother Arthur, in June 1509, just before his coronation on Midsummer's Day. Unlike his father, who was secretive and conservative, the young Henry appeared the epitome of chivalry and sociability. An observant Roman Catholic, he heard up to five masses a day (except during the hunting season); of "powerful but unoriginal mind", he let himself be influenced by his advisors, from whom he was never apart, by night or day. He was thus susceptible to whoever had his ear. This contributed to a state of hostility between his young contemporaries and the Lord Chancellor, Cardinal Thomas Wolsey. As long as Wolsey had his ear, Henry's Roman Catholicism was secure: in 1521, he had defended the Roman Catholic Church from Martin Luther's accusations of heresy in a book he wrote -- probably with considerable help from the conservative Bishop of Rochester, John Fisher -- entitled The Defence of the Seven Sacraments, for which he was awarded the title "Defender of the Faith" (Fidei Defensor) by Pope Leo X. (Successive English and British monarchs have retained this title to the present, even after the Anglican Church broke away from Roman Catholicism, in part because the title was re-conferred by Parliament in 1544, after the split.) Wolsey's enemies at court included those who had been influenced by Lutheran ideas, among whom was the attractive, charismatic Anne Boleyn. Anne arrived at court in 1522 as maid of honour to Queen Catherine, having spent some years in France being educated by Queen Claude of France. She was a woman of "charm, style and wit, with will and savagery which made her a match for Henry". Anne was a distinguished conversationalist in French, a singer and a dancer. She was cultured and is the disputed author of several songs and poems. By 1527, Henry wanted his marriage to Catherine annulled. She had not produced a male heir who survived longer than two months, and Henry wanted a son to secure the Tudor dynasty. Before Henry's father (Henry VII) ascended the throne, England had been beset by civil warfare over rival claims to the English crown, and Henry wanted to avoid a similar uncertainty over the succession. Catherine of Aragon's only surviving child was Princess Mary. Henry claimed that this lack of a male heir was because his marriage was "blighted in the eyes of God". Catherine had been his late brother's wife, and it was therefore against biblical teaching for Henry to have married her (Leviticus 20:21); a special dispensation from Pope Julius II had been needed to allow the wedding in the first place. Henry argued that the marriage had never been valid because the biblical prohibition was part of unbreakable divine law, which even popes could not dispense with. In 1527, Henry asked Pope Clement VII to annul the marriage, but the Pope refused: according to canon law, the Pope cannot annul a marriage on the basis of a canonical impediment previously dispensed. Clement also feared the wrath of Catherine's nephew, Holy Roman Emperor Charles V, whose troops earlier that year had sacked Rome and briefly taken the Pope prisoner. The combination of his "scruple of conscience" and his captivation by Anne Boleyn made Henry's desire to rid himself of his Queen compelling.
The indictment of his chancellor Cardinal Wolsey in 1529 for praemunire (taking the authority of the Papacy above the Crown), and Wolsey's subsequent death in November 1530 on his way to London to answer a charge of high treason, left Henry open to the opposing influences of the supporters of the Queen and of those who sanctioned the abandonment of the Roman allegiance, for whom an annulment was but an opportunity. In 1529, the King summoned Parliament to deal with the annulment, thus bringing together those who wanted reform but disagreed about what form it should take; it became known as the Reformation Parliament. There were common lawyers who resented the privileges of the clergy to summon laity to their courts, and there were those who had been influenced by Lutheranism and were hostile to the theology of Rome; Thomas Cromwell was both. Henry's chancellor Thomas More, Wolsey's successor, also wanted reform: he wanted new laws against heresy. Cromwell was a lawyer and a member of Parliament -- a Protestant who saw how Parliament could be used to advance the Royal Supremacy, which Henry wanted, and to further the Protestant beliefs and practices that he and his friends wanted. One of his closest friends was Thomas Cranmer, soon to be made an archbishop. In the matter of the annulment, no progress seemed possible: the Pope seemed more afraid of Emperor Charles V than of Henry. Anne, Cromwell and their allies wished simply to ignore the Pope, but in October 1530 a meeting of clergy and lawyers advised that Parliament could not empower the archbishop to act against the Pope's prohibition. Henry thus resolved to bully the priests. Having brought down his chancellor Cardinal Wolsey, Henry VIII finally resolved to charge the whole English clergy with praemunire to secure their agreement to his annulment. The Statute of Praemunire, enacted in 1392, which forbade obedience to the authority of the Pope or of any foreign rulers, had previously been used against individuals in the ordinary course of court proceedings. Now Henry, having first charged Queen Catherine's supporters (Bishops John Fisher, Nicholas West and Henry Standish, and the Archdeacon of Exeter, Adam Travers), decided to proceed against the whole clergy. Henry claimed £100,000 from the Convocation of Canterbury (a representative body of the English clergy) for their pardon, which was granted by the Convocation on 24 January 1531. The clergy wanted the payment spread over five years, but Henry refused. The Convocation responded by withdrawing the payment altogether and demanded that Henry fulfil certain guarantees before they would give him the money. Henry refused these conditions; he agreed only to the five-year period of payment and added five articles setting out his terms. In Parliament, Bishop Fisher championed Catherine and the clergy; he had inserted into the first article the phrase "... as far as the word of God allows...". In Convocation, however, William Warham, Archbishop of Canterbury, requested a discussion but was met by a stunned silence; then Warham said, "He who is silent seems to consent," to which a clergyman responded, "Then we are all silent." The Convocation granted its consent to the King's five articles and the payment on 8 March 1531. That same year, Parliament passed the Pardon to Clergy Act 1531. The breaking of the power of Rome proceeded little by little.
In 1532, Cromwell brought before Parliament the Supplication against the Ordinaries, which listed nine grievances against the Church, including abuses of power and Convocation's independent legislative power. Finally, on 10 May, the King demanded of Convocation that the Church renounce all authority to make laws. On 15 May, the Submission of the Clergy was subscribed, which recognised Royal Supremacy over the Church so that it could no longer make canon law without royal licence -- i.e., without the King's permission -- thus emasculating it as a law-making body. (Parliament subsequently enacted this in 1534 and again in 1536.) The day after this, More resigned as chancellor, leaving Cromwell as Henry's chief minister. (Cromwell never became chancellor; his power came -- and was lost -- through his informal relations with Henry.) Several acts of Parliament then followed. The Act in Conditional Restraint of Annates proposed that the clergy pay no more than 5 percent of their first year's revenue (annates) to Rome. This was initially controversial and required Henry to visit the House of Lords three times to browbeat the Commons. The Act in Restraint of Appeals, drafted by Cromwell, apart from outlawing appeals to Rome on ecclesiastical matters, declared that "This realm of England is an Empire, and so hath been accepted in the world, governed by one Supreme Head and King having the dignity and royal estate of the Imperial Crown of the same, unto whom a body politic compact of all sorts and degrees of people divided in terms and by names of Spirituality and Temporality, be bounden and owe to bear next to God a natural and humble obedience." This declared England an independent country in every respect. The English historian Geoffrey Elton called this act an "essential ingredient" of the "Tudor revolution" in that it expounded a theory of national sovereignty. The Act in Absolute Restraint of Annates outlawed all annates to Rome and also ordered that if cathedrals refused the King's nomination for bishop, they would be liable to punishment under praemunire. Finally, in 1534, the Act of Supremacy made Henry "supreme head in earth of the Church of England" and disregarded any "usage, custom, foreign laws, foreign authority (or) prescription". Meanwhile, having taken Anne to France on a pre-nuptial honeymoon, Henry married her in a secret ceremony in January 1533. This was made easier by the death of Archbishop Warham, a strong opponent of an annulment; Henry appointed Thomas Cranmer to succeed him as Archbishop of Canterbury. Cranmer was prepared to grant the annulment of the marriage to Catherine as Henry required, going so far as to pronounce, on 23 May, the judgment that Henry's marriage with Catherine was against the law of God. Anne gave birth to a daughter, Princess Elizabeth, in September 1533. The Pope responded to the marriage by excommunicating both Henry and Cranmer from the Roman Catholic Church (11 July 1533); Henry was excommunicated again in December 1538. In 1534, the Act of First Fruits and Tenths transferred the taxes on ecclesiastical income from the Pope to the Crown. The Act Concerning Peter's Pence and Dispensations outlawed the annual payment by landowners of one penny to the Pope. This Act also reiterated that England had "no superior under God, but only your Grace" and that Henry's "imperial crown" had been diminished by "the unreasonable and uncharitable usurpations and exactions" of the Pope.
In case any of this should be resisted, Parliament passed the Treasons Act 1534, which made it high treason, punishable by death, to deny Royal Supremacy. The following year, Thomas More and John Fisher were executed under this legislation. Finally, in 1536, Parliament passed the Act against the Pope's Authority, which removed the last part of papal authority still legal: Rome's power in England to decide disputes concerning Scripture. The break with Rome was not, by itself, a Reformation. The Lollards had been calling for radical theological change since the 14th century. Lollardy was a movement derived from the writings of John Wycliffe, a theologian and Bible translator, that stressed the primacy of Scripture; it was declared heretical by the church. Lollards emphasised the preaching of the word over the sacrament of the altar, holding the latter to be but a memorial. After the execution of Sir John Oldcastle, leader of the Lollard rebellion of 1415, they never again had access to the levers of power, and by the 15th century they were much reduced in numbers and influence. Yet many Lollards were still about, especially in London and the Thames Valley, in Essex and Kent, in Coventry and Bristol, and even in the North; they would be receptive to Protestant ideas when they came, and they looked for a reform in the lifestyle of the clergy. Other ideas critical of papal supremacy were held not only by Lollards but also by those who wished to assert the supremacy of the secular state over the church, and by conciliarists such as Thomas More and, initially, Cranmer. Other Roman Catholic reformists, including John Colet, Dean of St Paul's, warned that heretics were not nearly so great a danger to the faith as the wicked and indolent lives of the clergy. The views of the German reformer Martin Luther and his school were widely known and disputed in England. The main plank of his thinking, justification by faith alone rather than by good works, threatened the whole basis of the Roman Catholic penitential system, with its endowed masses and prayers for the dead as well as its doctrine of purgatory. In this view, faith, not pious acts, prayers or masses, secures the grace of God. Moreover, printing, which had become widespread at the end of the previous century, meant that vernacular Bibles could be produced in quantity. A further English translation, by William Tyndale, was banned, but it was impossible to prevent copies from being smuggled in and widely read. The Church could no longer effectively dictate its interpretation of Scripture. The first open demonstration of support for Luther took place at Cambridge in 1521, when a student defaced a copy of the papal bull of condemnation against Luther. A group of university students, which met at the White Horse tavern from the mid-1520s and became known as Little Germany, soon became influential. Its members included Robert Barnes, Hugh Latimer, John Frith and Thomas Bilney -- all eventually burned as heretics. As chancellor, Thomas More pursued an aggressive campaign against heresy. Between 1530 and 1533, Thomas Hitton (England's first Protestant martyr), Thomas Bilney, Richard Bayfield, John Tewkesbury, James Bainham, Thomas Benet, Thomas Harding, John Frith and Andrew Hewet were burned to death. In 1531, William Tracy was posthumously convicted of heresy for denying purgatory and affirming justification by faith, and his corpse was disinterred and burned.
While Protestants were only a small portion of the population, the growing rift between the king and the papacy gave them a new sense of confidence. Heretical ideas were openly discussed, and militant iconoclasm was seen in Essex and Suffolk between 1531 and 1532. Because they tended to oppose royal supremacy, traditionalists were on the defensive, while Protestants found opportunities to form new alliances with government officials. By 1529, Henry VIII was using the language of radical reform openly -- speaking of Rome's "vain and superfluous ceremonies" and blaming the papacy for numerous wars and heresies. At Anne Boleyn's urging, he was also reading Protestant books, such as Simon Fish's Supplication for the Beggars and Tyndale's The Obedience of a Christian Man. Borrowing from Luther, Tyndale argued that papal and clerical claims to independent power were unscriptural and that the king's "law is God's law". In 1531, Henry sought, through Robert Barnes, Luther's opinion on his annulment; the theologian did not approve. After Warham's death, Cranmer was made Archbishop of Canterbury (with papal consent) in 1533. Cranmer's shift to Protestantism grew partly out of his membership of the team negotiating for the annulment and was completed during his stay with Andreas Osiander in Nuremberg in 1532. (Cranmer also secretly married Osiander's niece.) Even then the position was complicated by the fact that the Lutherans were not in favour of the annulment. Cranmer (and Henry) felt obliged to seek assistance from Strasbourg and Basel, which brought him into contact with the more radical ideas associated with Huldrych Zwingli. In 1534, a new Heresy Act ensured that no one could be punished for speaking against the pope and also made it more difficult to convict someone of heresy. What followed was a period of doctrinal confusion, as both conservatives and reformers attempted to shape the church's future direction. The reformers were aided by Thomas Cromwell. In January 1535, the king made Cromwell his vicegerent in spirituals. Effectively the king's vicar general, Cromwell held authority greater than that of the bishops; even the Archbishop of Canterbury answered to him. Largely due to Anne Boleyn's influence, a number of Protestants were appointed bishops between 1534 and 1536. These included Latimer, Thomas Goodrich, John Salcot, Nicholas Shaxton, William Barlow, John Hilsey and Edward Foxe. Protestant bishops remained a minority, however. Cromwell's programme, assisted by Anne Boleyn's influence over episcopal appointments, was not merely directed against the clergy and the power of Rome. He persuaded Henry that safety from the political alliances that Rome might attempt to bring together lay in negotiations with the German Lutheran princes of the Schmalkaldic League. There also seemed to be a possibility that Charles V, the Holy Roman Emperor, might act to avenge his rejected aunt (Queen Catherine) and enforce the Pope's excommunication. The negotiations never came to anything, but they brought Lutheran ideas to England. In 1536, Convocation adopted the first doctrinal statement for the Church of England, the Ten Articles. This was followed by the Bishops' Book in 1537. These established a semi-Lutheran doctrine for the church. Justification by faith, qualified by an emphasis on good works following justification, was a core teaching. The traditional seven sacraments were reduced to three only -- baptism, Eucharist and penance.
Catholic teaching on praying to saints, purgatory and the use of images was undermined. More noticeable, and objectionable to many, were the Injunctions of 1536 and 1538. The programme began with the abolition of many feast days, "the occasion of vice and idleness", which, particularly at harvest time, had an immediate effect on village life. Offerings to images were discouraged, as were pilgrimages -- these injunctions were issued while the monasteries were being dissolved. Bibles in both English and Latin were to be placed in every church for the people to read, but this requirement was quietly ignored by bishops for a year or more. As the Reformation began to affect the towns and villages of England, in many places people did not like it. Some parishes took steps to conceal images and relics in order to rescue them from confiscation and destruction. The historian Diarmaid MacCulloch, in his study The Later Reformation in England, 1547-1603, argues that after 1537 "England's Reformation was characterized by its hatred of images, as Margaret Aston's work on iconoclasm and iconophobia has repeatedly and eloquently demonstrated." In 1534, Cromwell initiated a Visitation of the Monasteries, ostensibly to examine their character but in fact to value their assets with a view to expropriation. The Crown was undergoing financial difficulties, and the wealth of the church, in contrast to its political weakness, made appropriation of church property both tempting and feasible. Suppression of monasteries to raise funds was not unknown previously: Cromwell had done the same thing on the instructions of Cardinal Wolsey years before, to raise funds for two proposed colleges at Ipswich and Oxford. Now the Visitation allowed for an inventory of what the monasteries possessed, and the visiting commissioners claimed to have uncovered sexual immorality and financial impropriety amongst the monks and nuns, which became the ostensible justification for their suppression. The Church owned between one-fifth and one-third of the land in all England; Cromwell realised that he could bind the gentry and nobility to Royal Supremacy by selling to them the huge amount of Church lands, and that any reversion to pre-Royal Supremacy would entail upsetting many of the powerful people in the realm. For these various reasons the Dissolution of the Monasteries began in 1536 with the Dissolution of the Lesser Monasteries Act, affecting the smaller houses -- those valued at less than £200 a year. Henry used the revenue to help build coastal defences (see Device Forts) against expected invasion, and all the land was given to the Crown or sold to the aristocracy. Whereas the royal supremacy had raised few eyebrows, the attack on the abbeys and priories affected lay people directly. In several places, mobs attacked the suppression commissioners sent to break up the monastic buildings. In Northern England there was a series of uprisings by Roman Catholics against the dissolutions in late 1536 and early 1537. In the autumn of 1536 there was a great muster, reckoned at up to 40,000 in number, at Horncastle in Lincolnshire. The nervous gentry managed, with difficulty, to disperse these masses, who had tried unsuccessfully to negotiate with the king by petition. The Pilgrimage of Grace was a more serious matter: revolt spread through Yorkshire, and the rebels gathered at York.
Robert Aske, their leader, negotiated the restoration of sixteen of the twenty-six northern monasteries that had been dissolved. However, the promises made to the rebels by the Duke of Suffolk were ignored on the king's orders, and Suffolk was instructed to put the rebellion down. Forty-seven of the Lincolnshire rebels were executed, and 132 from the northern pilgrimage. Further rebellions took place in Cornwall in early 1537 and at Walsingham in Norfolk; these received similar treatment. It took Cromwell four years to complete the process. In 1539 he moved to the dissolution of the larger monasteries that had escaped earlier. Many houses gave up voluntarily, though some sought exemption by payment. When their houses were closed down, some monks sought to transfer to larger houses; many became secular priests. A few, including eighteen Carthusians, refused and were killed to the last man. Henry VIII personally devised a plan to form at least thirteen new dioceses so that most counties had one based on a former monastery (or more than one), though this scheme was only partly carried out. New dioceses were established at Bristol, Gloucester, Oxford, Peterborough, Westminster and Chester, but not, for instance, at Shrewsbury, Leicester or Waltham. The abolition of papal authority made way not for orderly change but for dissension and violence. Iconoclasm, destruction, disputes within communities that led to violence, and radical challenges to all forms of faith were reported daily to Cromwell -- developments he tried to hide from the King. Once Henry knew what was afoot, he acted. At the end of 1538, a proclamation was issued forbidding free discussion of the sacrament and forbidding clerical marriage, on pain of death. Henry personally presided at the trial of John Lambert in November 1538 for denying the real presence. At the same time, he shared in the drafting of a proclamation giving Anabaptists and Sacramentaries ten days to get out of the country. In 1539, Parliament passed the Six Articles, reaffirming Roman Catholic practices such as transubstantiation, clerical celibacy and the importance of confession to a priest, and prescribing penalties for anyone who denied them. Henry himself observed the Easter Triduum that year with some display. On 28 June 1540, Cromwell, Henry's longtime advisor and loyal servant, was executed. Different reasons were advanced: that Cromwell would not enforce the Act of Six Articles; that he had supported Barnes, Latimer and other heretics; and that he was responsible for Henry's marriage to Anne of Cleves, his fourth wife. Many other arrests under the Act followed. Cranmer lay low. In 1540, Henry began his attack upon the free availability of the Bible. In 1536, Cromwell had instructed each parish to acquire "one book of the whole Bible of the largest volume in English" by Easter 1539. This instruction had been largely ignored, so a new version, the Great Bible (largely William Tyndale's English translation of the Hebrew and Greek Scriptures), was authorised in August 1537. But by 1539 Henry had announced his desire to have it "corrected" (a task Cranmer referred to the universities). Many parishes were, in any case, reluctant to use English Bibles. The mood was now one of conservatism, which expressed itself in the fear that Bible reading led to heresy. Many Bibles that had been put in place were removed. By the 1543 Act for the Advancement of True Religion, Henry restricted Bible reading to men and women of noble birth.
He expressed his fears to Parliament in 1545 that "the Word of God, is disputed, rhymed, sung and jangled in every ale house and tavern, contrary to the true meaning and doctrine of the same." By 1546 the conservatives (the Duke of Norfolk, Wriothesley, Gardiner and Tunstall) were in the ascendant. They were, by the king's will, to be members of the regency council on his death. However, by the time Henry died in 1547, Edward Seymour, Earl of Hertford, brother of Jane Seymour, Henry's third wife (and therefore uncle to the future Edward VI), had managed -- by a number of alliances with influential Protestants such as Lisle -- to gain control over the Privy Council. He persuaded Henry to change his will to replace Norfolk, Wriothesley, Gardiner and Tunstall as executors with Seymour's own supporters. When Henry died in 1547, his nine-year-old son, Edward VI, inherited the throne. Edward was a precocious child who had been brought up as a Protestant, but he was initially of little account politically. Edward Seymour was made Lord Protector, commissioned as virtual regent with near-sovereign powers. Now made Duke of Somerset, he proceeded at first hesitantly, partly because his powers were not unchallenged; when he acted, it was because he saw the political advantage in doing so. The 1547 injunctions against images were a more tightly drawn version of those of 1538, but they were more fiercely enforced, at first informally and then by instruction. All images in churches were to be dismantled. Stained glass, shrines and statues were defaced or destroyed. Roods, and often their lofts and screens, were cut down, and bells were taken down. Vestments were prohibited and either burned or sold. Chalices were melted down or sold. The requirement that the clergy be celibate was lifted. Processions were banned, and ashes and palms were prohibited. Chantries (endowments to provide masses for the dead) were abolished completely. How well this was received is disputed: the modern historian A. G. Dickens contends that people had "ceased to believe in intercessory masses for souls in purgatory", while others, such as Eamon Duffy, argue that the demolition of chantry chapels and the removal of images coincided with the activity of royal visitors. The evidence is often ambiguous. In 1549, Cranmer introduced a Book of Common Prayer in English. While to all appearances it kept the structure of the Mass, it altered the theology so that the holy gifts of consecrated bread and wine were no longer offered to God as a sacrifice, although Cranmer was well aware that this offering had been the Church's doctrine since the late 2nd century. (It would be restored by the Scottish non-Jurors of the Episcopal Church of Scotland and by the Protestant Episcopal Church of the United States in 1789.) In 1550, stone altars were replaced by wooden communion tables, a very public break with the past that changed the look and focus of church interiors. Less visible, but still influential, was the new ordinal, which provided for Protestant ministers rather than Roman Catholic priests -- an admittedly conservative adaptation of Bucer's draft. Its Preface explicitly mentions the historic succession, but it has been described as "another case of Cranmer's opportunist adoption of medieval forms for new purposes." In 1551, the episcopate was remodelled by the appointment of Protestants to the bench, which removed, as an obstacle to change, the refusal of some bishops to enforce the regulations. Henceforth, the Reformation proceeded apace.
In 1552, the prayer book -- which the conservative Bishop Stephen Gardiner had approved from his prison cell as being "patient of a Catholic interpretation" -- was replaced by a second, much more radical prayer book that altered the service to remove any sense that the Eucharist was a material sacrifice offered to God, while keeping the belief that it was a sacrifice of thanksgiving and praise (in word). Edward's Parliament also repealed his father's Six Articles. The enforcement of the new liturgy did not always take place without a struggle. Conformity was the order of the day, but there were rebellions in East Anglia and in Devon, as also in Cornwall, to which many parishes sent their young men; they were put down only after considerable loss of life. In other places the causes of the rebellions were less easy to pin down, but by July there was "quavering quiet" throughout southern England, which burst out into "stirs" in many places, most significantly in Kett's Rebellion in Norwich. Apart from these more spectacular pieces of resistance, in some places chantry priests continued to say prayers and landowners continued to pay them to do so. Opposition to the removal of images was widespread -- so much so that when, during the Commonwealth, William Dowsing was commissioned to the task of image-breaking in Suffolk, his task, as he records it, was enormous. In Kent and the southeast, compliance was mostly willing, and for many the sale of vestments and plate was an opportunity to make money (though it was also true that in London and Kent, Reformation ideas had permeated more deeply into popular thinking). The effect of the resistance was to topple Somerset as Lord Protector, so that in 1549 it was feared by some that the Reformation would cease; the prayer book had been the tipping point. But Lisle, now made Earl of Warwick, was made Lord President of the Privy Council and, ever the opportunist (he died a public Roman Catholic), saw the further implementation of the reforming policy as a means of defeating his rivals. Outwardly, the destruction and the removals for sale had changed the church forever. Many churches concealed their vestments and their silver and buried their stone altars, and there were many disputes between the government and parishes over church property. Thus, when Edward died in July 1553 and the Duke of Northumberland attempted to have the Protestant Lady Jane Grey made Queen, the unpopularity of the confiscations gave Mary the opportunity to have herself proclaimed Queen, first in Suffolk and then in London, to the acclamation of the crowds. From 1553, under the reign of Henry's Roman Catholic daughter, Mary I, the Reformation legislation was repealed, and Mary sought to achieve reunion with Rome. Her first Act of Parliament retroactively validated Henry's marriage to her mother and so legitimised her claim to the throne. Achieving her objective was, however, not straightforward. The Pope was only prepared to accept reunion after church property disputes were settled -- which, in practice, meant letting those who had bought former church property keep it. Thus did Cardinal Reginald Pole arrive to become Archbishop of Canterbury in Cranmer's place. Mary could have had Cranmer, imprisoned as he was, tried and executed for treason -- he had supported the claims of Lady Jane Grey -- but she resolved to have him tried for heresy. His recantations of his Protestantism would have been a major coup.
Unhappily for her, he unexpectedly withdrew his recantations at the last minute, as he was about to be burned at the stake, thus ruining her government's propaganda victory. If Mary was to secure England for Roman Catholicism, she needed an heir. On the advice of the Holy Roman Emperor she married his son, Philip II of Spain; she needed to prevent her Protestant half-sister Elizabeth from inheriting the Crown and thus returning England to Protestantism. There was opposition, and even a rebellion in Kent (led by Sir Thomas Wyatt), even though it was provided that Philip would never inherit the kingdom if there were no heir, would receive no estates and would have no coronation: he was there to provide an heir. But she never became pregnant, and she likely suffered from cancer. Then another blow fell. Pope Julius died, and his successor, Pope Paul IV, declared war on Philip and recalled Pole to Rome to have him tried as a heretic. Mary refused to let him go, and the support she might have expected from a grateful Pope was thus denied. After 1555, the initial reconciling tone of the regime began to harden. The medieval heresy laws were restored, the Marian Persecutions of Protestants ensued, and 283 Protestants were burnt at the stake for heresy. This resulted in the Queen becoming known as Bloody Mary, due largely to the influence of John Foxe, one of the Protestants who fled Marian England. Foxe's Book of Martyrs recorded the executions in such detail that it became Mary's epitaph; Convocation subsequently ordered that Foxe's book be placed in every cathedral in the land. In fact, those executed after the revolts of 1536 and the St David's Down rebellion of 1549, together with the unknown number of monks who died for refusing to submit, may not have been tried for heresy, but they certainly exceeded that number by some amount. Even so, the heroism of some of the martyrs was an example to those who witnessed them, so that in some places it was the burnings that set people against the regime. There was a slow consolidation of Roman Catholic strength in Mary's latter years. The reconciled Roman Catholic Edmund Bonner, Bishop of London, produced a catechism and a collection of homilies. Printing presses produced primers and other devotional materials, and recruitment to the English clergy began to rise after almost a decade. Repairs to long-neglected churches began. In the parishes "... restoration and repair continued, new bells were bought, and churches' ales produced their bucolic profits." Commissioners visited to ensure that altars were restored, roods rebuilt, and vestments and plate purchased. Moreover, Pole was determined to do more than remake the past: he insisted on scripture, teaching and education, and on improving the clergy's moral standards. It is difficult to determine how far the previous reigns had broken Roman Catholic devotion, with its belief in the saints and in purgatory, but certainties -- especially those that drew public financial support -- had been shaken. Benefactions to the church did not return significantly. Trust in clergy who had changed their minds and were now willing to leave their new wives -- as they were required to do -- was bound to have weakened. Few monasteries, chantries and guilds were reinstated. "Parish religion was marked by religious and cultural sterility", though some have observed enthusiasm, marred only by poor harvests that produced poverty and want.
Full restoration of the Roman Catholic faith in England to its pre-Reformation state would take time. Consequently, Protestants secretly ministering to underground congregations, such as Thomas Bentham, were planning for a long haul, a ministry of survival. Mary's death in November 1558, childless and without having made provision for a Roman Catholic to succeed her, would undo her consolidation. Following Mary's childless death, her half-sister Elizabeth inherited the throne. One of the most important concerns during Elizabeth's early reign was religion. Elizabeth could not be Roman Catholic, as that church considered her illegitimate; at the same time, she had observed the turmoil brought about by Edward's introduction of radical Protestant reforms. Communion with the Roman Catholic Church was again severed by Elizabeth. She relied primarily on her chief advisors, Sir William Cecil as her Secretary of State and Sir Nicholas Bacon as Lord Keeper of the Great Seal, for direction on the matter. Chiefly she supported her father's idea of reforming the church, but she made some minor adjustments. In this way, Elizabeth and her advisors aimed at a church that included most opinions. Two groups were excluded. Roman Catholics who remained loyal to the Pope after he excommunicated the Queen in 1570 would not be tolerated; they were, in fact, regarded as traitors, because the Pope had refused to accept Elizabeth as Queen of England. Roman Catholics were given the hard choice of being loyal either to their church or to their country; for some priests it meant life on the run, and in some cases death for treason. The other group not to be tolerated comprised people who wanted reform to go much further and who finally gave up on the Church of England. They could no longer see it as a true church; they believed it had refused to obey the Bible, so they formed small groups of convinced believers outside the church. The government responded with imprisonment and exile in trying to crush these "separatists". The Church of England itself contained three groups. Those who believed the form of the church was just what it should be included leaders like John Jewel and Richard Hooker. A second group looked for opportunities to reintroduce some Roman Catholic practices; under the Stuart kings they would have their chance. The third group, who came to be called Puritans, wanted to remove the remaining traces of the old ways; the Stuart kings were to give them a rough passage. At the end of Elizabeth's reign, the Church of England was firmly in place, but it held the seeds of future conflict. Parliament was summoned in 1559 to consider the Reformation Bill and to create a new church. The Reformation Bill defined the Communion as a consubstantial celebration as opposed to a transubstantial celebration, included abuse of the pope in the litany, and ordered that ministers (meaning ordained clergy) should not wear the surplice or other Roman Catholic vestments. It allowed the clergy -- deacons, priests and bishops -- to marry, banned images from churches, and confirmed Elizabeth as Supreme Governor of the Church of England. The Bill met heavy resistance in the House of Lords, where Roman Catholic bishops as well as lay peers voted against it. They reworked much of the Bill, changed the litany to allow for a transubstantial belief in the Communion, and refused to grant Elizabeth the title of Supreme Head of the Church.
Parliament was prorogued over Easter, and when it resumed, the government entered two new bills into the Houses: the Act of Supremacy and the Act of Uniformity. The Act of Supremacy made null and void (with certain specific exceptions) the Marian act of 1554 that had repealed all of Henry VIII's legislation from 1529 onwards -- legislation which had denied the authority of the See of Rome -- and it also confirmed Elizabeth as Supreme Governor of the Church of England. "Supreme Governor" was a suitably equivocal title that made Elizabeth head of the Church without ever saying she was. This was important for two reasons: (1) it satisfied those who felt that a woman could not rule the church, and (2) it acted in a conciliatory way toward English Roman Catholics. For the clergy, Elizabeth's changes were more wholesale than those of her half-brother Edward had been. All but one of the bishops (Anthony Kitchin) lost their posts, a hundred fellows of Oxford colleges were deprived, and many dignitaries resigned rather than take the oath. The bishops who were removed from the ecclesiastical bench were replaced by appointees who would agree to the reforms. Since the government was concerned that the continuity of Orders should be preserved without a break, Matthew Parker was consecrated Archbishop of Canterbury by two bishops who had been consecrated in the mid-1530s using the Pontifical and two consecrated with the English Ordinal of 1550. On the question of images, Elizabeth's initial reaction was to allow crucifixes and candlesticks and the restoration of roods, but some of the new bishops whom she had elevated protested. In 1560 Edmund Grindal, one of the Marian exiles now made Bishop of London, was allowed to enforce the demolition of rood lofts in London, and in 1561 the Queen herself ordered the demolition of all lofts. Thereafter, the determination to prevent any further restoration was evidenced by the more thoroughgoing destruction of roods, vestments, stone altars, dooms, statues and other ornaments. The Queen also appointed a new Privy Council, removing many Roman Catholic counsellors in doing so. Under Elizabeth, factionalism in the Council and conflicts at court greatly diminished. The Act of Supremacy was passed without difficulty. The Act of Uniformity 1558, which forced people to attend Sunday service in a parish church using a new version of the Book of Common Prayer, passed by only three votes. The Bill of Uniformity was more cautious than the initial Reformation Bill: it revoked the harsh laws proposed against Roman Catholics, removed the abuse of the pope from the litany, and kept wording that allowed for both consubstantial and transubstantial interpretations of the presence of Christ in the Eucharist without making any declaration about the matter (transubstantiation is actually condemned in the Thirty-Nine Articles). After Parliament was dismissed, Elizabeth and Cecil drafted the Royal Injunctions. These were additions to the settlement, and they largely stressed continuity with the Catholic past -- clergy were ordered to wear the surplice, and the use of the cope was allowed in cathedrals and collegiate chapels -- especially since all the clergy at her accession to the throne were Roman Catholic. Men were ordained to the three traditional orders of deacon, priest and bishop, and are so referred to in the Prayer Book rites. The Ornaments Rubric states that the ornaments of the church and its ministers shall remain as they were in the second year of the reign of Edward VI, i.e.
in 1548, when the Mass was still celebrated (the Oxford Movement in the 19th century interpreted this as permission to wear chasubles, dalmatics and other vestments). Wafers, as opposed to ordinary baker's bread, were to be used as the bread at Communion, and Communion was to be taken kneeling. The Black Rubric, which denied the real and essential presence of Christ in the consecrated elements but allowed kneeling as long as this act did not imply adoration, was removed at the Queen's insistence. There had been opposition to the settlement in rural England, which for the most part was largely Roman Catholic, so the changes aimed at acceptance of the settlement. What succeeded more than anything else was the sheer length of Elizabeth's reign: while Mary had been able to impose her programme for a mere five years, Elizabeth had forty-five. Those who delayed, "looking for a new day" when restoration would again be commanded, were defeated by the passing of years. Elizabeth's reign saw the emergence of Puritanism, which encompassed those Protestants who, while they agreed that there should be one national church, felt that the church had been but partially reformed. Puritanism ranged from hostility to the content of the Prayer Book and "popish" ceremony to a desire that church governance be radically reformed. Grindal was made Archbishop of Canterbury in 1575 and chose to oppose even the Queen in his desire to forward the Puritan agenda. He ended a 6,000-word reproach to her with, "Bear with me, I beseech you Madam, if I choose rather to offend your earthly majesty than to offend the heavenly majesty of God." He was placed under house arrest for his trouble, and though he was not deprived, his death in 1583 put an end to the hopes of his supporters. Grindal's successor, Archbishop Whitgift, better reflected the Queen's determination to discipline those who were unprepared to accept her settlement. A conformist, he imposed a degree of obedience on the clergy that apparently alarmed even the Queen's ministers, such as Lord Burghley. The Puritan cause was not helped even by its friends. The pseudonymous "Martin Marprelate" tracts, which attacked conformist clergy in a libellous, humorous tone, outraged senior Puritan clergy and set the government on an unsuccessful attempt to run the writer to earth. The defeat of the Spanish Armada in 1588 incidentally made it more difficult for Puritans to resist the conclusion that, since God "blew with his wind and they were scattered", he could not be too offended by the religious establishment in the land. On the other side, there were still huge numbers of Roman Catholics. Some conformed, bending with the times, hoping that there would be a fresh reverse. Vestments were still hidden, golden candlesticks bequeathed, chalices kept. The Mass was still celebrated in some places alongside the new Communion service, but this was more difficult than before. Both Roman Catholic priests and laity lived a double life, apparently conforming but avoiding taking the oath of conformity. Only as time passed did recusancy -- refusal to attend Protestant services -- become more common. Jesuits and seminary priests, trained in Douai and Rome to make good the losses of English priests, encouraged this. By the 1570s an underground church was growing fast, as the Church of England became more Protestant and less bearable for Roman Catholics, who were still a sizeable minority. Only one public attempt to restore the old religion occurred: the Rising of the Northern Earls in 1569.
It was a botched attempt: in spite of the tumultuous crowds who greeted the rebels in Durham, the rebellion did not spread, the assistance the rebels sought did not materialise, and their communication with allies at Court was poor. They came nowhere near to freeing Mary Stuart, whose presence might have rallied support, from her imprisonment in Tutbury. The Roman Catholic Church's refusal to countenance occasional attendance at Protestant services, as well as the excommunication of Elizabeth by Pope Pius V in 1570, presented the choice to Roman Catholics more starkly. The arrival of the seminary priests, while a lifeline to many Roman Catholics, brought further trouble. Elizabeth's ministers took steps to stem the tide: fines for refusal to attend church were raised from 12d per service to £20 a month, fifty times an artisan's wage; it was now treason to be absolved from schism and reconciled to Rome; and the execution of priests began -- the first in 1577, four in 1581, eleven in 1582, two in 1583, six in 1584, fifty-three by 1590, and seventy more between 1601 and 1608. It also became treasonable for a Roman Catholic priest ordained abroad to enter the country. Because the papacy had called for the deposing of the Queen, the choice for moderate Roman Catholics lay between treason and damnation. The list of Catholic martyrs of the English Reformation was extensive. There is, however, some distance between legislation and its enforcement. The governmental attacks on recusancy fell mostly upon the gentry. Few recusants were actually fined; the fines that were imposed were often at reduced rates; the persecution eased; and priests came to recognise that they should not refuse communion to occasional conformists. The persecutions did not extinguish the faith, but they tested it sorely. The huge number of Roman Catholics in East Anglia and the North in the 1560s disappeared into the general population, in part because recusant priests largely served the great Roman Catholic houses, which alone could hide them. Without the Mass and pastoral care, yeomen, artisans and husbandmen fell into conformism. Roman Catholicism, supported by foreign or expatriate priests, came to be seen as treasonous. By the time of Elizabeth's death a third party had emerged, "perfectly hostile" to Puritans but not adherent to Rome. It preferred the revised Book of Common Prayer of 1559, which was without some of the matters offensive to Roman Catholics. The recusants had been removed from the centre of the stage. The new dispute was now between the Puritans (who wished to see an end of the prayer book and episcopacy) and this third party (the considerable body of people who looked kindly on the Elizabethan Settlement, who rejected prophesyings, whose spirituality had been nourished by the Prayer Book and who preferred the governance of bishops). It was between these two groups that, after Elizabeth's death in 1603, a new and more savage episode of the Reformation was in gestation. During the reigns of the Stuart kings, James I and Charles I, the battle lines were to become more defined, leading ultimately to the English Civil War, the first on English soil to engulf parts of the civilian population. The war was only partly about religion, but the abolition of the prayer book and episcopacy by a Puritan Parliament was an element in the causes of the conflict.
As the historian Diarmaid MacCulloch has noted, the legacy of these tumultuous events can be recognised throughout the Commonwealth (1649-60) and the Restoration that followed it, and beyond. This third party was to become the core of the restored Church of England, but at the price of further division. The historiography of the English Reformation has seen vigorous clashes among dedicated protagonists and scholars for five centuries. The main factual details at the national level have been clear since 1900, as laid out, for example, by James Anthony Froude and Albert Pollard. Reformation historiography has seen many schools of interpretation, with Protestant, Catholic and Anglican historians using their own religious perspectives. In addition, there has been a highly influential Whig interpretation, based on liberal secularized Protestantism, that depicted the Reformation in England, in the words of Ian Hazlett, as "the midwife delivering England from the Dark Ages to the threshold of modernity, and so a turning point of progress". Finally, among the older schools was a neo-Marxist interpretation that stressed the economic decline of the old elites and the rise of the landed gentry and middle classes. All these approaches still have representatives, but the main thrust of scholarly historiography since the 1970s falls into four groupings or schools, according to Hazlett. Geoffrey Elton leads the first faction, with an agenda rooted in political historiography. It concentrates on the top of the early modern church-state, looking at the mechanics of policymaking and the organs of its implementation and enforcement. The key player for Elton was not Henry VIII, but rather his principal Secretary of State, Thomas Cromwell. Elton downplays the prophetic spirit of the religious reformers and the theology of keen conviction, dismissing them as meddlesome intrusions from fanatics and bigots. Secondly, a primarily religious perspective has motivated A. G. Dickens and others, who prioritize the religious and subjective side of the movement. While recognizing that the Reformation was imposed from the top, just as it was everywhere else in Europe, they argue that it also responded to aspirations from below. Dickens has been criticized for underestimating the strength of residual and revived Roman Catholicism, but he has been praised for his demonstration of the close ties to European influences. In the Dickens school, David Loades has stressed the theological importance of the Reformation for Anglo-British development. Revisionists comprise a third school, led by Christopher Haigh, Jack Scarisbrick and numerous other scholars. Their main achievement was the discovery of an entirely new corpus of primary sources at the local level, leading them to emphasize the Reformation as it played out on a daily and local basis, with much less emphasis on control from the top. Turning away from elite sources, they emphasize local parish records, diocesan files, guild records, data from boroughs, the courts, and especially telltale individual wills. Finally, Patrick Collinson and others have brought much more precision to the theological landscape, identifying Calvinist Puritans who were impatient with Anglican caution and compromise. Indeed, the Puritans were a distinct subgroup who did not comprise all of Calvinism. The Church of England thus emerged as a coalition of factions, all of them of Protestant inspiration. All the recent schools have decentered Henry VIII and minimized hagiography.
They have paid more attention to localities, Catholicism, radicals, and theological niceties. On Catholicism, the older schools overemphasized Thomas More (1478-1535), to the neglect of other bishops and factors inside Catholicism. The older schools also too often concentrated on elite London; the newer ones look to the English villages.
what idea was used to justify u.s. foreign policy during the cold war era
Foreign policy of the United States - Wikipedia The foreign policy of the United States is its interactions with foreign nations and how it sets standards of interaction for its organizations, corporations and citizens of the United States. The officially stated goals of the foreign policy of the United States, including all the Bureaus and Offices in the United States Department of State, as mentioned in the Foreign Policy Agenda of the Department of State, are "to build and sustain a more democratic, secure, and prosperous world for the benefit of the American people and the international community." In addition, the United States House Committee on Foreign Affairs states as some of its jurisdictional goals: "export controls, including nonproliferation of nuclear technology and nuclear hardware; measures to foster commercial interaction with foreign nations and to safeguard American business abroad; international commodity agreements; international education; and protection of American citizens abroad and expatriation." U.S. foreign policy and foreign aid have been the subject of much debate, praise and criticism, both domestically and abroad. Subject to the advice-and-consent role of the U.S. Senate, the President of the United States negotiates treaties with foreign nations, but treaties enter into force only if ratified by two-thirds of the Senate. The President is also Commander in Chief of the United States Armed Forces, and as such has broad authority over the armed forces. Both the Secretary of State and ambassadors are appointed by the President, with the advice and consent of the Senate. The United States Secretary of State acts similarly to a foreign minister and, under Executive leadership, is the primary conductor of state-to-state diplomacy. Congress is the only branch of government that has the authority to declare war. Furthermore, Congress writes the civilian and military budget and thus has vast power over military action and foreign aid. Congress also has the power to regulate commerce with foreign nations. The main trend in the history of U.S. foreign policy since the American Revolution is the shift from non-interventionism before and after World War I to its growth as a world power and global hegemon during and since World War II and the end of the Cold War in the 20th century. Since the 19th century, U.S. foreign policy has also been characterized by a shift from the realist school to the idealistic or Wilsonian school of international relations. Foreign policy themes were expressed considerably in George Washington's farewell address; these included, among other things, observing good faith and justice towards all nations and cultivating peace and harmony with all, excluding both "inveterate antipathies against particular nations, and passionate attachments for others", "steer(ing) clear of permanent alliances with any portion of the foreign world", and advocating trade with all nations. These policies became the basis of the Federalist Party in the 1790s, but the rival Jeffersonians feared Britain and favored France in the 1790s, and went on to declare the War of 1812 against Britain. After the 1778 alliance with France, the U.S. did not sign another permanent treaty until the North Atlantic Treaty in 1949. Over time, other themes, key goals, attitudes, or stances have been variously expressed by presidential "doctrines", named for the presidents who articulated them. Initially these were uncommon events, but since World War II most presidents have issued one.
Jeffersonians vigorously opposed a large standing army and any navy until attacks against American shipping by Barbary corsairs spurred the country into developing a naval force-projection capability, resulting in the First Barbary War in 1801. Despite two wars with European powers -- the War of 1812 and the Spanish-American War of 1898 -- American foreign policy was peaceful and marked by steady expansion of foreign trade during the 19th century. The 1803 Louisiana Purchase doubled the nation's geographical area; Spain ceded the territory of Florida in 1819; annexation brought in the independent Texas Republic in 1845; and a war with Mexico ending in 1848 added California, Arizona, Utah, Nevada, and New Mexico. The U.S. bought Alaska from the Russian Empire in 1867, and it annexed the independent Republic of Hawaii in 1898. Victory over Spain in 1898 brought the Philippines and Puerto Rico, as well as oversight of Cuba. The short experiment in imperialism ended by 1908, as the U.S. turned its attention to the Panama Canal and the stabilization of regions to its south, including Mexico. The 20th century was marked by two world wars in which the Allied powers, along with the United States, defeated their enemies, and through this participation the United States increased its international reputation. President Wilson's Fourteen Points were developed from his idealistic Wilsonian program of spreading democracy and fighting militarism so as to end all wars. They became the basis of the German Armistice (which amounted to a military surrender) and of the 1919 Paris Peace Conference. The resulting Treaty of Versailles, owing to the European allies' punitive and territorial designs, showed insufficient conformity with these points, and the U.S. signed separate treaties with each of its adversaries; because of Senate objections, the U.S. also never joined the League of Nations, which had been established as a result of Wilson's initiative. In the 1920s, the United States followed an independent course and succeeded in a program of naval disarmament and in refunding the German economy. Operating outside the League, it became a dominant player in diplomatic affairs. New York became the financial capital of the world, but the Wall Street Crash of 1929 hurled the Western industrialized world into the Great Depression. American trade policy relied on high tariffs under the Republicans and reciprocal trade agreements under the Democrats, but in any case exports were at very low levels in the 1930s. The United States adopted a non-interventionist foreign policy from 1932 to 1938, but then President Franklin D. Roosevelt moved toward strong support of the Allies in their wars against Germany and Japan. As a result of intense internal debate, the national policy was one of becoming the Arsenal of Democracy -- that is, financing and equipping the Allied armies without sending American combat soldiers. Roosevelt mentioned four fundamental freedoms which ought to be enjoyed by people "everywhere in the world"; these included the freedom of speech and religion, as well as freedom from want and fear. Roosevelt helped establish terms for a post-war world among potential allies at the Atlantic Conference; specific points were included to correct earlier failures, and this became a step toward the United Nations. American policy was to threaten Japan, to force it out of China, and to prevent it from attacking the Soviet Union.
However, Japan reacted with an attack on Pearl Harbor in December 1941, and the United States was at war with Japan, Germany, and Italy. Instead of the loans given to allies in World War I, the United States provided Lend-Lease grants of $50,000,000,000. Working closely with Winston Churchill of Britain and Joseph Stalin of the Soviet Union, Roosevelt sent his forces into the Pacific against Japan, then into North Africa against Italy and Germany, and finally into Europe, starting with France and Italy in 1944, against the Germans. The American economy roared forward, doubling industrial production and building vast quantities of airplanes, ships, tanks, munitions, and, finally, the atomic bomb. Much of the American war effort went to strategic bombers, which flattened the cities of Japan and Germany. After the war, the U.S. rose to become the dominant non-colonial economic power, with broad influence in much of the world through the key policies of the Marshall Plan and the Truman Doctrine. Almost immediately, however, the world witnessed division into two broad camps during the Cold War: one side was led by the U.S. and the other by the Soviet Union, though this situation also led to the establishment of the Non-Aligned Movement. The period lasted until almost the end of the 20th century and is thought to be both an ideological and a power struggle between the two superpowers. A policy of containment was adopted to limit Soviet expansion, and a series of proxy wars were fought with mixed results. In 1991, the Soviet Union dissolved into separate nations, and the Cold War formally ended as the United States gave separate diplomatic recognition to the Russian Federation and other former Soviet states. In domestic politics, foreign policy is not usually a central issue. From 1945 to 1970 the Democratic Party took a strong anti-Communist line and supported wars in Korea and Vietnam; then the party split, with a strong "dovish", pacifist element (typified by the 1972 presidential candidate George McGovern). Many "hawks", advocates for war, joined the neoconservative movement and started supporting the Republicans -- especially Reagan -- based on foreign policy. Meanwhile, down to 1952 the Republican Party was split between an isolationist wing, based in the Midwest and led by Senator Robert A. Taft, and an internationalist wing, based in the East and led by Dwight D. Eisenhower. Eisenhower defeated Taft for the 1952 nomination largely on foreign policy grounds. Since then the Republicans have been characterized by hawkish and intense American nationalism, strong opposition to Communism, and strong support for Israel. In the 21st century, U.S. influence remains strong but, in relative terms, is declining in economic output compared to rising nations such as China, India, Russia, and the newly consolidated European Union. Substantial problems remain, such as climate change, nuclear proliferation, and the specter of nuclear terrorism. The foreign policy analysts Hachigian and Sutphen, in their book The Next American Century, suggest that all five powers have similar vested interests in stability, terrorism prevention, and trade; if they can find common ground, the next decades may be marked by peaceful growth and prosperity. In 2017 diplomats from other countries developed new tactics to deal with President Donald Trump, as The New York Times reported on the eve of his first foreign trip as president. Trump has numerous aides giving advice on foreign policy.
The chief diplomat was Secretary of State Rex Tillerson, whose major foreign policy positions sometimes were at odds with Trump's. In the United States, there are three types of treaty-related law. In contrast to most other nations, the United States considers the three types of agreements as distinct, and it incorporates treaty law into the body of U.S. federal law. As a result, Congress can modify or repeal treaties afterward; it can overrule an agreed-upon treaty obligation even if this is seen as a violation of the treaty under international law. Several U.S. court rulings confirmed this understanding, including Supreme Court decisions in Paquete Habana v. the United States (1900) and Reid v. Covert (1957), as well as a lower-court ruling in Garcia-Mir v. Meese (1986). Further, the Supreme Court has declared itself as having the power to rule a treaty void by declaring it "unconstitutional", although as of 2011 it had never exercised this power. The State Department has taken the position that the Vienna Convention on the Law of Treaties represents established law. Generally, when the U.S. signs a treaty, it is binding. However, as a result of the Reid v. Covert decision, the U.S. adds a reservation to the text of every treaty that says, in effect, that the U.S. intends to abide by the treaty, but that if the treaty is found to be in violation of the Constitution, then the U.S. legally cannot abide by it, since the U.S. signature would be ultra vires. The United States has ratified and participates in many other multilateral treaties, including arms control treaties (especially with the Soviet Union), human rights treaties, environmental protocols, and free trade agreements. The United States is a founding member of the United Nations and most of its specialized agencies, notably including the World Bank Group and the International Monetary Fund. The U.S. has at times withheld payment of dues due to disagreements with the UN. The United States is also a member of numerous other international organizations. After it captured the islands from Japan during World War II, the United States administered the Trust Territory of the Pacific Islands from 1947 to 1986 (1994 for Palau). The Northern Mariana Islands became a U.S. territory (part of the United States), while the Federated States of Micronesia, the Marshall Islands, and Palau became independent countries. Each has signed a Compact of Free Association that gives the United States exclusive military access in return for U.S. defense protection and conduct of military foreign affairs (except the declaration of war), and a few billion dollars of aid. These agreements also generally allow citizens of these countries to live and work in the United States with their spouses (and vice versa), and they provide for largely free trade. The federal government also grants access to services from domestic agencies, including the Federal Emergency Management Agency, the National Weather Service, the United States Postal Service, the Federal Aviation Administration, the Federal Communications Commission, and U.S. representation to the International Frequency Registration Board of the International Telecommunication Union. The United States notably does not participate in various international agreements adhered to by almost all other industrialized countries, by almost all the countries of the Americas, or by almost all other countries in the world.
With a large population and economy, on a practical level this can undermine the effect of certain agreements, or give other countries a precedent to cite for non-participation in various agreements. In some cases the arguments against participation include that the United States should maximize its sovereignty and freedom of action, or that ratification would create a basis for lawsuits that would treat American citizens unfairly. In other cases, the debate has become involved in domestic political issues, such as gun control, climate change, and the death penalty. While America's relationships with Europe have tended to be set in multilateral frameworks such as NATO, America's relations with Asia have tended to be based on a "hub and spoke" model, using a series of bilateral relationships in which states coordinate with the United States and do not collaborate with each other. On May 30, 2009, at the Shangri-La Dialogue, Defense Secretary Robert M. Gates urged the nations of Asia to build on this hub-and-spoke model as they established and grew multilateral institutions such as ASEAN, APEC and the ad hoc arrangements in the area. However, in 2011 Gates said that the United States must serve as the "indispensable nation" for building multilateral cooperation. As of 2014, the U.S. produces about 66% of the oil that it consumes. While its imports have exceeded domestic production since the early 1990s, new hydraulic fracturing techniques and the discovery of shale oil deposits in Canada and the American Dakotas offer the potential for increased energy independence from oil-exporting countries such as the members of OPEC. Former U.S. President George W. Bush identified dependence on imported oil as an urgent "national security concern". Two-thirds of the world's proven oil reserves are estimated to be found in the Persian Gulf. Despite its distance, the Persian Gulf region was first proclaimed to be of national interest to the United States during World War II. Petroleum is of central importance to modern armies, and the United States -- as the world's leading oil producer at that time -- supplied most of the oil for the Allied armies. Many U.S. strategists were concerned that the war would dangerously reduce the U.S. oil supply, and so they sought to establish good relations with Saudi Arabia, a kingdom with large oil reserves. The Persian Gulf region continued to be regarded as an area of vital importance to the United States during the Cold War. Three Cold War United States presidential doctrines -- the Truman Doctrine, the Eisenhower Doctrine, and the Nixon Doctrine -- played roles in the formulation of the Carter Doctrine, which stated that the United States would use military force if necessary to defend its "national interests" in the Persian Gulf region. Carter's successor, President Ronald Reagan, extended the policy in October 1981 with what is sometimes called the "Reagan Corollary to the Carter Doctrine", which proclaimed that the United States would intervene to protect Saudi Arabia, whose security was threatened after the outbreak of the Iran-Iraq War. Some analysts have argued that the implementation of the Carter Doctrine and the Reagan Corollary also played a role in the outbreak of the 2003 Iraq War. Almost all of Canada's energy exports go to the United States, making Canada the largest foreign source of U.S. energy imports: Canada is consistently among the top sources for U.S. oil imports, and it is the largest source of U.S.
natural gas and electricity imports. In 2007 the U.S. was Sub-Saharan Africa's largest single export market, accounting for 28% of the region's exports (second in total to the EU at 31%); 81% of U.S. imports from the region were petroleum products. Foreign assistance is a core component of the State Department's international affairs budget, which was $49 billion in all for 2014. Aid is considered an essential instrument of U.S. foreign policy. There are four major categories of non-military foreign assistance: bilateral development aid, economic assistance supporting U.S. political and security goals, humanitarian aid, and multilateral economic contributions (for example, contributions to the World Bank and the International Monetary Fund). In absolute dollar terms, the United States government is the largest international aid donor ($23 billion in 2014). The U.S. Agency for International Development (USAID) manages the bulk of bilateral economic assistance; the Treasury Department handles most multilateral aid. In addition, many private agencies, churches and philanthropies provide aid. Although the United States is the largest donor in absolute dollar terms, it is actually ranked 19th out of 27 countries on the Commitment to Development Index, which ranks the 27 richest donor countries on the policies that affect the developing world. In the aid component, the United States is penalized for low net aid volume as a share of the economy, a large share of tied or partially tied aid, and a large share of aid given to less poor and relatively undemocratic governments. Foreign aid is a highly partisan issue in the United States, with liberals, on average, supporting foreign aid much more than conservatives do. As of 2016, the United States was actively conducting military operations against the Islamic State of Iraq and the Levant and al-Qaeda under the Authorization for Use of Military Force Against Terrorists, including in areas of fighting in the Syrian Civil War and the Yemeni Civil War. The Guantanamo Bay Naval Base holds what the federal government considers unlawful combatants from these ongoing activities, and it has been a controversial issue in foreign relations, domestic politics, and Cuba-United States relations. Other major U.S. military concerns include stability in Afghanistan and Iraq after the recent invasions of those countries, and Russian military activity in Ukraine. The United States is a founding member of NATO, an alliance of 29 North American and European nations formed to defend Western Europe against the Soviet Union during the Cold War. Under the NATO charter, the United States is compelled to defend any NATO state that is attacked by a foreign power; the United States itself was the first country to invoke the mutual defense provisions of the alliance, in response to the September 11 attacks. The United States also has mutual military defense treaties with a number of other nations, and it has responsibility for the defense of the three Compact of Free Association states: the Federated States of Micronesia, the Marshall Islands, and Palau. In 1989, the United States also granted five nations major non-NATO ally (MNNA) status, and additions by later presidents have brought the list to 28 nations. Each such state has a unique relationship with the United States, involving various military and economic partnerships, alliances and lesser agreements. The U.S. also participates in various military-related multilateral organizations. The U.S.
also operates hundreds of military bases around the world. The United States has undertaken unilateral and multilateral military operations throughout its history (see Timeline of United States military operations). In the post-World War II era, the country has had permanent membership and veto power in the United Nations Security Council, allowing it to undertake any military action without formal Security Council opposition. With vast military expenditures, the United States is known as the sole remaining superpower after the collapse of the Soviet Union. The U.S. contributes a relatively small number of personnel to United Nations peacekeeping operations. It sometimes acts through NATO, as with the NATO intervention in Bosnia and Herzegovina, the NATO bombing of Yugoslavia, and ISAF in Afghanistan, but it often acts unilaterally or in ad hoc coalitions, as with the 2003 invasion of Iraq. The United Nations Charter requires that military operations be either for self-defense or affirmatively approved by the Security Council. Though many of their operations have followed these rules, the United States and NATO have been accused of committing crimes against peace under international law, for example in the 1999 Yugoslavia and 2003 Iraq operations. The U.S. provides military aid through many different channels. Counting the items that appear in the budget as "Foreign Military Financing" and "Plan Colombia", the U.S. spent approximately $4.5 billion in military aid in 2001, of which $2 billion went to Israel, $1.3 billion to Egypt, and $1 billion to Colombia. Since 9/11, Pakistan has received approximately $11.5 billion in direct military aid. As of 2004, according to Fox News, the U.S. had more than 700 military bases in 130 different countries. According to a 2016 report by the Congressional Research Service, the U.S. topped the market in global weapons sales for 2015, with $40 billion sold; the largest buyers were Qatar, Egypt, Saudi Arabia, South Korea, Pakistan, Israel, the United Arab Emirates and Iraq. The Strategic Defense Initiative (SDI) was a proposal by U.S. President Ronald Reagan on March 23, 1983 to use ground- and space-based systems to protect the United States from attack by strategic nuclear ballistic missiles; it was later dubbed "Star Wars". The initiative focused on strategic defense rather than the prior strategic-offense doctrine of mutual assured destruction (MAD). Though it was never fully developed or deployed, the research and technologies of SDI paved the way for some of the anti-ballistic missile systems of today. In February 2007, the U.S. started formal negotiations with Poland and the Czech Republic concerning construction of missile shield installations in those countries for a Ground-Based Midcourse Defense system (in April 2007, 57% of Poles opposed the plan). According to press reports, the government of the Czech Republic agreed (while 67% of Czechs disagreed) to host a missile defense radar on its territory, while a base of missile interceptors was to be built in Poland. Russia threatened to place short-range nuclear missiles on its border with NATO if the United States refused to abandon plans to deploy ten interceptor missiles and a radar in Poland and the Czech Republic. In April 2007, Putin warned of a new Cold War if the Americans deployed the shield in Central Europe.
Putin also said that Russia was prepared to abandon its obligations under the Intermediate-Range Nuclear Forces Treaty of 1987 with the United States. On August 14, 2008, the United States and Poland announced a deal to implement the missile defense system in Polish territory, with a tracking system placed in the Czech Republic. "The fact that this was signed in a period of very difficult crisis in the relations between Russia and the United States over the situation in Georgia shows that, of course, the missile defense system will be deployed not against Iran but against the strategic potential of Russia", said Dmitry Rogozin, Russia's NATO envoy. Keir A. Lieber and Daryl G. Press argue in Foreign Affairs that U.S. missile defenses are designed to secure Washington's nuclear primacy and are chiefly directed at potential rivals such as Russia and China. The authors note that Washington continues to eschew a nuclear first strike and contend that deploying missile defenses "would be valuable primarily in an offensive context, not a defensive one; as an adjunct to a US First Strike capability, not as a stand-alone shield": if the United States launched a nuclear attack against Russia (or China), the targeted country would be left with only a tiny surviving arsenal, if any at all; at that point, even a relatively modest or inefficient missile defense system might well be enough to protect against any retaliatory strikes. This analysis is corroborated by the Pentagon's 1992 Defense Planning Guidance (DPG), prepared by then Secretary of Defense Richard Cheney and his deputies. The DPG declares that the United States should use its power to "prevent the reemergence of a new rival" either on former Soviet territory or elsewhere. The authors of the Guidance determined that the United States had to "Field a missile defense system as a shield against accidental missile launches or limited missile strikes by 'international outlaws'" and also must "Find ways to integrate the 'new democracies' of the former Soviet bloc into the U.S.-led system". The National Archive notes that Document 10 of the DPG includes wording about "disarming capabilities to destroy", which is followed by several blacked-out words. "This suggests that some of the heavily excised pages in the still-classified DPG drafts may include some discussion of preventive action against threatening nuclear and other WMD programs." Finally, Robert David English, writing in Foreign Affairs, observes that, in addition to the deployment of U.S. missile defenses, the DPG's second recommendation has also been proceeding on course: "Washington has pursued policies that have ignored Russian interests (and sometimes international law as well) in order to encircle Moscow with military alliances and trade blocs conducive to U.S. interests." In United States history, critics have charged that presidents have used democracy to justify military intervention abroad. Critics have also charged that the U.S. helped local militaries overthrow democratically elected governments in Iran, Guatemala, and elsewhere. Studies have been devoted to the historical success rate of the U.S. in exporting democracy abroad. Some studies of American intervention have been pessimistic about the overall effectiveness of U.S. efforts to encourage democracy in foreign nations. Until recently, scholars have generally agreed with the international relations professor Abraham Lowenthal that U.S.
attempts to export democracy have been "negligible, often counterproductive, and only occasionally positive." Other studies find that U.S. intervention has had mixed results, and another, by Hermann and Kegley, has found that military interventions have improved democracy in other countries. Professor Paul W. Drake argued that the U.S. first attempted to export democracy in Latin America through intervention from 1912 to 1932. Drake argued that this was contradictory because international law defines intervention as "dictatorial interference in the affairs of another state for the purpose of altering the condition of things." The study suggested that efforts to promote democracy failed because democracy needs to develop out of internal conditions and cannot be forcibly imposed. There was disagreement about what constituted democracy: Drake suggested that American leaders sometimes defined democracy in the narrow sense of a nation holding elections, and he argued that a broader understanding was needed. Further, there was disagreement about what constituted a "rebellion"; Drake saw a pattern in which the U.S. State Department disapproved of any type of rebellion, even so-called "revolutions", and in some instances rebellions against dictatorships. The historian Walter LaFeber stated, "The world's leading revolutionary nation (the U.S.) in the eighteenth century became the leading protector of the status quo in the twentieth century." Mesquita and Downs evaluated 35 U.S. interventions from 1945 to 2004 and concluded that in only one case, Colombia, did a "full fledged, stable democracy" develop within ten years following the intervention. Samia Amin Pei argued that nation building in developing countries usually unravelled four to six years after American intervention ended. Pei, based on a study of a database on worldwide democracies called Polity, agreed with Mesquita and Downs that U.S. intervention efforts usually do not produce real democracies, and that most cases result in greater authoritarianism after ten years. Professor Joshua Muravchik argued that U.S. occupation was critical for Axis-power democratization after World War II, but that America's failure to encourage democracy in the third world "prove[s]... that U.S. military occupation is not a sufficient condition to make a country democratic." The success of democracy in former Axis countries such as Italy was seen as a result of high national per-capita income, although U.S. protection was seen as a key to stabilization and important for encouraging the transition to democracy. Steven Krasner agreed that there was a link between wealth and democracy: when per-capita incomes of $6,000 were achieved in a democracy, there was little chance of that country ever reverting to an autocracy, according to an analysis of his research in the Los Angeles Times. Tures examined 228 cases of American intervention from 1973 to 2005, using Freedom House data. A plurality of interventions, 96, caused no change in the country's democracy; in 69 instances the country became less democratic after the intervention, and in the remaining 63 cases it became more democratic (a worked tally of these shares follows below). However, this does not take into account the direction the country would have gone with no U.S. intervention. Hermann and Kegley found that American military interventions designed to protect or promote democracy increased freedom in those countries.
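As a quick worked check of the Tures tally above (the arithmetic here is an editorial illustration, not part of the study's reporting; shares are rounded to one decimal place):

$$96 + 69 + 63 = 228, \qquad \frac{96}{228} \approx 42.1\%, \quad \frac{69}{228} \approx 30.3\%, \quad \frac{63}{228} \approx 27.6\%$$

The three counts account for all 228 cases, and the roughly 42% "no change" share is the largest single group without being a majority, which is why it is described as a plurality.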
Peceny argued that the democracies created after military intervention are still closer to an autocracy than to a democracy, quoting Przeworski: "while some democracies are more democratic than others, unless offices are contested, no regime should be considered democratic." Therefore, Peceny concludes, it is difficult to know from the Hermann and Kegley study whether U.S. intervention has produced only less repressive autocratic governments or genuine democracies. Peceny stated that the United States attempted to export democracy in 33 of its 93 twentieth-century military interventions, and he argued that proliberal policies after military intervention had a positive impact on democracy. A global survey done by Pew Global indicated that, as of 2014, at least 33 surveyed countries held a positive view (50% or above) of the United States. The top ten most positive countries were the Philippines (92%), Israel (84%), South Korea (82%), Kenya (80%), El Salvador (80%), Italy (78%), Ghana (77%), Vietnam (76%), Bangladesh (76%), and Tanzania (75%). Ten surveyed countries held the most negative views (below 50%) of the United States: Egypt (10%), Jordan (12%), Pakistan (14%), Turkey (19%), Russia (23%), the Palestinian Territories (30%), Greece (34%), Argentina (36%), Lebanon (41%), and Tunisia (42%). Americans' view of their own country stood at 84% positive. International opinion about the US has often changed with different executive administrations. For example, in 2009 the French public favored the United States when President Barack Obama (75% favorable) replaced President George W. Bush (42%), and after President Donald Trump took the helm in 2017, French public opinion about the US fell from 63% to 46%. These trends were also seen in other European countries. United States foreign policy also includes covert actions to topple foreign governments that have been opposed to the United States. According to J. Dana Stuster, writing in Foreign Policy, there are seven "confirmed cases" where the U.S. -- acting principally through the Central Intelligence Agency (CIA), but sometimes with the support of other parts of the U.S. government, including the Navy and State Department -- covertly assisted in the overthrow of a foreign government: Iran in 1953, Guatemala in 1954, Congo in 1960, the Dominican Republic in 1961, South Vietnam in 1963, Brazil in 1964, and Chile in 1973. Stuster states that this list excludes "U.S.-supported insurgencies and failed assassination attempts" such as those directed against Cuba's Fidel Castro, as well as instances where U.S. involvement has been alleged but not proven (such as Syria in 1949). In 1953 the CIA, working with the British government, initiated Operation Ajax against the Prime Minister of Iran, Mohammad Mossadegh, who had attempted to nationalize Iran's oil, threatening the interests of the Anglo-Persian Oil Company. This had the effect of restoring and strengthening the authoritarian monarchical reign of Shah Mohammad Reza Pahlavi. In 1957, the CIA and the Israeli Mossad aided the Iranian government in establishing its intelligence service, SAVAK, later blamed for the torture and execution of the regime's opponents. A year after the Iran coup, in Operation PBSUCCESS, the CIA assisted the local military in toppling the democratically elected left-wing government of Jacobo Árbenz in Guatemala and installing the military dictator Carlos Castillo Armas.
The United Fruit Company lobbied for Árbenz 's overthrow, as his land reforms jeopardized its land holdings in Guatemala, and painted these reforms as a communist threat. The coup triggered a decades - long civil war which claimed the lives of an estimated 200,000 people (42,275 individual cases have been documented), mostly through 626 massacres against the Maya population perpetrated by the U.S. - backed Guatemalan military. An independent Historical Clarification Commission found that U.S. corporations and government officials "exercised pressure to maintain the country 's archaic and unjust socio - economic structure, '' and that U.S. military assistance had a "significant bearing on human rights violations during the armed confrontation. '' During the massacre of at least 500,000 alleged communists in 1960s Indonesia, U.S. government officials encouraged and applauded the mass killings while providing covert assistance to the Indonesian military which helped facilitate them. This included the U.S. Embassy in Jakarta supplying Indonesian forces with lists of up to 5,000 names of suspected members of the Communist Party of Indonesia (PKI), who were subsequently killed in the massacres. In 2001, the CIA attempted to prevent the publication of the State Department volume Foreign Relations of the United States, 1964 -- 1968, which documents the U.S. role in providing covert assistance to the Indonesian military for the express purpose of the extirpation of the PKI. In July 2016, an international panel of judges ruled that the killings constituted crimes against humanity, and that the US, along with other Western governments, was complicit in these crimes. In 1970, the CIA worked with coup - plotters in Chile in the attempted kidnapping of General René Schneider, who was targeted for refusing to participate in a military coup upon the election of Salvador Allende. Schneider was shot in the botched attempt and died three days later. The CIA later paid the group $35,000 for the failed kidnapping. According to one peer - reviewed study, the U.S. intervened in 81 foreign elections between 1946 and 2000, while the Soviet Union or Russia intervened in 36. Since the 1970s, issues of human rights have become increasingly important in American foreign policy. Congress took the lead in the 1970s. Following the Vietnam War, the feeling that U.S. foreign policy had grown apart from traditional American values was seized upon by Representative Donald M. Fraser (D, MN), leading the Subcommittee on International Organizations and Movements, in criticizing Republican foreign policy under the Nixon administration. In the early 1970s, Congress acted to end U.S. involvement in the Vietnam War and passed the War Powers Act. As "part of a growing assertiveness by Congress about many aspects of Foreign Policy, '' human rights concerns became a battleground between the Legislative and the Executive branches in the formulation of foreign policy. David Forsythe points to three specific, early examples of Congress interjecting its own thoughts on foreign policy. These measures were repeatedly used by Congress, with varying success, to affect U.S. foreign policy towards the inclusion of human rights concerns. Specific examples include El Salvador, Nicaragua, Guatemala and South Africa. The Executive (from Nixon to Reagan) argued that the Cold War required prioritizing regional security and U.S. interests over any behavioral concerns about national allies. Congress argued the opposite, in favor of distancing the United States from oppressive regimes.
Nevertheless, according to historian Daniel Goldhagen, during the last two decades of the Cold War, American client states practicing mass murder outnumbered those of the Soviet Union. John Henry Coatsworth, a historian of Latin America and the provost of Columbia University, suggests the number of repression victims in Latin America alone far surpassed that of the USSR and its East European satellites during the period 1960 to 1990. W. John Green contends that the United States was an "essential enabler '' of "Latin America 's political murder habit, bringing out and allowing to flourish some of the region 's worst tendencies. '' On December 6, 2011, Obama instructed agencies to consider LGBT rights when issuing financial aid to foreign countries. He also criticized Russia 's law discriminating against gays, joining other Western leaders in the boycott of the 2014 Winter Olympics in Russia. In June 2014, a Chilean court ruled that the United States played a key role in the murders of Charles Horman and Frank Teruggi, both American citizens, shortly after the 1973 Chilean coup d'état. United States foreign policy is influenced by the efforts of the U.S. government to control imports of illicit drugs, including cocaine, heroin, methamphetamine, and cannabis. This is especially true in Latin America, a focus for the U.S. War on Drugs. Those efforts date back to at least 1880, when the U.S. and China completed an agreement that prohibited the shipment of opium between the two countries. Over a century later, the Foreign Relations Authorization Act requires the President to identify the major drug transit or major illicit drug - producing countries. In September 2005, the following countries were identified: Bahamas, Bolivia, Brazil, Burma, Colombia, Dominican Republic, Ecuador, Guatemala, Haiti, India, Jamaica, Laos, Mexico, Nigeria, Pakistan, Panama, Paraguay, Peru and Venezuela. Two of these, Burma and Venezuela, are countries that the U.S. considers to have failed to adhere to their obligations under international counternarcotics agreements during the previous 12 months. Notably absent from the 2005 list were Afghanistan, the People 's Republic of China and Vietnam; Canada was also omitted in spite of evidence that criminal groups there are increasingly involved in the production of MDMA destined for the United States and that large - scale cross-border trafficking of Canadian - grown cannabis continues. The U.S. believes that the Netherlands are successfully countering the production and flow of MDMA to the U.S. Critics from the left cite episodes that undercut leftist governments or showed support for Israel. Others cite human rights abuses and violations of international law. Critics have charged that U.S. presidents have used democracy to justify military intervention abroad. Critics also point to declassified records which indicate that the CIA under Allen Dulles and the FBI under J. Edgar Hoover aggressively recruited more than 1,000 Nazis, including those responsible for war crimes, to use as spies and informants against the Soviet Union in the Cold War. The U.S. has faced criticism for backing right - wing dictators who systematically violated human rights, such as Augusto Pinochet of Chile, Alfredo Stroessner of Paraguay, Efraín Ríos Montt of Guatemala, Jorge Rafael Videla of Argentina, Hissène Habré of Chad, Yahya Khan of Pakistan and Suharto of Indonesia.
Critics have also accused the United States of facilitating and supporting state terrorism in the Global South during the Cold War, such as Operation Condor, an international campaign of political assassination and state terror organized by right - wing military dictatorships in the Southern Cone of South America. Journalists and human rights organizations have been critical of US - led airstrikes and targeted killings by drones which have in some cases resulted in civilian casualties. In early 2017, the U.S. faced criticism from some scholars, activists and media outlets for dropping 26,171 bombs on seven different countries throughout 2016: Syria, Iraq, Afghanistan, Libya, Yemen, Somalia and Pakistan. The U.S. has been accused of complicity in war crimes for backing the Saudi Arabian - led intervention into the Yemeni Civil War (2015 -- present), which has triggered a humanitarian catastrophe, including a cholera outbreak and millions facing starvation. A 2013 global poll in 68 countries with 66,000 respondents by Win / Gallup found that the U.S. is perceived as the biggest threat to world peace. Regarding support for certain anti-Communist dictatorships during the Cold War, one response is that they were seen as a necessary evil, with the alternatives being even worse Communist or fundamentalist dictatorships. David Schmitz says this policy did not serve U.S. interests. Friendly tyrants resisted necessary reforms and destroyed the political center (though not in South Korea), while the ' realist ' policy of coddling dictators brought a backlash among foreign populations with long memories. Many democracies have voluntary military ties with the United States. See NATO, ANZUS, the Treaty of Mutual Cooperation and Security between the United States and Japan, the Mutual Defense Treaty with South Korea, and Major non-NATO ally. Those nations with military alliances with the U.S. can spend less on the military since they can count on U.S. protection. This may give a false impression that the U.S. is less peaceful than those nations. Research on the democratic peace theory has generally found that democracies, including the United States, have not made war on one another. There has been U.S. support for coups against some democracies, but for example Spencer R. Weart argues that part of the explanation was the perception, correct or not, that these states were turning into Communist dictatorships. Also important was the role of rarely transparent United States government agencies, which sometimes misled or did not fully implement the decisions of elected civilian leaders. Empirical studies (see democide) have found that democracies, including the United States, have killed far fewer civilians than dictatorships. Media may be biased against the U.S. in reporting human rights violations.
Studies have found that The New York Times coverage of worldwide human rights violations predominantly focuses on the human rights violations in nations where there is clear U.S. involvement, while having relatively little coverage of the human rights violations in other nations. For example, the bloodiest war in recent times, involving eight nations and killing millions of civilians, was the Second Congo War, which was almost completely ignored by the media. Niall Ferguson argues that the U.S. is incorrectly blamed for all the human rights violations in nations it has supported. He writes that it is generally agreed that Guatemala was the worst of the US - backed regimes during the Cold War. However, the U.S. can not credibly be blamed for all of the 200,000 deaths during the long Guatemalan Civil War. The U.S. Intelligence Oversight Board writes that military aid was cut for long periods because of such violations, that the U.S. helped stop a coup in 1993, and that efforts were made to improve the conduct of the security services. Today the U.S. states that democratic nations best support U.S. national interests. According to the U.S. State Department, "Democracy is the one national interest that helps to secure all the others. Democratically governed nations are more likely to secure the peace, deter aggression, expand open markets, promote economic development, protect American citizens, combat international terrorism and crime, uphold human and worker rights, avoid humanitarian crises and refugee flows, improve the global environment, and protect human health. '' According to former U.S. President Bill Clinton, "Ultimately, the best strategy to ensure our security and to build a durable peace is to support the advance of democracy elsewhere. Democracies don't attack each other. '' In one view mentioned by the U.S. State Department, democracy is also good for business. Countries that embrace political reforms are also more likely to pursue economic reforms that improve the productivity of businesses. Accordingly, since the mid-1980s, under President Ronald Reagan, there has been an increase in levels of foreign direct investment going to emerging market democracies relative to countries that have not undertaken political reforms. Leaked cables in 2010 suggested that the "dark shadow of terrorism still dominates the United States ' relations with the world ''. The United States officially maintains that it supports democracy and human rights through several tools. Examples of these tools are as follows:
what do metalloids metals and nonmetals have in common
Properties of metals, metalloids and nonmetals - wikipedia The chemical elements can be broadly divided into metals, metalloids and nonmetals according to their shared physical and chemical properties. All metals have a shiny appearance (at least when freshly polished); are good conductors of heat and electricity; form alloys with other metals; and have at least one basic oxide. Metalloids are metallic - looking brittle solids that are either semiconductors or exist in semiconducting forms, and have amphoteric or weakly acidic oxides. Typical nonmetals have a dull, coloured or colourless appearance; are brittle when solid; are poor conductors of heat and electricity; and have acidic oxides. Most or some elements in each category share a range of other properties; a few elements have properties that are either anomalous given their category, or otherwise extraordinary. Metals appear lustrous (beneath any patina); form mixtures (alloys) when combined with other metals; tend to lose or share electrons when they react with other substances; and each forms at least one predominantly basic oxide. Most metals are silvery looking, high density, relatively soft and easily deformed solids with good electrical and thermal conductivity, closely packed structures, low ionisation energies and electronegativities, and are found naturally in combined states. Some metals appear coloured (Cu, Cs, Au), have low densities (e.g. Be, Al) or very high melting points, are liquids at or near room temperature, are brittle (e.g. Os, Bi) or not easily machined (e.g. Ti, Re), or are noble (hard to oxidise) or have nonmetallic structures (Mn and Ga are structurally analogous to, respectively, white P and I). Metals comprise the large majority of the elements, and can be subdivided into several different categories. From left to right in the periodic table, these categories include the highly reactive alkali metals; the less reactive alkaline earth metals, lanthanides and radioactive actinides; the archetypal transition metals; and the physically and chemically weak post-transition metals. Specialized subcategories such as the refractory metals and the noble metals also exist. Metalloids are metallic looking brittle solids; tend to share electrons when they react with other substances; have weakly acidic or amphoteric oxides; and are usually found naturally in combined states. Most are semiconductors and moderate thermal conductors, and have structures that are more open than those of most metals. Some metalloids (As, Sb) conduct electricity like metals. The metalloids, as the smallest major category of elements, are not subdivided further. Nonmetals have open structures (unless solidified from gaseous or liquid forms); tend to gain or share electrons when they react with other substances; and do not form distinctly basic oxides. Most are gases at room temperature; have relatively low densities; are poor electrical and thermal conductors; have relatively high ionisation energies and electronegativities; form acidic oxides; and are found naturally in uncombined states in large amounts. Some nonmetals (C, black P, S and Se) are brittle solids at room temperature (although each of these also has malleable, pliable or ductile allotropes).
From left to right in the periodic table, the nonmetals can be subdivided into the polyatomic nonmetals which, being nearest to the metalloids, show some incipient metallic character; the diatomic nonmetals, which are essentially nonmetallic; and the monatomic noble gases, which are almost completely inert. The characteristic properties of metals and nonmetals are quite distinct. Metalloids, straddling the metal - nonmetal border, are mostly distinct from either, but in a few properties resemble one or the other. Authors differ in where they divide metals from nonmetals and in whether they recognize an intermediate metalloid category. Some authors count metalloids as nonmetals with weakly nonmetallic properties. Others count some of the metalloids as post-transition metals. Within each category, elements can be found with one or two properties very different from the expected norm, or that are otherwise notable. Among the metals these include sodium, potassium, rubidium, caesium, barium, platinum, gold, manganese, iron, cobalt, nickel, gadolinium, terbium, dysprosium, holmium, erbium, thulium, iridium, mercury, lead, bismuth, uranium and plutonium; among the metalloids, boron, silicon, arsenic and antimony; and among the nonmetals, hydrogen, helium, carbon, phosphorus and iodine.
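The three - way contrast described in this section can be summarized compactly as a lookup structure; a minimal sketch, with the property values taken from the prose above (the structure itself is illustrative, not from the source):

# Typical properties of each category, as described in this section.
CATEGORIES = {
    "metals": {
        "appearance": "lustrous",
        "conduction": "good conductors of heat and electricity",
        "oxides": "at least one basic oxide",
        "reaction": "tend to lose or share electrons",
    },
    "metalloids": {
        "appearance": "metallic-looking brittle solids",
        "conduction": "semiconductors; moderate thermal conductors",
        "oxides": "amphoteric or weakly acidic",
        "reaction": "tend to share electrons",
    },
    "nonmetals": {
        "appearance": "dull, coloured or colourless",
        "conduction": "poor conductors of heat and electricity",
        "oxides": "acidic",
        "reaction": "tend to gain or share electrons",
    },
}

def compare(prop):
    # Print one property side by side across the three categories.
    for category, props in CATEGORIES.items():
        print(f"{category:>10}: {props[prop]}")

compare("oxides")

Oxide behaviour is a useful key to the classification: it runs from basic through amphoteric to acidic as one moves from metals to nonmetals.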
when does it usually get warm in wisconsin
Climate of Minnesota - wikipedia Minnesota has a continental climate, with hot summers and cold winters. Minnesota 's location in the Upper Midwest allows it to experience one of the widest varieties of weather in the United States, with each of the four seasons having its own distinct characteristics. The areas near Lake Superior in the Minnesota Arrowhead region experience weather unlike that of the rest of the state. The moderating effect of Lake Superior keeps the surrounding area relatively cooler in the summer and relatively warmer in the winter, giving that region a smaller yearly temperature range. On the Köppen climate classification, much of the southern third of Minnesota -- roughly from the Twin Cities region southward -- falls in the hot summer humid continental climate zone (Dfa), and the northern two - thirds of Minnesota falls in the warm summer humid continental climate zone (Dfb). Winter in Minnesota is characterized by cold (below freezing) temperatures. Snow is the main form of winter precipitation, but freezing rain, sleet, and occasionally rain are all possible during the winter months. Common storm systems include Alberta clippers or Panhandle hooks, some of which develop into blizzards. Annual snowfall extremes have ranged from over 170 inches (432 cm) in the rugged Superior Highlands of the North Shore to as little as 5 inches (13 cm) in southern Minnesota. Temperatures as low as − 60 ° F (− 51 ° C) have occurred during Minnesota winters. Spring is a time of major transition in Minnesota. Snowstorms are common early in the spring, but by late spring, as temperatures begin to moderate, the state can experience tornado outbreaks, a risk which diminishes but does not cease through the summer and into the autumn. In summer, heat and humidity predominate in the south, while warm and less humid conditions are generally present in the north. These humid conditions initiate thunderstorm activity 30 -- 40 days per year. Summer high temperatures in Minnesota average in the mid-80s F (30 ° C) in the south to the upper - 70s F (25 ° C) in the north, with temperatures as hot as 114 ° F (46 ° C) possible. The growing season in Minnesota varies from 90 days per year in the Iron Range to 160 days in southeast Minnesota. Tornadoes are possible in Minnesota from March through November, but the peak tornado month is June, followed by July, May, and August. The state averages 27 tornadoes per year. Minnesota is the driest state in the Midwest. Average annual precipitation across the state ranges from around 35 inches (890 mm) in the southeast to 20 inches (510 mm) in the northwest. Autumn weather in Minnesota is largely the reverse of spring weather. The jet stream -- which tends to weaken in summer -- begins to re-strengthen, leading to a quicker changing of weather patterns and an increased variability of temperatures. By late October and November these storm systems become strong enough to form major winter storms. Autumn and spring are the windiest times of the year in Minnesota. Because of its location in North America, Minnesota experiences temperature extremes characteristic of a continental climate, with cold winters and mild to hot summers in the south and frigid winters and generally cool summers in the north. Each season has distinctive upper air patterns which bring different weather conditions with them. The state is nearly 500 miles (805 km) from any large body of water (with the exception of Lake Superior), and temperatures and precipitation vary widely.
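The Fahrenheit figures above convert to the Celsius values shown; note that the statewide range quoted below (174 ° F, or about 97 ° C) is a temperature difference, so it converts without the 32 - degree offset. A minimal sketch checking the arithmetic (illustrative only, not from the source):

def f_to_c(f):
    # Absolute temperature: subtract the 32-degree offset, then scale.
    return (f - 32) * 5 / 9

def f_span_to_c(delta_f):
    # Temperature difference: scale only; no offset is involved.
    return delta_f * 5 / 9

print(round(f_to_c(-60)))               # -51, the record low cited above
print(round(f_to_c(114)))               # 46, the record high cited above
print(round(f_span_to_c(114 - (-60))))  # 97, for the 174-degree F spread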
It is far enough north to experience − 60 ° F (− 51 ° C) temperatures and blizzards during the winter months, but far enough south to have 114 ° F (46 ° C) temperatures and tornado outbreaks in the summer. The 174 degree Fahrenheit (97 ° C) variation between Minnesota 's highest and lowest temperature is the 11th largest variation of any U.S. state, and 3rd largest of any non-mountainous state (behind North Dakota and South Dakota). Minnesota is far from major sources of moisture and is in the transition zone between the moist East and the arid Great Plains. Annual average precipitation across the state ranges from around 35 inches (890 mm) in the southeast to 20 inches (510 mm) in the northwest. Snow is the main form of precipitation from November through March, while rain is the most common the rest of the year. Annual snowfall extremes have ranged from over 170 inches (432 cm) in the rugged Superior Highlands of the North Shore to as little as 2.3 inches (5.8 cm) in southern Minnesota. It has snowed in Minnesota during every month with the exception of July, and the state averages 110 days per year with snow cover of an inch (2.5 cm) or greater. Lake Superior moderates the climate of those parts of Minnesota 's Arrowhead Region near the shore. The lake acts as a heat sink, keeping the state 's North Shore area relatively cooler in the summer and warmer in the winter. While this effect is marked near the lake, it does not reach very far inland. For example, Grand Marais on the lakeshore has an average July high temperature of 70 ° F (21 ° C), while Virginia, at about the same latitude but inland about 100 miles (161 km) to the west, has an average July high of 77 ° F (25 ° C). Virginia 's average high temperature in January is 15 ° F (− 9 ° C), while Grand Marais ' is 23 ° F (− 5 ° C). Just a few miles inland from Lake Superior are the Sawtooth Mountains, which almost completely confine the marine air masses and associated precipitation to lower elevations near the lake. The prevailing northwest winter winds also limit the lake 's influence. Places near the shoreline can receive lake - effect snow, but because the state lies north and west of the lake, snowfall amounts are not nearly as large as they are in locations like Wisconsin and Michigan that lie downwind to the south. Even so, the single largest snowstorm in Minnesota history was a lake effect event. On January 6, 1994, Finland, Minnesota, received 36 inches (91 cm) of lake effect snow in 24 hours, and 47 inches (119 cm) over a three - day period. Both are Minnesota records. At 85 inches (216 cm) per year, the port city of Duluth has the highest average snowfall total of any city in Minnesota. At 58.9 ° F (14.9 ° C), Grand Marais has the lowest average summer temperature of any city in the state. The climatological effects of Lake Superior tend to stifle convection, thus limiting the potential for tornadoes. Although Cook and Lake counties are two of the largest counties in the state, they have experienced only seven tornadoes in the past 56 years. One of those tornadoes was a large F3 that occurred in the 1969 Minnesota tornado outbreak. Even though winter does not officially start until late December, Minnesota usually begins experiencing winter - like conditions in November, sometimes as early as October. As with many other Midwestern states, winter in Minnesota is characterized by cold (below freezing) temperatures and snowfall. 
Weather systems can move in from the north, west, or south, with the majority of the weather being driven in from the north. A vigorous jet stream brings high and low - pressure systems through in quick succession, which can cause large temperature variations over a short period of time. As the last remnants of summertime air in the southern U.S. start to lose their grip, cold polar air building up in northern Canada starts to push farther south, eventually spreading into Minnesota. By the time December and January arrive, Minnesota is fully engulfed in the polar air and is then subjected to outbreaks of arctic air masses. Because there are no natural barriers north or northwest of Minnesota to block arctic air from pouring south, Minnesota gets regular shots of the arctic air through the winter. High pressure systems which descend south from the Canadian plains behind the fronts bring light winds, clear skies, and bitterly cold temperatures. The northern part of Minnesota gets the brunt of the cold air. International Falls, sometimes called the "Icebox of the nation '', has the coldest average annual temperature of any National Weather Service first - order station in the contiguous United States at 37.4 ° F (3.0 ° C). Tower, Minnesota, sinks below 0 ° F (− 18 ° C) on an average of seventy - one days per year, while the ten coldest counties in the contiguous US, based on January minimums, are all located in Minnesota. The air mass then slowly moderates as it moves south into the rest of the state. Alberta clippers alternate with these high - pressure systems, bringing high winds and some snowfall with them. Minnesota occasionally gets breaks from the polar and arctic air when a zonal flow takes hold. This means that the jet stream will move in a west to east motion -- rather than north to south -- and warmer air from the western United States is pushed into the region. In Minnesota this pattern commonly leads to a prolonged period of above freezing high temperatures that gives Minnesotans a break from the winter freeze. Storms that move into Minnesota from a more westerly direction generally do not bring significant amounts of precipitation with them. Winter precipitation comes in a few different forms. Snow is the main form of precipitation, but freezing rain, ice, sleet and sometimes even rain are all possible during the winter months. Larger storm systems, often Panhandle hooks or other storms that occur with a meridional flow, can bring large amounts of snow and even blizzard conditions. Alberta clippers are fast - moving areas of low pressure that move through Minnesota during the winter months. Clippers get their name from Alberta, Canada, the province from which they begin their southward track. (Other variations of the same type of storm systems are "Saskatchewan Screamers '' or "Manitoba Maulers ''.) Although clippers often originate over the northern Pacific Ocean, they lose most of their moisture through orographic lift when they collide with the Canadian Rockies. Because of the limited moisture content and quick movement of the systems, clippers rarely produce more than 6 in (15 cm) of snow as they pass through Minnesota. The biggest effects of an Alberta Clipper are what follows them, and that is arctic air, high wind speed, and dangerous wind chills. This often results in severe blowing and drifting snow, and sometimes even blizzard conditions. Alberta Clippers often proceed to become copious lake - effect snow producers on the southern and eastern shores of the Great Lakes. 
In terms of their characteristics, Panhandle hooks are nearly the opposite of Alberta clippers. Instead of forming in the north and dropping south, these low pressure systems form in the southwestern United States and then move northeast. They get their name from the location where they usually make their turn to the north: near the panhandles of Oklahoma and Texas. Unlike clippers, these storms usually have a great deal of moisture to work with. As the storms make their turn to the north, they draw in moist air from the nearby Gulf of Mexico and pull it northward toward Minnesota and other parts of the Midwest. As these systems move to the northeast, there will usually be a heavy band of snow to the northwest of the low pressure center if there is enough cold air present. A wintry mix of precipitation, rain, or sometimes even thunderstorms will then often occur to the south of it. Snowfall of over a foot (30 cm) is not uncommon with a panhandle hook, and because of the high moisture content in these systems the snow is usually wet and heavy. Large panhandle hooks can become powerful enough to draw in arctic air after they pass by the state, leaving bitter cold temperatures and wind chills in their wake. Panhandle hooks are responsible for some of the most famous blizzards that have occurred in the Midwest, including the blizzard of November 1940 and the Great Storm of 1975.
On average, autumn and spring are the windiest times of the year in Minnesota. October is the windiest month in northwest Minnesota, while April is the windiest over the rest of the state. Winds generally average between 9 and 11 mph (14 and 18 km / h) across the state, with one major exception. The heaviest winds in the state are found on the Buffalo Ridge, or Coteau des Prairies, a flatiron - shaped area extending from Watertown, South Dakota, diagonally across southwestern Minnesota and into Iowa. Created by two lobes of a glacier parting around a pre-existing plateau during the Pleistocene Ice Age, the Buffalo Ridge is ideal for wind power generation, with average wind speeds of 16.1 mph (25.9 km / h). Minnesota is prone to flooding in its major rivers by spring snowmelt runoff and ice jams. Spring flooding to some degree occurs almost annually on some Minnesota rivers, but major floods have occurred in 1965, 1969, 1997, 2001, and 2009. The flooding in 1965 was the worst flood in Minnesota history on the Mississippi River, while the flooding in 1997 was the worst in history on the Red River. The Red River flood of 1997 was aided heavily by the 11 blizzards that struck Minnesota that winter. Besides heavy winter and spring snowfall, cold winter temperatures and heavy autumn and spring rains causing sudden run - off surges are also common causes of spring river flooding in Minnesota. Minnesota is also prone to both river flooding and localized flash flooding by extended periods of heavy late - spring and summer rainfall. The Great Flood of 1993 on the Mississippi River was caused by copious amounts of rain that fell after the spring snow melt. The 2007 Midwest flooding, which affected the hilly Driftless area of southeast Minnesota, was the result of a training pattern of storms mixing warm moist air from Tropical Storm Erin with cooler Canadian air, resulting in record 24 - hour rainfall totals of up to 17 inches (432 mm). A similar flooding event occurred in 2010 as a result of the remnants of Tropical Storm Georgette in the eastern Pacific and Hurricane Karl in the Gulf of Mexico. During a Minnesota summer, heat and humidity predominate in the south, while warm and less humid conditions are generally present in the north. A main feature of summer weather in Minnesota and the Midwestern United States as a whole is the weakening of the jet stream, leading to slower movement of air masses, a general increase in the stability of temperatures, and less wind. The strong wind that does blow almost always comes from the south, bringing in warm temperatures and humidity. These humid conditions and a jet stream that has pushed into the northern parts of the U.S. initiate thunderstorm activity 30 -- 40 days per year. Daily average summer temperatures in Minnesota range from the low 70s (22 ° C) in the south to the mid 60s ° F (19 ° C) in the north. Because summertime air masses are not as volatile as in the winter, daily high and low temperatures rarely vary more than 15 degrees (7 ° C) either side of normal. While summertime around much of the country means long stretches of hot and humid weather, Minnesota is located far enough north that periods of cooler, drier polar air frequently move in behind polar fronts dropping south from Canada. The polar air typically does not linger very long, however, and is quickly replaced by the warmer and more humid air from the Gulf of Mexico again.
The cool, dry polar air colliding with hot and humid summertime air keeps the threat of thunderstorms and tornadoes in Minnesota through July and August. Northern Minnesota is considerably cooler and less humid than southern Minnesota during the summer months. For example, Duluth 's annual average temperature and dew point are 6 degrees (3.4 ° C) cooler than Minneapolis '. July is the hottest month in Minnesota statewide and is usually the month when the peak heat waves occur. In July 1936, Minnesota and the rest of the Midwest suffered through their most severe heat wave on record. Most of the state was engulfed in 100 ° F (38 ° C) temperatures for several days in a row, and Minnesota 's all - time record high temperature of 114 ° F (46 ° C) was equaled during this stretch. This heat wave was also responsible for the Twin Cities ' all - time record high of 108 ° F (42 ° C), as well as the all - time record high of several other cities across the state. The western region of Minnesota experiences the hottest summer temperatures. Coteau des Prairies can heat cities to its north, much as places in the Rocky Mountains are warmed by Chinook winds. As southwest winds blow down the slope of Coteau des Prairies, the air compresses and warms. This warms the already hot air even further and often brings locations such as Beardsley and Moorhead the warmest temperatures in the state, despite their higher latitudes. The summer months of June, July, August, and September account for nearly half of the annual precipitation total across the state of Minnesota. Most of this rain falls from thunderstorms, a frequent summer occurrence. Even though summer is the primary season for Minnesota to experience thunderstorms, they can occur from March to November. These storms can become severe, producing large hail, strong tornadoes, and large bow echoes that result in damaging straight - line winds. Minnesota has experienced several major derecho events, most recently the Boundary Waters - Canadian Derecho which blew down millions of trees in the Boundary Waters Canoe Area Wilderness on July 4, 1999. Summertime thunderstorms are fueled by dew points that often reach into the 70s ° F (21 ° C) and sometimes even 80 ° F (27 ° C). In addition to severe conditions, thunderstorms produce heavy rain and cloud - to - ground lightning. Heavy rain brings flash floods to Minnesota an average of three days per year. With the exception of hail, summer precipitation in Minnesota is almost always in the form of rain. The lone exception is in far northern Minnesota, where, in mid-September, small amounts of snow become a possibility. Droughts are an annual summer concern in Minnesota, especially for farmers. The growing season (which varies from 90 days per year in the Iron Range to 160 days in southeast Minnesota) is when Minnesota averages its highest percentage of annual precipitation, so a lack of rainfall during this time period can be devastating to crops. The last major drought in Minnesota was in 1988. During that year, the period of April -- July was the 2nd driest in the previous century, and the period of May -- August was the hottest on record. The combination of dry skies and heat caused a severe drought which cost the state approximately 1.2 billion dollars in crop losses. Other memorable drought years were 1976 and the Dust Bowl years of the 1930s. During the Dust Bowl, inappropriate farming techniques enhanced by years of drought conditions led to dust storms in southern Minnesota and other parts of the Midwest.
Drought conditions also have helped spawn forest fires. In 1894 the Great Hinckley Fire destroyed Hinckley, killing an estimated 459 people, and in 1918 a forest fire killed 453 people in the vicinity of Cloquet. More recently, in 2006, the Cavity Lake Fire burned 31,830 acres (129 km2) in the Boundary Waters Canoe Area Wilderness. Tornadoes are possible in Minnesota from March through November, but the peak tornado month is June, followed by July, May, and August. Tornadoes are most common in the southern half of the state, which is located on the northern edge of Tornado Alley. Just over a third of tornadoes in Minnesota strike between 4: 00 pm and 6: 00 pm. The state averages 27 tornadoes per year, 99 % of which have ratings of F2 or weaker. On average Minnesota has an F5 tornado once every 25 years. Minnesota has seen a number of notable tornadoes and outbreaks. Autumn weather in Minnesota is marked by the rapid decrease of severe thunderstorms, dramatic cooling, and eventually the possibility of blizzards. With summertime heat still prevalent in the southern U.S. and colder air quickly taking hold in Canada, Minnesota can be affected by wide temperature swings in short periods of time. Because of this, the jet stream, which tends to weaken during the summer months, begins to re-strengthen. This leads to quicker changes in weather patterns and increasingly strong storm systems. As autumn moves on, these storm systems bring with them progressively colder air, eventually changing the rain over to snow, generally starting in October in the northern part of the state and November in the south. From September to December the average temperature in the state falls by approximately 43 ° F (23 ° C), the largest such temperature swing within any Minnesota season. By late October and November atmospheric dynamics are generally in place to allow storm systems to become very intense. In fact, Minnesota 's all - time record low pressure was recorded during autumn on October 26, 2010. If these powerful storm systems are able to draw enough cold air southward from Canada, they can evolve into powerful blizzards. Some of Minnesota 's most memorable winter storm events have occurred during the middle part of the autumn season. On November 11, 1940, the southeast half of Minnesota was surprised by the Armistice Day Blizzard. Temperatures in the 60s ° F (16 ° C) on the morning of November 11 dropped into the single digits (below -- 12 ° C) by the morning of November 12, bringing with them 27 inches (69 cm) of snow and 60 mph (100 km / h) winds. Known deaths in this blizzard reached 154, 49 of them in Minnesota. On October 31, 1991, much of Minnesota was hit by the Halloween Blizzard. A band of snowfall of 24 + in (60 + cm) fell from the Twin Cities north to Duluth. It was the single largest snowfall ever recorded in many communities across eastern Minnesota. Minnesota 's climate has done much to shape the image of the state. Minnesota has a late but intense spring, a summer of water sports, an autumn of brilliantly colored leaves, and a long winter with outdoor sports and activities. "Summer at the lake '' is a Minnesota tradition. Water skiing was invented in Minnesota by Ralph Samuelson, and the Minneapolis Aquatennial features a milk carton boat race. Contestants build boats from milk cartons and float them on Minneapolis area lakes, with recognition based more on colorful and imaginative designs than on actual racing performance.
But while Minnesota 's warm summers provide its natives and tourists with a variety of outdoor activities, the state is known for its winters. The state has produced curlers, skiers, and lugers who have competed in the Winter Olympics, pioneers who invented the snowmobile, and legions of ice fishing enthusiasts. The state is also known for enthusiastic ice hockey players, both at the amateur and professional levels. Eveleth, Minnesota, home to the United States Hockey Hall of Fame, boasts of the number of quality players and the contributions of the city (and the rest of the Mesabi Range) to the growth and development of hockey in the United States. To many outsiders, Minnesota 's winters appear to be cold and inhospitable. A World War II newscaster, in describing the brutally cold conditions of the Russian front, stated that at least Minnesotans could understand it. A New York journalist visited St. Paul and declared that the city was "another Siberia, unfit for human habitation. '' In response, the city decided to build a huge ice palace in 1886, similar to one that Montreal had built in 1885. The city hired the architects of the Canadian ice palace to design one for St. Paul and built a palace 106 ft (32.3 m) high with ice blocks cut from a nearby lake. This began the tradition of the Saint Paul Winter Carnival, a ten - day festival which celebrates Minnesota 's winter season. Minnesota 's winters are the setting of several television programs and Hollywood films, including the 1996 film Fargo, which features the backdrop of a Minnesota winter; like most of the characters in the movie, though, the climate is portrayed as bleak and inhospitable.
write the origin of social studies in nigeria
Social Studies - wikipedia In the United States education system, social studies is the integrated study of multiple fields of social science and the humanities, including history, geography, and political science. The term was first coined by American educators around the turn of the twentieth century as a catch - all for these subjects, as well as others which did not fit into the traditional models of lower education in the United States, such as philosophy and psychology. In 1912, the Bureau of Education (not to be confused with its successor agency, the United States Department of Education) was tasked by then Secretary of the Interior Franklin Knight Lane with completely restructuring the American education system for the twentieth century. In response, the Bureau of Education, together with the National Education Association, created the Commission on the Reorganization of Secondary Education. The Commission was made up of 16 committees (a 17th was established two years later, in 1916), each one tasked with the reform of a specific aspect of the American education system. Notable among these was the Committee on Social Studies, which was created to consolidate and standardize various subjects which did not fit within normal school curricula into a new subject, to be called "the social studies. '' In 1916, the work done by the Committee on Social Studies culminated in the publication and release of Bulletin No. 28 (also called "The Committee on Social Studies Report, 1916 ''). The 66 - page bulletin published and distributed by the Bureau of Education is believed to be the first written work dedicated entirely to the subject. It was designed to introduce the concept to American educators and serve as a guide for the creation of nationwide curricula based around social studies. The bulletin proposed many ideas which were considered radical at the time, and it is regarded by many educators as one of the most controversial educational resources of the early twentieth century. In the years after its release, the bulletin received criticism from educators for its vagueness, especially in regard to the definition of Social Studies itself. Critics often point to Section 1 of the report, which vaguely defines Social Studies as "... understood to be those whose subject matter relates directly to the organization and development of human society, and to man as a member of social groups. ''
what was the name of the two atomic bombs
Atomic bombings of Hiroshima and Nagasaki - wikipedia During the final stage of World War II, the United States dropped nuclear weapons on the Japanese cities of Hiroshima and Nagasaki on August 6 and 9, 1945, respectively. The United States had dropped the bombs with the consent of the United Kingdom as outlined in the Quebec Agreement. The two bombings, which killed at least 129,000 people, remain the only use of nuclear weapons for warfare in history. In the final year of the war, the Allies prepared for what was anticipated to be a very costly invasion of the Japanese mainland. This was preceded by a U.S. conventional and firebombing campaign that destroyed 67 Japanese cities. The war in Europe had concluded when Nazi Germany signed its instrument of surrender on May 8, 1945. The Japanese, facing the same fate, refused to accept the Allies ' demands for unconditional surrender and the Pacific War continued. The Allies called for the unconditional surrender of the Imperial Japanese armed forces in the Potsdam Declaration on July 26, 1945 -- the alternative being "prompt and utter destruction ''. The Japanese response to this ultimatum was to ignore it. By August 1945, the Allies ' Manhattan Project had produced two types of atomic bomb, and the 509th Composite Group of the United States Army Air Forces (USAAF) was equipped with the specialized Silverplate version of the Boeing B - 29 Superfortress that could deliver them from Tinian in the Mariana Islands. Orders for atomic bombs to be used on four Japanese cities were issued on July 25. On August 6, the U.S. dropped a uranium gun - type (Little Boy) bomb on Hiroshima, and American President Harry S. Truman called for Japan 's surrender, warning it to "expect a rain of ruin from the air, the like of which has never been seen on this earth. '' Three days later, on August 9, a plutonium implosion - type (Fat Man) bomb was dropped on Nagasaki. Within the first two to four months following the bombings, the acute effects of the atomic bombings had killed 90,000 -- 146,000 people in Hiroshima and 39,000 -- 80,000 in Nagasaki; roughly half of the deaths in each city occurred on the first day. During the following months, large numbers died from the effect of burns, radiation sickness, and other injuries, compounded by illness and malnutrition. In both cities, most of the dead were civilians, although Hiroshima had a sizable military garrison. Japan announced its surrender to the Allies on August 15, six days after the bombing of Nagasaki and the Soviet Union 's declaration of war. On September 2, the Japanese government signed the instrument of surrender, effectively ending World War II. The justification for the bombings of Hiroshima and Nagasaki is still debated to this day. In 1945, the Pacific War between the Empire of Japan and the Allies entered its fourth year. The Japanese fought fiercely, ensuring that U.S. victory would come at an enormous cost. Of the 1.25 million battle casualties incurred by the United States in World War II, including both military personnel killed in action and wounded in action, nearly one million occurred in the twelve - month period from June 1944 to June 1945. December 1944 saw American battle casualties hit an all - time monthly high of 88,000 as a result of the German Ardennes Offensive. In the Pacific, the Allies returned to the Philippines, recaptured Burma, and invaded Borneo.
Offensives were undertaken to reduce the Japanese forces remaining in Bougainville, New Guinea and the Philippines. In April 1945, American forces landed on Okinawa, where heavy fighting continued until June. Along the way, the ratio of Japanese to American casualties dropped from 5: 1 in the Philippines to 2: 1 on Okinawa. Although some Japanese were taken prisoner, most fought until they were killed or committed suicide. Nearly 99 % of the 21,000 defenders of Iwo Jima were killed. Of the 117,000 Japanese troops defending Okinawa in April -- June 1945, 94 % were killed. American military leaders used these figures to estimate high casualties among American soldiers in the planned invasion of Japan. As the Allies advanced towards Japan, conditions became steadily worse for the Japanese people. Japan 's merchant fleet declined from 5,250,000 gross tons in 1941 to 1,560,000 tons in March 1945, and 557,000 tons in August 1945. Lack of raw materials forced the Japanese war economy into a steep decline after the middle of 1944. The civilian economy, which had slowly deteriorated throughout the war, reached disastrous levels by the middle of 1945. The loss of shipping also affected the fishing fleet, and the 1945 catch was only 22 % of that in 1941. The 1945 rice harvest was the worst since 1909, and hunger and malnutrition became widespread. U.S. industrial production was overwhelmingly superior to Japan 's. By 1943, the U.S. produced almost 100,000 aircraft a year, compared to Japan 's production of 70,000 for the entire war. By the middle of 1944, the U.S. had almost a hundred aircraft carriers in the Pacific, far more than Japan 's twenty - five for the entire war. In February 1945, Prince Fumimaro Konoe advised the Emperor Hirohito that defeat was inevitable, and urged him to abdicate. Even before the surrender of Nazi Germany on May 8, 1945, plans were under way for the largest operation of the Pacific War, Operation Downfall, the invasion of Japan. The operation had two parts: Operation Olympic and Operation Coronet. Set to begin in October 1945, Olympic involved a series of landings by the U.S. Sixth Army intended to capture the southern third of the southernmost main Japanese island, Kyūshū. Operation Olympic was to be followed in March 1946 by Operation Coronet, the capture of the Kantō Plain, near Tokyo on the main Japanese island of Honshū by the U.S. First, Eighth and Tenth Armies, as well as a Commonwealth Corps made up of Australian, British and Canadian divisions. The target date was chosen to allow for Olympic to complete its objectives, for troops to be redeployed from Europe, and the Japanese winter to pass. Japan 's geography made this invasion plan obvious to the Japanese; they were able to predict the Allied invasion plans accurately and thus adjust their defensive plan, Operation Ketsugō, accordingly. The Japanese planned an all - out defense of Kyūshū, with little left in reserve for any subsequent defense operations. Four veteran divisions were withdrawn from the Kwantung Army in Manchuria in March 1945 to strengthen the forces in Japan, and 45 new divisions were activated between February and May 1945. Most were immobile formations for coastal defense, but 16 were high quality mobile divisions. In all, there were 2.3 million Japanese Army troops prepared to defend the home islands, backed by a civilian militia of 28 million men and women. Casualty predictions varied widely, but were extremely high. 
The Vice Chief of the Imperial Japanese Navy General Staff, Vice Admiral Takijirō Ōnishi, predicted up to 20 million Japanese deaths. A study from June 15, 1945, by the Joint War Plans Committee, which provided planning information to the Joint Chiefs of Staff, estimated that Olympic would result in between 130,000 and 220,000 U.S. casualties, of which U.S. dead would be in the range from 25,000 to 46,000. Drawing on insight gained from the Battle of Okinawa, the study noted Japan 's inadequate defenses due to the very effective sea blockade and the American firebombing campaign. The Chief of Staff of the United States Army, General of the Army George Marshall, and the Army Commander in Chief in the Pacific, General of the Army Douglas MacArthur, signed documents agreeing with the Joint War Plans Committee estimate. The Americans were alarmed by the Japanese buildup, which was accurately tracked through Ultra intelligence. Secretary of War Henry L. Stimson was sufficiently concerned about high American estimates of probable casualties to commission his own study by Quincy Wright and William Shockley. Wright and Shockley spoke with Colonels James McCormack and Dean Rusk, and examined casualty forecasts by Michael E. DeBakey and Gilbert Beebe. Wright and Shockley estimated the invading Allies would suffer between 1.7 and 4 million casualties in such a scenario, of whom between 400,000 and 800,000 would be dead, while Japanese fatalities would have been around 5 to 10 million. Marshall began contemplating the use of a weapon which was "readily available and which assuredly can decrease the cost in American lives '': poison gas. Quantities of phosgene, mustard gas, tear gas and cyanogen chloride were moved to Luzon from stockpiles in Australia and New Guinea in preparation for Operation Olympic, and MacArthur ensured that Chemical Warfare Service units were trained in their use. Consideration was also given to using biological weapons against Japan. While the United States had developed plans for an air campaign against Japan prior to the Pacific War, the Japanese capture of Allied bases in the western Pacific in the first weeks of the conflict meant that this offensive did not begin until mid-1944, when the long - range Boeing B - 29 Superfortress became ready for use in combat. Operation Matterhorn involved India - based B - 29s staging through bases around Chengdu in China to make a series of raids on strategic targets in Japan. This effort failed to achieve the strategic objectives that its planners had intended, largely because of logistical problems, the bomber 's mechanical difficulties, the vulnerability of Chinese staging bases, and the extreme range required to reach key Japanese cities. United States Army Air Forces (USAAF) Brigadier General Haywood S. Hansell determined that Guam, Tinian, and Saipan in the Mariana Islands would better serve as B - 29 bases, but they were in Japanese hands. Strategies were shifted to accommodate the air war, and the islands were captured between June and August 1944. Air bases were developed, and B - 29 operations commenced from the Marianas in October 1944. These bases were easily resupplied by cargo ships. The XXI Bomber Command began missions against Japan on November 18, 1944. The early attempts to bomb Japan from the Marianas proved just as ineffective as the China - based B - 29s had been.
Hansell continued the practice of conducting so - called high - altitude precision bombing, aimed at key industries and transportation networks, even though these tactics had not produced acceptable results. These efforts proved unsuccessful due to logistical difficulties with the remote location, technical problems with the new and advanced aircraft, unfavorable weather conditions, and enemy action. Hansell 's successor, Major General Curtis LeMay, assumed command in January 1945 and initially continued to use the same precision bombing tactics, with equally unsatisfactory results. The attacks initially targeted key industrial facilities, but much of the Japanese manufacturing process was carried out in small workshops and private homes. Under pressure from USAAF headquarters in Washington, LeMay changed tactics and decided that low - level incendiary raids against Japanese cities were the only way to destroy their production capabilities, shifting from precision bombing to area bombardment with incendiaries. Like most strategic bombing during World War II, the aim of the USAAF offensive against Japan was to destroy the enemy 's war industries, kill or disable civilian employees of these industries, and undermine civilian morale. Civilians who took part in the war effort through such activities as building fortifications and manufacturing munitions and other war materials in factories and workshops were considered combatants in a legal sense and therefore liable to be attacked. Over the next six months, the XXI Bomber Command under LeMay firebombed 67 Japanese cities. The firebombing of Tokyo, codenamed Operation Meetinghouse, on March 9 -- 10 killed an estimated 100,000 people and destroyed 16 square miles (41 km2) of the city and 267,000 buildings in a single night. It was the deadliest bombing raid of the war, at a cost of 20 B - 29s shot down by flak and fighters. By May, 75 % of bombs dropped were incendiaries designed to burn down Japan 's "paper cities ''. By mid-June, Japan 's six largest cities had been devastated. The end of the fighting on Okinawa that month provided airfields even closer to the Japanese mainland, allowing the bombing campaign to be further escalated. Aircraft flying from Allied aircraft carriers and the Ryukyu Islands also regularly struck targets in Japan during 1945 in preparation for Operation Downfall. Firebombing switched to smaller cities, with populations ranging from 60,000 to 350,000. According to Yuki Tanaka, the U.S. fire - bombed over a hundred Japanese towns and cities. These raids were also devastating. The Japanese military was unable to stop the Allied attacks and the country 's civil defense preparations proved inadequate. Japanese fighters and antiaircraft guns had difficulty engaging bombers flying at high altitude. From April 1945, the Japanese interceptors also had to face American fighter escorts based on Iwo Jima and Okinawa. That month, the Imperial Japanese Army Air Service and Imperial Japanese Navy Air Service stopped attempting to intercept the air raids in order to preserve fighter aircraft to counter the expected invasion. By mid-1945 the Japanese only occasionally scrambled aircraft to intercept individual B - 29s conducting reconnaissance sorties over the country, in order to conserve supplies of fuel. By July 1945, the Japanese had stockpiled 1,156,000 US barrels (137,800,000 l; 36,400,000 US gal; 30,300,000 imp gal) of avgas for the invasion of Japan.
While the Japanese military decided to resume attacks on Allied bombers from late June, by this time there were too few operational fighters available for this change of tactics to hinder the Allied air raids. The discovery of nuclear fission by German chemists Otto Hahn and Fritz Strassmann in 1938, and its theoretical explanation by Lise Meitner and Otto Frisch, made the development of an atomic bomb a theoretical possibility. Fears that a German atomic bomb project would develop atomic weapons first, especially among scientists who were refugees from Nazi Germany and other fascist countries, were expressed in the Einstein - Szilard letter. This prompted preliminary research in the United States in late 1939. Progress was slow until the arrival of the British MAUD Committee report in late 1941, which indicated that only 5 -- 10 kilograms of isotopically enriched uranium - 235 was needed for a bomb instead of tons of un-enriched uranium and a neutron moderator (e.g. heavy water). Working in collaboration with the United Kingdom and Canada, with their respective projects Tube Alloys and Chalk River Laboratories, the Manhattan Project, under the direction of Major General Leslie R. Groves, Jr., of the U.S. Army Corps of Engineers, designed and built the first atomic bombs. Groves appointed J. Robert Oppenheimer to organize and head the project 's Los Alamos Laboratory in New Mexico, where bomb design work was carried out. Two types of bombs were eventually developed. Little Boy was a gun - type fission weapon that used uranium - 235, a rare isotope of uranium separated at the Clinton Engineer Works at Oak Ridge, Tennessee. The other, known as a Fat Man device (both types named by Robert Serber), was a more powerful and efficient, but more complicated, implosion - type nuclear weapon that used plutonium created in nuclear reactors at Hanford, Washington. A test implosion weapon, the gadget, was detonated at Trinity Site, on July 16, 1945, near Alamogordo, New Mexico. There was a Japanese nuclear weapon program, but it lacked the human, mineral and financial resources of the Manhattan Project, and never made much progress towards developing an atomic bomb. The 509th Composite Group was constituted on December 9, 1944, and activated on December 17, 1944, at Wendover Army Air Field, Utah, commanded by Colonel Paul Tibbets. Tibbets was assigned to organize and command a combat group to develop the means of delivering an atomic weapon against targets in Germany and Japan. Because the flying squadrons of the group consisted of both bomber and transport aircraft, the group was designated as a "composite '' rather than a "bombardment '' unit. Working with the Manhattan Project at Los Alamos, Tibbets selected Wendover for his training base over Great Bend, Kansas, and Mountain Home, Idaho, because of its remoteness. Each bombardier completed at least 50 practice drops of inert or conventional explosive pumpkin bombs and Tibbets declared his group combat - ready. The 509th Composite Group had an authorized strength of 225 officers and 1,542 enlisted men, almost all of whom eventually deployed to Tinian. In addition to its authorized strength, the 509th had attached to it on Tinian 51 civilian and military personnel from Project Alberta, known as the 1st Technical Detachment. The 509th Composite Group 's 393d Bombardment Squadron was equipped with 15 Silverplate B - 29s. 
These aircraft were specially adapted to carry nuclear weapons, and were equipped with fuel - injected engines, Curtiss Electric reversible - pitch propellers, pneumatic actuators for rapid opening and closing of bomb bay doors and other improvements. The ground support echelon of the 509th Composite Group moved by rail on April 26, 1945, to its port of embarkation at Seattle, Washington. On May 6 the support elements sailed on the SS Cape Victory for the Marianas, while group materiel was shipped on the SS Emile Berliner. The Cape Victory made brief port calls at Honolulu and Eniwetok but the passengers were not permitted to leave the dock area. An advance party of the air echelon, consisting of 29 officers and 61 enlisted men, flew by C - 54 to North Field on Tinian between May 15 and May 22. There were also two representatives from Washington, D.C., Brigadier General Thomas Farrell, the deputy commander of the Manhattan Project, and Rear Admiral William R. Purnell of the Military Policy Committee, who were on hand to decide higher policy matters on the spot. Along with Captain William S. Parsons, the commander of Project Alberta, they became known as the "Tinian Joint Chiefs ''. In April 1945, Marshall asked Groves to nominate specific targets for bombing for final approval by himself and Stimson. Groves formed a Target Committee, chaired by himself, that included Farrell, Major John A. Derry, Colonel William P. Fisher, Joyce C. Stearns and David M. Dennison from the USAAF; and scientists John von Neumann, Robert R. Wilson and William Penney from the Manhattan Project. The Target Committee met in Washington on April 27; at Los Alamos on May 10, where it was able to talk to the scientists and technicians there; and finally in Washington on May 28, where it was briefed by Tibbets and Commander Frederick Ashworth from Project Alberta, and the Manhattan Project 's scientific advisor, Richard C. Tolman. The Target Committee nominated five targets: Kokura, the site of one of Japan 's largest munitions plants; Hiroshima, an embarkation port and industrial center that was the site of a major military headquarters; Yokohama, an urban center for aircraft manufacture, machine tools, docks, electrical equipment and oil refineries; Niigata, a port with industrial facilities including steel and aluminum plants and an oil refinery; and Kyoto, a major industrial center. The target selection was subject to the following criteria: the target was larger than 3 mi (4.8 km) in diameter and was an important target in a large urban area; the blast would create effective damage; and the target was unlikely to be attacked by August 1945. These cities were largely untouched during the nightly bombing raids, and the Army Air Forces agreed to leave them off the target list so accurate assessment of the weapon could be made. Hiroshima was described as "an important army depot and port of embarkation in the middle of an urban industrial area. It is a good radar target and it is such a size that a large part of the city could be extensively damaged. There are adjacent hills which are likely to produce a focusing effect which would considerably increase the blast damage. Due to rivers it is not a good incendiary target. '' The Target Committee stated that "It was agreed that psychological factors in the target selection were of great importance. Two aspects of this are (1) obtaining the greatest psychological effect against Japan and (2) making the initial use sufficiently spectacular for the importance of the weapon to be internationally recognized when publicity on it is released.
Kyoto had the advantage of being an important center for military industry, as well as an intellectual center and hence a population better able to appreciate the significance of the weapon. The Emperor 's palace in Tokyo has a greater fame than any other target but is of least strategic value. '' Edwin O. Reischauer, a Japan expert for the U.S. Army Intelligence Service, was incorrectly said to have prevented the bombing of Kyoto. In his autobiography, Reischauer specifically refuted this claim: ... the only person deserving credit for saving Kyoto from destruction is Henry L. Stimson, the Secretary of War at the time, who had known and admired Kyoto ever since his honeymoon there several decades earlier. On May 30, Stimson asked Groves to remove Kyoto from the target list due to its historical, religious and cultural significance, but Groves pointed to its military and industrial significance. Stimson then approached President Harry S. Truman about the matter. Truman agreed with Stimson, and Kyoto was temporarily removed from the target list. Groves attempted to restore Kyoto to the target list in July, but Stimson remained adamant. On July 25, Nagasaki was put on the target list in place of Kyoto. In early May 1945, the Interim Committee was created by Stimson at the urging of leaders of the Manhattan Project and with the approval of Truman to advise on matters pertaining to nuclear energy. During the meetings on May 31 and June 1, scientist Ernest Lawrence suggested giving the Japanese a non-combat demonstration. Arthur Compton later recalled that: It was evident that everyone would suspect trickery. If a bomb were exploded in Japan with previous notice, the Japanese air power was still adequate to give serious interference. An atomic bomb was an intricate device, still in the developmental stage. Its operation would be far from routine. If during the final adjustments of the bomb the Japanese defenders should attack, a faulty move might easily result in some kind of failure. Such an end to an advertised demonstration of power would be much worse than if the attempt had not been made. It was now evident that when the time came for the bombs to be used we should have only one of them available, followed afterwards by others at all - too - long intervals. We could not afford the chance that one of them might be a dud. If the test were made on some neutral territory, it was hard to believe that Japan 's determined and fanatical military men would be impressed. If such an open test were made first and failed to bring surrender, the chance would be gone to give the shock of surprise that proved so effective. On the contrary, it would make the Japanese ready to interfere with an atomic attack if they could. Though the possibility of a demonstration that would not destroy human lives was attractive, no one could suggest a way in which it could be made so convincing that it would be likely to stop the war. The possibility of a demonstration was raised again in the Franck Report issued by physicist James Franck on June 11, and the Scientific Advisory Panel rejected his report on June 16, saying that "we can propose no technical demonstration likely to bring an end to the war; we see no acceptable alternative to direct military use. '' Franck then took the report to Washington, D.C., where the Interim Committee met on June 21 to re-examine its earlier conclusions; but it reaffirmed that there was no alternative to the use of the bomb on a military target. Like Compton, many U.S.
officials and scientists argued that a demonstration would sacrifice the shock value of the atomic attack, and the Japanese could deny the atomic bomb was lethal, making the mission less likely to produce surrender. Allied prisoners of war might be moved to the demonstration site and be killed by the bomb. They also worried that the bomb might be a dud since the Trinity test was of a stationary device, not an air - dropped bomb. In addition, only two bombs would be available at the start of August, although more were in production, and they cost billions of dollars, so using one for a demonstration would be expensive. For several months, the U.S. had dropped more than 63 million leaflets across Japan warning civilians of air raids. Many Japanese cities suffered terrible damage from aerial bombings; some were as much as 97 % destroyed. LeMay thought that leaflets would increase the psychological impact of bombing, and reduce the international stigma of area - bombing cities. Even with the warnings, Japanese opposition to the war remained ineffective. In general, the Japanese regarded the leaflet messages as truthful, with many Japanese choosing to leave major cities. The leaflets caused such concern among Japan 's leaders that the government ordered the arrest of anyone caught in possession of one. Leaflet texts were prepared by recent Japanese prisoners of war because they were thought to be the best choice "to appeal to their compatriots ''. In preparation for dropping an atomic bomb on Hiroshima, the Oppenheimer - led Scientific Panel of the Interim Committee decided against a demonstration bomb and against a special leaflet warning, in both cases because of the uncertainty of a successful detonation and the wish to maximize shock in the leadership. No warning was given to Hiroshima that a new and much more destructive bomb was going to be dropped. Various sources give conflicting information about when the last leaflets were dropped on Hiroshima prior to the atomic bomb. Robert Jay Lifton writes that it was July 27, and Theodore H. McNelly that it was July 3. The USAAF history notes that eleven cities were targeted with leaflets on July 27, but Hiroshima was not one of them, and that there were no leaflet sorties on July 30. Leaflet sorties were undertaken on August 1 and August 4. Hiroshima may have been leafleted in late July or early August, as survivor accounts talk about a delivery of leaflets a few days before the atomic bomb was dropped. Three versions of a leaflet were printed, each listing 11 or 12 cities targeted for firebombing; a total of 33 cities were listed. The leaflet 's Japanese text warned "... we can not promise that only these cities will be among those attacked... '' Hiroshima was not among the cities listed. Under the 1943 Quebec Agreement with the United Kingdom, nuclear weapons would not be used against another country without mutual consent. Stimson therefore had to obtain British permission. A meeting of the Combined Policy Committee was held at the Pentagon on July 4, 1945. Stimson and Vannevar Bush represented the United States; Britain was represented by the head of the British Joint Staff Mission, Field Marshal Sir Henry Maitland Wilson; and Canada by Clarence D. Howe. In addition, Harvey Bundy, Sir James Chadwick, Groves, Lord Halifax, George L. Harrison, and Roger Makins were all present at the meeting by invitation.
Wilson announced that the British government concurred with the use of nuclear weapons against Japan, which would be officially recorded as a decision of the Combined Policy Committee. As the release of information to third parties was also controlled by the Quebec Agreement, discussion then turned to what scientific details would be revealed in the press announcement of the bombing. The meeting also considered what Truman could reveal to Joseph Stalin, the leader of the Soviet Union, at the upcoming Potsdam Conference. Orders for the attack were issued to General Carl Spaatz on July 25 under the signature of General Thomas T. Handy, the acting Chief of Staff, since Marshall was at the Potsdam Conference with Truman. That day, Truman noted in his diary that: This weapon is to be used against Japan between now and August 10th. I have told the Sec. of War, Mr. Stimson, to use it so that military objectives and soldiers and sailors are the target and not women and children. Even if the Japs are savages, ruthless, merciless and fanatic, we as the leader of the world for the common welfare can not drop that terrible bomb on the old capital (Kyoto) or the new (Tokyo). He and I are in accord. The target will be a purely military one. The successful Trinity test of July 16 had exceeded expectations. On July 26, Allied leaders issued the Potsdam Declaration outlining terms of surrender for Japan. It was presented as an ultimatum and stated that without a surrender, the Allies would attack Japan, resulting in "the inevitable and complete destruction of the Japanese armed forces and just as inevitably the utter devastation of the Japanese homeland ''. The atomic bomb was not mentioned in the communiqué. On July 28, Japanese papers reported that the declaration had been rejected by the Japanese government. That afternoon, Prime Minister Suzuki Kantarō declared at a press conference that the Potsdam Declaration was no more than a rehash (yakinaoshi) of the Cairo Declaration and that the government intended to ignore it (mokusatsu, "kill by silence ''). The statement was taken by both Japanese and foreign papers as a clear rejection of the declaration. Emperor Hirohito, who was waiting for a Soviet reply to non-committal Japanese peace feelers, made no move to change the government position. Japan 's willingness to surrender remained conditional on four terms: the preservation of the imperial institution; that Japan not be occupied; that the Japanese armed forces be disbanded voluntarily; and that war criminals be prosecuted by Japanese courts. At Potsdam, Truman agreed to a request from Winston Churchill that Britain be represented when the atomic bomb was dropped. William Penney and Group Captain Leonard Cheshire were sent to Tinian, but found that LeMay would not let them accompany the mission. All they could do was send a strongly worded signal back to Wilson. The Little Boy bomb, except for the uranium payload, was ready at the beginning of May 1945. The uranium - 235 projectile was completed on June 15, and the target insert on July 24. The projectile and bomb pre-assemblies (partly assembled bombs without the fissile components) left Hunters Point Naval Shipyard, California, on July 16 aboard the cruiser USS Indianapolis, and arrived on Tinian on July 26. The target inserts followed by air on July 30.
The first plutonium core, along with its polonium - beryllium urchin initiator, was transported in the custody of Project Alberta courier Raemer Schreiber in a magnesium field carrying case designed for the purpose by Philip Morrison. Magnesium was chosen because it does not act as a tamper. The core departed from Kirtland Army Air Field on a C - 54 transport aircraft of the 509th Composite Group 's 320th Troop Carrier Squadron on July 26, and arrived at North Field on July 28. Three Fat Man high - explosive pre-assemblies, designated F31, F32, and F33, were picked up at Kirtland on July 28 by three B - 29s, two from the 393d Bombardment Squadron plus one from the 216th Army Air Force Base Unit, and transported to North Field, arriving on August 2. At the time of its bombing, Hiroshima was a city of both industrial and military significance. A number of military units were located nearby, the most important of which was the headquarters of Field Marshal Shunroku Hata 's Second General Army, which commanded the defense of all of southern Japan, and was located in Hiroshima Castle. Hata 's command consisted of some 400,000 men, most of whom were on Kyushu where an Allied invasion was correctly anticipated. Also present in Hiroshima were the headquarters of the 59th Army, the 5th Division and the 224th Division, a recently formed mobile unit. The city was defended by five batteries of 7 - cm and 8 - cm (2.8 and 3.1 inch) anti-aircraft guns of the 3rd Anti-Aircraft Division, including units from the 121st and 122nd Anti-Aircraft Regiments and the 22nd and 45th Separate Anti-Aircraft Battalions. In total, an estimated 40,000 Japanese military personnel were stationed in the city. Hiroshima was a minor supply and logistics base for the Japanese military, but it also had large stockpiles of military supplies. The city was also a communications center, a key port for shipping and an assembly area for troops. It was a beehive of war industry, manufacturing parts for planes and boats, for bombs, rifles, and handguns; children were shown how to construct and hurl gasoline bombs and the wheelchair - bound and bedridden were assembling booby traps to be planted on the beaches of Kyushu. A new slogan appeared on the walls of Hiroshima: "FORGET SELF! ALL OUT FOR YOUR COUNTRY! '' After Kyoto, it was the largest city in Japan still undamaged by air raids, because it lacked the aircraft manufacturing industry that was the XXI Bomber Command 's priority target. On July 3, the Joint Chiefs of Staff placed it off limits to bombers, along with Kokura, Niigata and Kyoto. The center of the city contained several reinforced concrete buildings and lighter structures. Outside the center, the area was congested by a dense collection of small timber workshops set among Japanese houses. A few larger industrial plants lay near the outskirts of the city. The houses were constructed of timber with tile roofs, and many of the industrial buildings were also built around timber frames. The city as a whole was highly susceptible to fire damage. The population of Hiroshima had reached a peak of over 381,000 earlier in the war, but prior to the atomic bombing the population had steadily decreased because of a systematic evacuation ordered by the Japanese government. At the time of the attack, the population was approximately 340,000 -- 350,000. Residents wondered why Hiroshima had been spared destruction by firebombing. Some speculated that the city was to be saved for U.S.
occupation headquarters, others thought perhaps their relatives in Hawaii and California had petitioned the U.S. government to avoid bombing Hiroshima. More realistic city officials had ordered buildings torn down to create long, straight firebreaks, beginning in 1944. Firebreaks continued to be expanded and extended up to the morning of August 6, 1945. Hiroshima was the primary target of the first nuclear bombing mission on August 6, with Kokura and Nagasaki as alternative targets. Its crew fully briefed under the terms of Operations Order No. 35, the 393d Bombardment Squadron B - 29 Enola Gay, piloted by Tibbets, took off from North Field, Tinian, about six hours ' flight time from Japan. The Enola Gay (named after Tibbets ' mother) was accompanied by two other B - 29s. The Great Artiste, commanded by Major Charles Sweeney, carried instrumentation, and a then - nameless aircraft later called Necessary Evil, commanded by Captain George Marquardt, served as the photography aircraft. After leaving Tinian, the aircraft made their way separately to Iwo Jima, where Tibbets rendezvoused with Sweeney and Marquardt at 05: 55 at 9,200 feet (2,800 m), and set course for Japan. The aircraft arrived over the target in clear visibility at 31,060 feet (9,470 m). Parsons, who was in command of the mission, armed the bomb during the flight to minimize the risks during takeoff. He had witnessed four B - 29s crash and burn at takeoff, and feared that a nuclear explosion would occur if a B - 29 crashed with an armed Little Boy on board. His assistant, Second Lieutenant Morris R. Jeppson, removed the safety devices 30 minutes before reaching the target area. During the night of August 5 -- 6, Japanese early warning radar detected the approach of numerous American aircraft headed for the southern part of Japan. Radar detected 65 bombers headed for Saga, 102 bound for Maebashi, 261 en route to Nishinomiya, 111 headed for Ube and 66 bound for Imabari. An alert was given and radio broadcasting stopped in many cities, among them Hiroshima. The all - clear was sounded in Hiroshima at 00: 05. About an hour before the bombing, the air raid alert was sounded again, as Straight Flush flew over the city. It broadcast a short message which was picked up by Enola Gay. It read: "Cloud cover less than 3 / 10th at all altitudes. Advice: bomb primary. '' The all - clear was sounded over Hiroshima again at 07: 09. At 08: 09, Tibbets started his bomb run and handed control over to his bombardier, Major Thomas Ferebee. The release at 08: 15 (Hiroshima time) went as planned, and the Little Boy containing about 64 kg (141 lb) of uranium - 235 took 44.4 seconds to fall from the aircraft flying at about 31,000 feet (9,400 m) to a detonation height of about 1,900 feet (580 m) above the city. Enola Gay traveled 11.5 mi (18.5 km) before it felt the shock waves from the blast. Due to crosswind, the bomb missed the aiming point, the Aioi Bridge, by approximately 800 ft (240 m) and detonated directly over Shima Surgical Clinic at 34 ° 23 ′ 41 '' N, 132 ° 27 ′ 17 '' E. It released the equivalent energy of 16 ± 2 kilotons of TNT (67 TJ). The weapon was considered very inefficient, with only 1.7 % of its material fissioning. The radius of total destruction was about 1 mile (1.6 km), with resulting fires across 4.4 square miles (11 km²). People on the ground reported seeing a pika (ピカ) -- a brilliant flash of light -- followed by a don (ドン) -- a loud booming sound.
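The quoted fall time and efficiency can be sanity - checked with back - of - the - envelope arithmetic. The following is an illustrative sketch only, not taken from the sources above; it assumes a recoverable energy of roughly 180 MeV per fission (a common textbook value) and 1 kt of TNT = 4.184 × 10¹² J:

\[ E_{\mathrm{per\,kg\ of\ {}^{235}U}} \approx \frac{6.022 \times 10^{23}}{0.235\ \mathrm{kg/mol}} \times 180\ \mathrm{MeV} \approx 7.4 \times 10^{13}\ \mathrm{J} \approx 17.6\ \mathrm{kt} \]
\[ m_{\mathrm{fissioned}} \approx \frac{16\ \mathrm{kt}}{17.6\ \mathrm{kt/kg}} \approx 0.9\ \mathrm{kg}, \qquad \mathrm{efficiency} \approx \frac{0.9\ \mathrm{kg}}{64\ \mathrm{kg}} \approx 1.4\,\% \]

Given the ± 2 kt uncertainty in the yield and the spread of energy - per - fission values used in the literature, this is consistent with the quoted figure of about 1.7 %. The 44.4 - second fall time is similarly plausible: a drag - free drop through the roughly 8,900 m (29,100 ft) between release and detonation altitude would take \( t = \sqrt{2d/g} \approx 42.6\ \mathrm{s} \), and air resistance lengthens this slightly.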
Some 70,000 -- 80,000 people, or around 30 % of the population of Hiroshima, were killed by the blast and resultant firestorm, and another 70,000 injured. Perhaps as many as 20,000 Japanese military personnel were killed. Enola Gay stayed over the target area for two minutes and was ten miles away when the bomb detonated. Only Tibbets, Parsons, and Ferebee knew of the nature of the weapon; the others on the bomber were only told to expect a blinding flash and given black goggles. "It was hard to believe what we saw '', Tibbets told reporters, while Parsons said "the whole thing was tremendous and awe - inspiring... the men aboard with me gasped ' My God ' ''. He and Tibbets compared the shockwave to "a close burst of ack - ack fire ''. Some of the reinforced concrete buildings in Hiroshima had been very strongly constructed because of the earthquake danger in Japan, and their framework did not collapse even though they were fairly close to the blast center. Since the bomb detonated in the air, the blast was directed more downward than sideways, which was largely responsible for the survival of the Prefectural Industrial Promotional Hall, now commonly known as the Genbaku (A-bomb) dome. This building was designed and built by the Czech architect Jan Letzel, and was only 150 m (490 ft) from ground zero. The ruin was named Hiroshima Peace Memorial and was made a UNESCO World Heritage Site in 1996 over the objections of the United States and China, which expressed reservations on the grounds that other Asian nations were the ones who suffered the greatest loss of life and property, and a focus on Japan lacked historical perspective. The U.S. surveys estimated that 4.7 square miles (12 km²) of the city were destroyed. Japanese officials determined that 69 % of Hiroshima 's buildings were destroyed and another 6 -- 7 % damaged. The bombing started fires that spread rapidly through timber and paper homes. As in other Japanese cities, the firebreaks proved ineffective. The closest known survivor, Eizō Nomura, was in the basement of a reinforced concrete building (which remained in use as the Rest House after the war) only 170 metres (560 ft) from ground zero (the hypocenter) at the time of the attack. He lived into his 80s. Akiko Takakura was among the closest survivors to the hypocenter of the blast. She had been in the solidly built Bank of Hiroshima, only 300 meters (980 ft) from ground zero, at the time of the attack. Over 90 % of the doctors and 93 % of the nurses in Hiroshima were killed or injured -- most had been in the downtown area which received the greatest damage. The hospitals were destroyed or heavily damaged. Only one doctor, Terufumi Sasaki, remained on duty at the Red Cross Hospital. Nonetheless, by early afternoon the police and volunteers had established evacuation centres at hospitals, schools and tram stations, and a morgue was established in the Asano library. Most elements of the Japanese Second General Army headquarters were at physical training on the grounds of Hiroshima Castle, barely 900 yards (820 m) from the hypocenter. The attack killed 3,243 troops on the parade ground. The communications room of Chugoku Military District Headquarters, which was responsible for issuing and lifting air raid warnings, was in a semi-basement in the castle.
Yoshie Oka, a Hijiyama Girls High School student who had been mobilized to serve as a communications officer, had just sent a message that the alarm had been issued for Hiroshima and neighboring Yamaguchi when the bomb exploded. She used a special phone to inform Fukuyama Headquarters (some 100 km away) that "Hiroshima has been attacked by a new type of bomb. The city is in a state of near - total destruction. '' Since Mayor Senkichi Awaya had been killed while eating breakfast with his son and granddaughter at the mayoral residence, Field Marshal Hata, who was only slightly wounded, took over the administration of the city and coordinated relief efforts. Many of his staff had been killed or fatally wounded, including a Korean prince of the Joseon Dynasty, Yi Wu, who was serving as a lieutenant colonel in the Japanese Army. Hata 's senior surviving staff officer was the wounded Colonel Kumao Imoto, who acted as his chief of staff. Soldiers from the undamaged Hiroshima Ujina Harbor used suicide boats, intended to repel the American invasion, to collect the wounded and take them down the rivers to the military hospital at Ujina. Trucks and trains brought in relief supplies and evacuated survivors from the city. Twelve American airmen were imprisoned at the Chugoku Military Police Headquarters, located about 1,300 feet (400 m) from the hypocenter of the blast. Most died instantly, although two were reported to have been executed by their captors, and two prisoners badly injured by the bombing were left next to the Aioi Bridge by the Kempei Tai, where they were stoned to death. Eight U.S. prisoners of war killed as part of the medical experiments program at Kyushu University were falsely reported by Japanese authorities as having been killed in the atomic blast, as part of an attempted cover - up. The Tokyo control operator of the Japan Broadcasting Corporation noticed that the Hiroshima station had gone off the air. He tried to re-establish his program by using another telephone line, but it too had failed. About 20 minutes later the Tokyo railroad telegraph center realized that the main line telegraph had stopped working just north of Hiroshima. From some small railway stops within 16 km (10 mi) of the city came unofficial and confused reports of a terrible explosion in Hiroshima. All these reports were transmitted to the headquarters of the Imperial Japanese Army General Staff. Military bases repeatedly tried to call the Army Control Station in Hiroshima. The complete silence from that city puzzled the General Staff; they knew that no large enemy raid had occurred and that no sizable store of explosives was in Hiroshima at that time. A young officer was instructed to fly immediately to Hiroshima, to land, survey the damage, and return to Tokyo with reliable information for the staff. It was felt that nothing serious had taken place and that the explosion was just a rumor. The staff officer went to the airport and took off for the southwest. After flying for about three hours, while still nearly 160 km (100 mi) from Hiroshima, he and his pilot saw a great cloud of smoke from the bomb. After circling the city in order to survey the damage, they landed south of the city, where the staff officer, after reporting to Tokyo, began to organize relief measures. Tokyo 's first indication that the city had been destroyed by a new type of bomb came from President Truman 's announcement of the strike, sixteen hours later.
After the Hiroshima bombing, Truman issued a statement announcing the use of the new weapon. He stated, "We may be grateful to Providence '' that the German atomic bomb project had failed, and that the United States and its allies had "spent two billion dollars on the greatest scientific gamble in history -- and won ''. Truman then warned Japan: "If they do not now accept our terms, they may expect a rain of ruin from the air, the like of which has never been seen on this earth. Behind this air attack will follow sea and land forces in such numbers and power as they have not yet seen and with the fighting skill of which they are already well aware. '' This was a widely broadcast speech picked up by Japanese news agencies. By August 8, the 50,000 - watt standard wave station on Saipan, the OWI radio station, was broadcasting a similar message about Hiroshima to Japan every 15 minutes, stating that more Japanese cities would face a similar fate in the absence of immediate acceptance of the terms of the Potsdam Declaration, and emphatically urging civilians to evacuate major cities. Radio Japan, which continued to extol victory for Japan by never surrendering, had informed the Japanese of the destruction of Hiroshima by a single bomb. Prime Minister Suzuki felt compelled to meet the Japanese press, to whom he reiterated his government 's commitment to ignore the Allies ' demands and fight on. The Japanese government did not react. Emperor Hirohito, the government, and the war council considered four conditions for surrender: the preservation of the kokutai (Imperial institution and national polity), assumption by the Imperial Headquarters of responsibility for disarmament and demobilization, no occupation of the Japanese Home Islands, Korea, or Formosa, and delegation of the punishment of war criminals to the Japanese government. Soviet Foreign Minister Vyacheslav Molotov informed Tokyo of the Soviet Union 's unilateral abrogation of the Soviet -- Japanese Neutrality Pact on August 5. At two minutes past midnight on August 9, Tokyo time, Soviet infantry, armor, and air forces launched the Manchurian Strategic Offensive Operation. Four hours later, word reached Tokyo of the Soviet Union 's official declaration of war. The senior leadership of the Japanese Army began preparations to impose martial law on the nation, with the support of Minister of War Korechika Anami, in order to stop anyone attempting to make peace. On August 7, a day after Hiroshima was destroyed, Dr. Yoshio Nishina and other atomic physicists arrived in the city and carefully examined the damage. They then went back to Tokyo and told the cabinet that Hiroshima was indeed destroyed by an atomic bomb. Admiral Soemu Toyoda, the Chief of the Naval General Staff, estimated that no more than one or two additional bombs could be readied, so the Japanese leadership decided to endure the remaining attacks, acknowledging "there would be more destruction but the war would go on ''. American Magic codebreakers intercepted the cabinet 's messages. Purnell, Parsons, Tibbets, Spaatz, and LeMay met on Guam that same day to discuss what should be done next. Since there was no indication of Japan surrendering, they decided to proceed with dropping another bomb. Parsons said that Project Alberta would have it ready by August 11, but Tibbets pointed to weather reports indicating poor flying conditions on that day due to a storm, and asked if the bomb could be readied by August 9. Parsons agreed to try to do so.
The city of Nagasaki had been one of the largest seaports in southern Japan, and was of great wartime importance because of its wide - ranging industrial activity, including the production of ordnance, ships, military equipment, and other war materials. The four largest companies in the city were the Mitsubishi Shipyards, Electrical Shipyards, Arms Plant, and Steel and Arms Works, which together employed about 90 % of the city 's labor force and accounted for 90 % of the city 's industry. Although an important industrial city, Nagasaki had been spared from firebombing because its geography made it difficult to locate at night with AN / APQ - 13 radar. Unlike the other target cities, Nagasaki had not been placed off limits to bombers by the Joint Chiefs of Staff 's July 3 directive, and was bombed on a small scale five times. During one of these raids, on August 1, a number of conventional high - explosive bombs were dropped on the city. A few hit the shipyards and dock areas in the southwest portion of the city, and several hit the Mitsubishi Steel and Arms Works. By early August, the city was defended by the 134th Anti-Aircraft Regiment of the 4th Anti-Aircraft Division, with four batteries of 7 cm (2.8 in) anti-aircraft guns and two searchlight batteries. In contrast to Hiroshima, almost all of the buildings were of old - fashioned Japanese construction, consisting of timber or timber - framed buildings with timber walls (with or without plaster) and tile roofs. Many of the smaller industries and business establishments were also situated in buildings of timber or other materials not designed to withstand explosions. Nagasaki had been permitted to grow for many years without conforming to any definite city zoning plan; residences were erected adjacent to factory buildings and to each other almost as closely as possible throughout the entire industrial valley. On the day of the bombing, an estimated 263,000 people were in Nagasaki, including 240,000 Japanese residents, 10,000 Korean residents, 2,500 conscripted Korean workers, 9,000 Japanese soldiers, 600 conscripted Chinese workers, and 400 Allied prisoners of war in a camp to the north of Nagasaki. Responsibility for the timing of the second bombing was delegated to Tibbets. Scheduled for August 11 against Kokura, the raid was moved earlier by two days to avoid a five - day period of bad weather forecast to begin on August 10. Three bomb pre-assemblies had been transported to Tinian, labeled F - 31, F - 32, and F - 33 on their exteriors. On August 8, a dress rehearsal was conducted off Tinian by Sweeney using Bockscar as the drop airplane. Assembly F - 33 was expended testing the components and F - 31 was designated for the August 9 mission. At 03: 49 on the morning of August 9, 1945, Bockscar, flown by Sweeney 's crew, took off carrying Fat Man, with Kokura as the primary target and Nagasaki as the secondary target. The mission plan for the second attack was nearly identical to that of the Hiroshima mission, with two B - 29s flying an hour ahead as weather scouts and two additional B - 29s in Sweeney 's flight for instrumentation and photographic support of the mission. Sweeney took off with his weapon already armed but with the electrical safety plugs still engaged. During pre-flight inspection of Bockscar, the flight engineer notified Sweeney that an inoperative fuel transfer pump made it impossible to use 640 US gallons (2,400 l; 530 imp gal) of fuel carried in a reserve tank.
This fuel would still have to be carried all the way to Japan and back, consuming still more fuel. Replacing the pump would take hours; moving the Fat Man to another aircraft might take just as long and was dangerous as well, as the bomb was live. Tibbets and Sweeney therefore elected to have Bockscar continue the mission. This time Penney and Cheshire were allowed to accompany the mission, flying as observers on the third plane, Big Stink, flown by the group 's operations officer, Major James I. Hopkins, Jr. Observers aboard the weather planes reported both targets clear. When Sweeney 's aircraft arrived at the assembly point for his flight off the coast of Japan, Big Stink failed to make the rendezvous. According to Cheshire, Hopkins was at varying heights including 9,000 feet (2,700 m) higher than he should have been, and was not flying tight circles over Yakushima as previously agreed with Sweeney and Captain Frederick C. Bock, who was piloting the support B - 29 The Great Artiste. Instead, Hopkins was flying 40 - mile (64 km) dogleg patterns. Though ordered not to circle longer than fifteen minutes, Sweeney continued to wait for Big Stink, at the urging of Ashworth, the plane 's weaponeer, who was in command of the mission. After exceeding the original departure time limit by a half - hour, Bockscar, accompanied by The Great Artiste, proceeded to Kokura, thirty minutes away. The delay at the rendezvous had resulted in clouds and drifting smoke over Kokura from fires started by a major firebombing raid by 224 B - 29s on nearby Yahata the previous day. Additionally, the Yawata Steel Works intentionally burned coal tar, to produce black smoke. The clouds and smoke resulted in 70 % of the area over Kokura being covered, obscuring the aiming point. Three bomb runs were made over the next 50 minutes, burning fuel and exposing the aircraft repeatedly to the heavy defenses of Yawata, but the bombardier was unable to drop visually. By the time of the third bomb run, Japanese antiaircraft fire was getting close, and Second Lieutenant Jacob Beser, who was monitoring Japanese communications, reported activity on the Japanese fighter direction radio bands. After three runs over the city, and with fuel running low because of the failed fuel pump, they headed for their secondary target, Nagasaki. Fuel consumption calculations made en route indicated that Bockscar had insufficient fuel to reach Iwo Jima and would be forced to divert to Okinawa, which had become entirely Allied - occupied territory only six weeks earlier. After initially deciding that if Nagasaki were obscured on their arrival the crew would carry the bomb to Okinawa and dispose of it in the ocean if necessary, Ashworth ruled that a radar approach would be used if the target was obscured. At about 07: 50 Japanese time, an air raid alert was sounded in Nagasaki, but the "all clear '' signal was given at 08: 30. When only two B - 29 Superfortresses were sighted at 10: 53, the Japanese apparently assumed that the planes were only on reconnaissance and no further alarm was given. A few minutes later at 11: 00, The Great Artiste dropped instruments attached to three parachutes. These instruments also contained an unsigned letter to Professor Ryokichi Sagane, a physicist at the University of Tokyo who studied with three of the scientists responsible for the atomic bomb at the University of California, Berkeley, urging him to tell the public about the danger involved with these weapons of mass destruction. 
The messages were found by military authorities but not turned over to Sagane until a month later. In 1949, one of the authors of the letter, Luis Alvarez, met with Sagane and signed the document. At 11: 01, a last - minute break in the clouds over Nagasaki allowed Bockscar 's bombardier, Captain Kermit Beahan, to visually sight the target as ordered. The Fat Man weapon, containing a core of about 6.4 kg (14 lb) of plutonium, was dropped over the city 's industrial valley at 32 ° 46 ′ 25 '' N, 129 ° 51 ′ 48 '' E. It exploded 47 seconds later at 1,650 ± 33 ft (503 ± 10 m), above a tennis court halfway between the Mitsubishi Steel and Arms Works in the south and the Nagasaki Arsenal in the north. This was nearly 3 km (1.9 mi) northwest of the planned hypocenter; the blast was confined to the Urakami Valley and a major portion of the city was protected by the intervening hills. The resulting explosion released the equivalent energy of 21 ± 2 kt (87.9 ± 8.4 TJ). The explosion generated temperatures inside the fireball estimated at 3,900 ° C (7,050 ° F) and winds that were estimated at over 1,000 km / h (620 mph). Big Stink spotted the explosion from a hundred miles away, and flew over to observe. Because of the delays in the mission and the inoperative fuel transfer pump, Bockscar did not have sufficient fuel to reach the emergency landing field at Iwo Jima, so Sweeney and Bock flew to Okinawa. Arriving there, Sweeney circled for 20 minutes trying to contact the control tower for landing clearance, finally concluding that his radio was faulty. Critically low on fuel, Bockscar barely made it to the runway at Okinawa 's Yontan Airfield. With enough fuel for only one landing attempt, Sweeney and his co-pilot, Charles Albury, brought Bockscar in at 150 miles per hour (240 km / h) instead of the normal 120 miles per hour (190 km / h), firing distress flares to alert the field of the uncleared landing. The number two engine died from fuel starvation as Bockscar began its final approach. Touching the runway hard, the heavy B - 29 slewed left towards a row of parked B - 24 bombers before the pilots managed to regain control. The B - 29 's reversible propellers were insufficient to slow the aircraft adequately, and with both pilots standing on the brakes, Bockscar made a swerving 90 - degree turn at the end of the runway to avoid running off its end. A second engine died from fuel exhaustion by the time the plane came to a stop. The flight engineer later measured fuel in the tanks and concluded that less than five minutes ' worth remained. Following the mission, there was confusion over the identification of the plane. The first eyewitness account by war correspondent William L. Laurence of The New York Times, who accompanied the mission aboard the aircraft piloted by Bock, reported that Sweeney was leading the mission in The Great Artiste. He also noted its "Victor '' number as 77, which was that of Bockscar. Laurence had interviewed Sweeney and his crew, and was aware that they referred to their airplane as The Great Artiste. Except for Enola Gay, none of the 393d 's B - 29s had yet had names painted on the noses, a fact which Laurence himself noted in his account. Unaware of the switch in aircraft, Laurence assumed Victor 77 was The Great Artiste, which was, in fact, Victor 89. Although the bomb was more powerful than the one used on Hiroshima, the effect was confined by hillsides to the narrow Urakami Valley.
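The same illustrative arithmetic used above for Little Boy (again an assumption - laden sketch, not taken from the sources, using roughly 180 MeV recoverable per fission) shows why the implosion design was regarded as far more efficient than the gun - type weapon despite its much smaller fissile load:

\[ E_{\mathrm{per\,kg\ of\ {}^{239}Pu}} \approx \frac{6.022 \times 10^{23}}{0.239\ \mathrm{kg/mol}} \times 180\ \mathrm{MeV} \approx 7.3 \times 10^{13}\ \mathrm{J} \approx 17.4\ \mathrm{kt} \]
\[ m_{\mathrm{fissioned}} \approx \frac{21\ \mathrm{kt}}{17.4\ \mathrm{kt/kg}} \approx 1.2\ \mathrm{kg}, \qquad \mathrm{efficiency} \approx \frac{1.2\ \mathrm{kg}}{6.4\ \mathrm{kg}} \approx 19\,\% \]

On these assumptions, Fat Man fissioned more material in absolute terms than Little Boy did from a core roughly one - tenth the mass, an efficiency about an order of magnitude higher.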
Of the 7,500 Japanese employees who worked inside the Mitsubishi Munitions plant, including mobilized (conscripted) students and regular workers, 6,200 were killed. Some 17,000 -- 22,000 others who worked in other war plants and factories in the city died as well. Casualty estimates for immediate deaths vary widely, ranging from 22,000 to 75,000. At least 35,000 -- 40,000 people were killed and 60,000 others injured. In the days and months following the explosion, more people died from bomb effects. Because of the presence of undocumented foreign workers, and a number of military personnel in transit, there are great discrepancies in the estimates of total deaths by the end of 1945; a range of 39,000 to 80,000 can be found in various studies. In contrast to Hiroshima 's military death toll, only 150 Japanese soldiers in Nagasaki were killed instantly, including thirty - six from the 134th AAA Regiment of the 4th AAA Division. At least eight known POWs died from the bombing and as many as 13 may have died, including a British prisoner of war, Royal Air Force Corporal Ronald Shaw, and seven Dutch POWs. One American POW, Joe Kieyoomia, was in Nagasaki at the time of the bombing but survived, reportedly having been shielded from the effects of the bomb by the concrete walls of his cell. There were 24 Australian POWs in Nagasaki, all of whom survived. The radius of total destruction was about 1 mi (1.6 km), followed by fires across the northern portion of the city to 2 mi (3.2 km) south of the bomb. About 58 % of the Mitsubishi Arms Plant was damaged, and about 78 % of the Mitsubishi Steel Works. The Mitsubishi Electric Works suffered only 10 % structural damage as it was on the border of the main destruction zone. The Nagasaki Arsenal was destroyed in the blast. Although many fires likewise burnt following the bombing, no firestorm developed in Nagasaki: in contrast to Hiroshima, where sufficient fuel density was available, the damaged areas did not furnish enough fuel to generate the phenomenon. Instead, the ambient wind at the time pushed the fire spread along the valley. Groves expected to have another atomic bomb ready for use on August 19, with three more in September and a further three in October. On August 10, he sent a memorandum to Marshall in which he wrote that "the next bomb... should be ready for delivery on the first suitable weather after 17 or 18 August. '' On the same day, Marshall endorsed the memo with the comment, "It is not to be released over Japan without express authority from the President. '' Truman had secretly requested this on August 10. This modified the previous order that the target cities were to be attacked with atomic bombs "as made ready ''. There was already discussion in the War Department about conserving the bombs then in production for Operation Downfall. "The problem now (August 13) is whether or not, assuming the Japanese do not capitulate, to continue dropping them every time one is made and shipped out there or whether to hold them... and then pour them all on in a reasonably short time. Not all in one day, but over a short period. And that also takes into consideration the target that we are after. In other words, should we not concentrate on targets that will be of the greatest assistance to an invasion rather than industry, morale, psychology, and the like? Nearer the tactical use rather than other use.
'' Two more Fat Man assemblies were readied, and scheduled to leave Kirtland Field for Tinian on August 11 and 14, and Tibbets was ordered by LeMay to return to Albuquerque, New Mexico, to collect them. At Los Alamos, technicians worked 24 hours straight to cast another plutonium core. Although cast, it still needed to be pressed and coated, which would take until August 16. Therefore, it could have been ready for use on August 19. Unable to reach Marshall, Groves ordered on his own authority on August 13 that the core should not be shipped. Until August 9, Japan 's war council still insisted on its four conditions for surrender. On that day Hirohito ordered Kōichi Kido to "quickly control the situation... because the Soviet Union has declared war against us. '' He then held an Imperial conference during which he authorized Foreign Minister Shigenori Tōgō to notify the Allies that Japan would accept their terms on one condition, that the declaration "does not comprise any demand which prejudices the prerogatives of His Majesty as a Sovereign ruler. '' On August 12, the Emperor informed the imperial family of his decision to surrender. One of his uncles, Prince Asaka, then asked whether the war would be continued if the kokutai could not be preserved. Hirohito simply replied, "Of course. '' As the Allied terms seemed to leave intact the principle of the preservation of the Throne, Hirohito recorded on August 14 his capitulation announcement, which was broadcast to the Japanese nation the next day despite a short rebellion by militarists opposed to the surrender. In his declaration, Hirohito referred to the atomic bombings: Moreover, the enemy now possesses a new and terrible weapon with the power to destroy many innocent lives and do incalculable damage. Should we continue to fight, not only would it result in an ultimate collapse and obliteration of the Japanese nation, but also it would lead to the total extinction of human civilization. Such being the case, how are we to save the millions of our subjects, or to atone ourselves before the hallowed spirits of our imperial ancestors? This is the reason why we have ordered the acceptance of the provisions of the joint declaration of the powers. In his "Rescript to the Soldiers and Sailors '' delivered on August 17, he stressed the impact of the Soviet invasion on his decision to surrender. Hirohito met with General MacArthur on September 27, saying to him that "(t) he peace party did not prevail until the bombing of Hiroshima created a situation which could be dramatized ''. The "Rescript to the Soldiers and Sailors '', however, was personal rather than political in character, and did not state that the Soviet intervention in Manchuria was the main reason for surrender. In fact, a day after the bombing of Nagasaki and the Soviet invasion of Manchuria, Hirohito ordered his advisers, primarily Chief Cabinet Secretary Hisatsune Sakomizu, Kawada Mizuho, and Masahiro Yasuoka, to write up a surrender speech. In the drafts of that speech, prepared in the days before it was broadcast on radio on August 15, he gave three major reasons for surrender: Tokyo 's defenses would not be complete before the American invasion of Japan, Ise Shrine would be lost to the Americans, and atomic weapons deployed by the Americans would lead to the death of the entire Japanese race. Despite the Soviet intervention, Hirohito did not mention the Soviets as the main factor for surrender.
During the war, the British embassy in Washington reported that Americans regarded the Japanese as "a nameless mass of vermin ''; caricatures depicting Japanese as less than human, e.g. monkeys, were common. A 1944 opinion poll that asked what should be done with Japan found that 13 % of the U.S. public were in favor of "killing off '' all Japanese people. After the Hiroshima bomb detonated successfully, Oppenheimer addressed an assembly at Los Alamos "clasping his hands together like a prize - winning boxer ''. The bombing amazed Otto Hahn and the other German atomic scientists the British held at Farm Hall in Operation Epsilon. Hahn stated that he had not believed an atomic weapon "would be possible for another twenty years ''; Werner Heisenberg did not believe the news at first. Carl Friedrich von Weizsäcker said "I think it 's dreadful of the Americans to have done it. I think it is madness on their part '', but Heisenberg replied, "One could equally well say ' That 's the quickest way of ending the war ' ''. Hahn was grateful that the German project had not succeeded in developing "such an inhumane weapon ''; Karl Wirtz observed that even if it had, "we would have obliterated London but would still not have conquered the world, and then they would have dropped them on us ''. Hahn told the others, "Once I wanted to suggest that all uranium should be sunk to the bottom of the ocean ''. The Vatican agreed; L'Osservatore Romano expressed regret that the bomb 's inventors did not destroy the weapon for the benefit of humanity. Rev. Cuthbert Thicknesse, the Dean of St Albans, prohibited using St Albans Abbey for a thanksgiving service for the war 's end, calling the use of atomic weapons "an act of wholesale, indiscriminate massacre ''. Nonetheless, news of the atomic bombing was greeted enthusiastically in the U.S.; a poll in Fortune magazine in late 1945 showed a significant minority of Americans (22.7 %) wishing that more atomic bombs could have been dropped on Japan. The initial positive response was supported by the imagery presented to the public (mainly the powerful images of the mushroom cloud) and the censorship of photographs that showed corpses and maimed survivors. Such censorship was, however, the status quo at the time: major news outlets did not depict corpses or maimed survivors resulting from other events either, American or otherwise. On August 10, 1945, the day after the Nagasaki bombing, Yōsuke Yamahata, correspondent Higashi and artist Yamada arrived in the city with orders to record the destruction for maximum propaganda purposes. Yamahata took scores of photographs, and on August 21 they appeared in Mainichi Shimbun, a popular Japanese newspaper. Wilfred Burchett was the first western journalist to visit Hiroshima after the atom bomb was dropped, arriving alone by train from Tokyo on September 2, the day of the formal surrender aboard the USS Missouri. His Morse code dispatch, printed by the Daily Express newspaper in London on September 5, 1945, and entitled "The Atomic Plague '', was the first public report to mention the effects of radiation and nuclear fallout. Burchett 's reporting was unpopular with the U.S. military. The U.S. censors suppressed a supporting story submitted by George Weller of the Chicago Daily News, and accused Burchett of being under the sway of Japanese propaganda. Laurence dismissed the reports on radiation sickness as Japanese efforts to undermine American morale, ignoring his own account of Hiroshima 's radiation sickness published one week earlier.
A member of the U.S. Strategic Bombing Survey, Lieutenant Daniel McGovern, used a film crew to document the results in early 1946. The film crew 's work resulted in a three - hour documentary entitled The Effects of the Atomic Bombs Against Hiroshima and Nagasaki. The documentary included images from hospitals showing the human effects of the bomb; it showed burned - out buildings and cars, and rows of skulls and bones on the ground. It was classified "secret '' for the next 22 years. During this time in America, it was a common practice for editors to keep graphic images of death out of films, magazines, and newspapers. The total of 90,000 ft (27,000 m) of film shot by McGovern 's cameramen had not been fully aired as of 2009. According to Greg Mitchell, with the 2004 documentary film Original Child Bomb, a small part of that footage managed to reach part of the American public "in the unflinching and powerful form its creators intended ''. Motion picture company Nippon Eigasha started sending cameramen to Nagasaki and Hiroshima in September 1945. On October 24, 1945, a U.S. military policeman stopped a Nippon Eigasha cameraman from continuing to film in Nagasaki. All Nippon Eigasha 's reels were then confiscated by the American authorities. These reels were in turn requested by the Japanese government, declassified, and saved from oblivion. Some black - and - white motion pictures were released and shown for the first time to Japanese and American audiences in the years from 1968 to 1970. The public release of film footage of the city post-attack, and some research about the human effects of the attack, was restricted during the occupation of Japan, and much of this information was censored until the signing of the San Francisco Peace Treaty in 1951, which restored control to the Japanese. Only the most politically charged and detailed weapons effects information was censored during this period. The Hiroshima - based magazine Chugoku Bunka, for example, in its first issue, published March 10, 1946, devoted itself to detailing the damage from the bombing. Similarly, factually written witness accounts were not censored: the book Hiroshima, written by Pulitzer Prize winner John Hersey and originally published in article form in the popular magazine The New Yorker on August 31, 1946, is reported to have reached Tokyo in English by January 1947, and the translated version was released in Japan in 1949. The book narrates the stories of the lives of six bomb survivors from immediately prior to, and months after, the dropping of the Little Boy bomb. Beginning in 1974, drawings and artwork made by survivors of the bombings were compiled, with completion in 1977; published in book form and shown in exhibitions, the collection was titled The Unforgettable Fire. In the spring of 1948, the Atomic Bomb Casualty Commission (ABCC) was established in accordance with a presidential directive from Truman to the National Academy of Sciences -- National Research Council to conduct investigations of the late effects of radiation among the survivors in Hiroshima and Nagasaki. One of the early studies conducted by the ABCC was on the outcome of pregnancies occurring in Hiroshima and Nagasaki, and in a control city, Kure, located 18 mi (29 km) south of Hiroshima, in order to discern the conditions and outcomes related to radiation exposure. Dr. James V.
Neel led the study, which found that the number of birth defects was not significantly higher among the children of survivors who were pregnant at the time of the bombings. Neel also studied the longevity of the children who survived the bombings of Hiroshima and Nagasaki, reporting that between 90 and 95 percent were still living 50 years later. The National Academy of Sciences questioned Neel 's procedure, which did not filter the Kure population for possible radiation exposure. Overall, while a statistically insignificant increase in birth defects occurred directly after the bombings of Nagasaki and Hiroshima, Neel and others noted that in approximately 50 humans who were of an early gestational age at the time of the bombing, and who were all within about 1 km of the hypocenter, an increase in microencephaly and anencephaly was observed at birth, the incidence of these two particular malformations being nearly 3 times what was to be expected when compared to the control group in Kure. In 1985, Johns Hopkins University human geneticist James F. Crow examined Neel 's research and confirmed that the number of birth defects was not significantly higher in Hiroshima and Nagasaki. Many members of the ABCC and its successor, the Radiation Effects Research Foundation (RERF), were still looking for possible birth defects or other causes among the survivors decades later, but found no evidence that they were more common among the survivors. Despite the statistical insignificance of the birth defects found in Neel 's study, and despite the detailed medical literature, historians such as Ronald E. Powaski were still writing as late as 1987 that Hiroshima experienced "an increase in stillbirths, birth defects, and infant mortality '' following the atomic bomb.

Cancers do not emerge immediately after exposure to radiation; instead, radiation - induced cancer has a minimum latency period of some 5 + years. An epidemiology study by the RERF estimates that from 1950 to 2000, 46 % of leukemia deaths and 11 % of solid cancers of unspecified lethality could be due to radiation from the bombs, the statistical excess being 200 leukemia deaths and 1,700 solid cancers of undeclared lethality. Both statistics are derived from the observation of approximately half of the total survivors, strictly those who took part in the study; a worked reading of these figures is given below.

The survivors of the bombings are called hibakusha (被爆 者, Japanese pronunciation: (çibakɯ̥ɕa)), a Japanese word that literally translates to "explosion - affected people ''. The Japanese government has recognized about 650,000 people as hibakusha. As of March 31, 2017, 164,621 were still alive, mostly in Japan. The government of Japan recognizes about 1 % of these as having illnesses caused by radiation. The memorials in Hiroshima and Nagasaki contain lists of the names of the hibakusha who are known to have died since the bombings. Updated annually on the anniversaries of the bombings, the memorials recorded, as of August 2017, the names of almost 485,000 hibakusha: 308,725 in Hiroshima and 175,743 in Nagasaki. Hibakusha and their children were (and still are) victims of severe discrimination in Japan due to public ignorance about the consequences of radiation sickness, with much of the public believing it to be hereditary or even contagious. This is despite the fact that no statistically demonstrable increase in birth defects or congenital malformations was found among the later conceived children born to survivors of Hiroshima and Nagasaki.
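As a rough reading of the RERF attributable - fraction figures quoted above (this is an inference from those numbers, not a statistic reported by the study itself): if 200 excess leukemia deaths represent 46 % of the total, and 1,700 excess solid cancers represent 11 %, the cohort that took part in the study would have shown on the order of

\[
\frac{200}{0.46} \approx 435 \ \text{leukemia deaths} \qquad \text{and} \qquad \frac{1700}{0.11} \approx 15{,}500 \ \text{solid cancers}
\]

in total over 1950 -- 2000, of which the bombs' radiation accounts for the stated excesses.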
A study of the long - term psychological effects of the bombings on the survivors found that even 17 -- 20 years after the bombings had occurred, survivors showed a higher prevalence of anxiety and somatization symptoms.

On March 24, 2009, the Japanese government officially recognized Tsutomu Yamaguchi as a double hibakusha. He was confirmed to be 3 km (1.9 mi) from ground zero in Hiroshima on a business trip when Little Boy was detonated. He was seriously burnt on his left side and spent the night in Hiroshima. He arrived at his home city of Nagasaki on August 8, the day before Fat Man was dropped, and he was exposed to residual radiation while searching for his relatives. He was the first officially recognized survivor of both bombings. He died on January 4, 2010, at the age of 93, after a battle with stomach cancer. The 2006 documentary Twice Survived: The Doubly Atomic Bombed of Hiroshima and Nagasaki documented 165 nijū hibakusha (lit. double explosion - affected people), and was screened at the United Nations.

During the war, Japan brought as many as 670,000 Korean conscripts to Japan to work as forced labor. About 5,000 -- 8,000 Koreans were killed in Hiroshima and another 1,500 -- 2,000 died in Nagasaki. For many years, Korean survivors had a difficult time fighting for the same recognition as hibakusha that was afforded to Japanese survivors, a situation which resulted in them being denied free health benefits. Most issues were eventually addressed in 2008 through lawsuits.

The role of the bombings in Japan 's surrender, and the U.S. 's ethical justification for them, has been the subject of scholarly and popular debate for decades. J. Samuel Walker wrote in an April 2005 overview of recent historiography on the issue that "the controversy over the use of the bomb seems certain to continue. '' He wrote that "The fundamental issue that has divided scholars over a period of nearly four decades is whether the use of the bomb was necessary to achieve victory in the war in the Pacific on terms satisfactory to the United States. ''

Supporters of the bombings generally assert that they caused the Japanese surrender, preventing casualties on both sides during Operation Downfall. The slogan "One hundred million (subjects of the Japanese Empire) will die for the Emperor and Nation '' served as a unifying rallying cry, although the phrase was intended as a figure of speech along the lines of the "ten thousand years '' phrase. In Truman 's 1955 Memoirs, "he states that the atomic bomb probably saved half a million U.S. lives -- anticipated casualties in an Allied invasion of Japan planned for November. Stimson subsequently talked of saving one million U.S. casualties, and Churchill of saving one million American and half that number of British lives. '' Scholars have pointed out various alternatives that could have ended the war without an invasion, but these alternatives could have resulted in the deaths of many more Japanese. Supporters also point to an order issued by the Japanese War Ministry on August 1, 1944, directing the execution of Allied prisoners of war whenever their camp was in a combat zone.

Those who oppose the bombings cite a number of reasons for their view, among them: a belief that atomic bombing is fundamentally immoral, that the bombings counted as war crimes, that they were militarily unnecessary, that they constituted state terrorism, and that they involved racism against and the dehumanization of the Japanese people.
The bombings were part of an already fierce conventional bombing campaign; this campaign, together with the naval blockade, could also, opponents argue, have eventually led to a Japanese surrender. Moreover, on the day the United States dropped its atomic bomb on Nagasaki, August 9, 1945, the Soviet Union launched a surprise attack with 1.6 million troops against the Kwantung Army in Manchuria. "The Soviet entry into the war '', argued Japanese historian Tsuyoshi Hasegawa, "played a much greater role than the atomic bombs in inducing Japan to surrender because it dashed any hope that Japan could terminate the war through Moscow 's mediation ''. Another popular view among critics of the bombings, originating with Gar Alperovitz in 1965 and becoming the default position in Japanese school history textbooks, is the idea of atomic diplomacy: that the United States used nuclear weapons in order to intimidate the Soviet Union in the early stages of the Cold War.
when does school end for high school students
Middle school - wikipedia A middle school (also known as intermediate school or junior high school) is an educational stage which exists in some countries, providing education between primary school and secondary school. The concept, regulation and classification of middle schools, as well as the ages covered, vary between, and sometimes within, countries.

In Afghanistan, middle school consists of grades 6, 7 and 8.

In Albania, middle school is included in primary education, which lasts 9 years, and attendance is mandatory.

In Algeria, a middle school includes 4 grades: 6, 7, 8 and 9, consisting of students from ages 11 to 14 or 12 to 15.

In Argentina, the ciclo básico of secondary education (ages 12 -- 16) is roughly equivalent to middle school.

Most regions of Australia do not have middle schools, as students go directly from primary school (for years K -- 6) to secondary school (years 7 -- 12, usually referred to as high school). As an alternative to the middle school model, some secondary schools divide their grades into "junior high school '' (years 7, 8, 9 and 10) and "senior high school '' (years 11 and 12). Some have 3 levels: "junior '' (years 7 and 8), "intermediate '' (years 9 and 10), and "senior '' (years 11 and 12). In 1996 and 1997, a national conference met to develop what became known as the National Middle Schooling Project, which aimed to develop a common Australian view of middle schooling. The first middle school established in Australia was The Armidale School, in Armidale (approximately 370 km (230 mi) north of Sydney, 360 km (220 mi) SSW of Brisbane and approximately 140 km (87 mi) due west of Coffs Harbour on the coast). Other schools have since followed this trend. The Northern Territory has introduced a three - tier system featuring middle schools for years 7 -- 9 (approximate age 13 -- 15) and high schools for years 10 -- 12 (approximate age 16 -- 18). Many schools across Queensland have introduced a middle school tier within their schools, covering years 6 to 8.

In Bangladesh, middle school is not separate as it is in some other countries. Schools generally run from class 1 to class 10, divided into lower primary (classes 1 -- 5) and upper primary (classes 6 -- 10); classes 6 -- 8 are thought of as middle school. Under another common description, grades 1 to 5 are said to be primary school, all the classes from 6 to 9 are considered high school (middle school and high school not being considered separate), and grades 10 -- 12 are called college.

There have been no middle schools in Bolivia since 1994. Students aged 11 -- 15 attend the last years of elementary education or the first years of secondary education.

In Bosnia and Herzegovina, "middle school '' refers to educational institutions for ages between 14 and 18, and lasts 3 -- 4 years, following elementary school (which lasts 8 or 9 years). Gymnasiums are the most prestigious type of "middle '' school.

In Brazil, middle school is a mandatory stage called "Ensino Fundamental II '', preceding high school (Ensino Médio) and consisting of grades 6 to 9, ages 10 to 15.

In Canada, the terms "Middle School '' and "Junior High School '' are both used, depending on which grades the school caters to. Junior high schools tend to include only grades 7, 8, and sometimes 9 (some older schools with the name ' carved in concrete ' still use "Junior High '' as part of their name, although grade nine is now missing), whereas middle schools are usually grades 6 -- 8, or only grades 7 -- 8 or 6 -- 7 (i.e. around ages 11 -- 14), varying from area to area and also according to population versus building capacity.
Another common model is grades 5 -- 8. In Alberta, Nova Scotia, Newfoundland, and Prince Edward Island, junior high schools (the term "Middle School '' is not commonly used) include only grades 7 -- 9, with the first year of high school traditionally being grade 10. In some places students go directly from elementary school to secondary school. In Ontario, the terms "Middle School '' and "Secondary School '' are used. Quebec uses a grade system that is different from those of the other provinces: the secondary level has five grades, starting after elementary Grade 6, called Secondary I to Secondary V.

There are no middle schools in Chile. Students aged 11 to 16 attend the last years of educación básica (basic education, until age 14) or the first years of educación media (middle education, equivalent to middle and high school).

In the People 's Republic of China, middle school has two stages, a junior stage (grades 7 -- 9, in some places grades 6 -- 9) and a senior stage (grades 10 -- 12). The junior stage comprises the last 3 years of the 9 - year compulsory education for all young citizens, while the senior stage is optional but considered a critical preparation for college education. Some middle schools have both stages, while others have only one of them. For most students, admission to a senior middle school from the junior stage is based on the score they obtain in the "Senior Middle School Entrance Exam '', which is held by local governments. Other students may avoid the exam on the basis of distinctive talents, such as athletics, or excellent daily performance in the junior stage.

In Colombia, secondary education is divided into basic secondary (grades 6 to 9) and mid secondary (grades 10 and 11). The students in basic secondary, roughly equivalent to middle school, are 11 or 12 to 15 or 16 years old.

In Croatia, "middle school '' refers to educational institutions for ages between 14 and 18, and lasts 3 -- 5 years, following elementary school (which lasts 8 years). Gymnasiums are the most prestigious type of "middle '' school.

In Cuba, secundaria básica (basic secondary, seventh through ninth grades) is the approximate equivalent of middle school.

In the Czech Republic, after completing the nine - year elementary school (compulsory school attendance), a student may apply for high school or gymnasium. Students also have the opportunity to enroll in high school from Grade 5 or (less commonly) Grade 7 of elementary school, spending eight or six years respectively at a high school that otherwise takes four years. Thus a student can spend five years in elementary school followed by eight in high school, and the first four years of the eight - year study program are comparable with junior high school. Gymnasium focuses on a more advanced academic approach to education. All other types of high schools except gymnasiums and conservatories (e.g. lyceums) accept only students who have finished Grade 9.

In Ecuador, the 4th and last level of educación general básica (ages 12 -- 14) is roughly equivalent to middle school.

In Egypt, middle school precedes high school. It is called the preparatory stage and consists of three phases. In the first preparatory phase, students study more subjects than in primary school, with different branches; for instance, algebra and geometry are taught instead of "mathematics ''. In the second preparatory phase, students study science, geography, and the history of Egypt, starting with pharaonic history, including Coptic history and Islamic history, and concluding with modern history.
The students are taught two languages, Arabic and English. Middle school (the preparatory stage) lasts for three years.

In France, the equivalent period to middle school is collège, which lasts four years from the Sixième (sixth, the equivalent of the Canadian and American Grade 6) to the Troisième (third, the equivalent of the Canadian and American Grade 9), accommodating pupils aged between 11 and 15. Upon completion of the latter, students are awarded a Brevet des collèges if they obtain a certain number of points on a series of tests in various subjects (French, history / geography, mathematics, science / physics / chemistry), as well as on a set of skills completed during the last year and on oral examinations (e.g., on cross - subject themes worked on during the final years, such as the fourth year of collège). They can then enter high school (called lycée), which lasts three years from the Seconde to the Terminale until the baccalauréat, and during which they can choose a general or a professional field of study.

There are four middle schools in Gibraltar, following the English model of middle - deemed - primary schools accommodating pupils aged between 9 and 12 (National Curriculum Years 4 to 7). The schools were opened in 1972 when the government introduced comprehensive education in the country.

In Greece, the equivalent period to middle school is called Gymnasium (Γυμνάσιο), which caters to children between the ages of 13 and 15, i.e. 7th, 8th and 9th grade.

In India, the CBSE (Central Board of Secondary Education) classifies middle school as a combination of lower primary (Class 1 -- 5) and upper primary (Class 5 -- 8). There are other central boards / councils, such as the CISCE (Council for the Indian School Certificate Examinations), and each state has its own state board with its own standards, which might differ from the central boards. In some institutions, schools providing education for Classes 5 to 10 are known as secondary schools.

In Indonesia, middle school (Indonesian: Sekolah Menengah Pertama, SMP) covers ages 12 to 14, or class 7 to class 9. Although compulsory education ends at junior high, most students continue their education. There are around 22,000 middle schools in Indonesia, with ownership balanced between the public and private sectors.

In Iran, middle school is called secondary school and caters to children between the ages of 13 and 16, i.e. 7th, 8th and 9th grade.

In most of the cities in Israel, middle school (Hebrew: חטיבת ביניים, Khativat Beynaiym) covers ages 12 to 15, from the 7th grade to the 9th.

In Italy, the equivalent is the "scuola secondaria di primo grado '', formerly and commonly called "middle lower school '' (Scuola Media Inferiore) and often shortened to "middle school '' (Scuola Media), while the "scuola secondaria di secondo grado '', the equivalent of high school, was formerly called "middle higher school '' (Scuola Media Superiore), commonly called "Superiori ''. Middle school lasts three years, from the student age of 11 to age 14. Since 2009, following the "Gelmini reform '', the middle school has been named "Scuola secondaria di primo grado '' ("junior secondary school '').

In Jamaica, middle school is called junior high school. It covers grades 7 -- 9, but this idea is becoming rare now, so grades 7 -- 9 are considered lower secondary. (There are also primary schools, covering grades 1 -- 6.)

In Japan, junior high schools (中学校 chūgakkō) cater to students aged 12 through 15 years old.
In Kosovo "middle school '' refers to educational institutions for ages between 14 and 18, and lasts 3 -- 4 years, following elementary school (which lasts 8 or 9 years). Gymnasiums are the most prestigious type of "middle '' school. In Kuwait, middle school is from grade 6 - 9 and from age 11 - 14. In Lebanon, middle school or intermediate school consists of grades 7, 8, and 9. At the end of 9th grade, the student is given the National diploma examination. In Macedonia "middle school '' refers to educational institutions for ages between 14 and 18, and lasts 3 -- 4 years, following elementary school (which lasts 8 or 9 years). Gymnasiums are the most prestigious type of "middle '' school. In Malaysia, the middle school equivalent is called lower secondary school which consists of students from age 13 to 15 (Form 1 - 3). Usually, these lower secondary schools are combined with upper secondary schools to form a single secondary school which is also known as high school. Students at the end of their lower secondary studies are required to sit for an examination called PT3 (Form 3. 7 subjects for non-Muslim students and 8 subjects for Muslim students) in order to determine their field of studies for upper secondary (Form 4 - 5). In Mexico, the middle school system is called Secundaria and usually comprises three years, grades 7 -- 9 (ages: 7: 12 -- 13, 8: 13 -- 14, 9: 14 -- 15). It is completed after Primaria (Elementary School, up to grade 6: ages 6 -- 12) and before Preparatoria / Bachillerato (High School, grades 10 -- 12 ages 15 -- 18). In Montenegro "middle school '' refers to educational institutions for ages between 14 and 18, and lasts 3 -- 4 years, following elementary school (which lasts 8 or 9 years). Gymnasiums are the most prestigious type of "middle '' school. In New Zealand middle schools are known as intermediate schools. They generally cover years 7 and 8 (formerly known as Forms 1 to 2). Students are generally aged between 10 and 13. There are full primary schools which also contain year 7 and 8 with students continuing to high school at year 9 (formerly known as Form 3). Some high schools also include years 7 and 8. After 2000 there was an increased interest in middle schooling (for years 7 -- 10) with at least seven schools offering education to this age group opening around the country in Auckland, Cambridge, Hamilton, Christchurch and Upper Hutt. In Pakistan, middle school (Class 1 -- 8) is a combination of primary (Class 1 -- 5) and middle (Class 6 -- 8). There are n't middle schools in Peru. Students aged 12 to 15 attend the first years of educación secundaria (secondary school.) Middle School in the Philippines is called Junior High School which starts at 7th Grade to 10th Grade & formerly called First Year to Fourth Year. It often starts at the age of 12 to the age of 16 & Senior High School which starts at 11th Grade to 12th Grade & formerly called First Year to Second Year College. It often starts at the age of 16 to the age of 18. Some schools, such as Miriam College in Loyola Heights, have their Middle Schools from 6th Grade to 8th Grade. Middle school in Poland, called gimnazjum, was first introduced in 1932. The education was intended for pupils of at least 12 years of age and lasted four years. Middle schools were part of the educational system until the reform of 1947, except during World War II. The middle schools were reinstated in Poland in 1999 now lasting three years after six years of primary school. Pupils entering gimnazjum are usually 13 years old. 
Middle school is compulsory for all students, and it is also the final stage of mandatory education. In the final year students take a standardized test to evaluate their academic skills. Higher scorers in the test are allowed first pick of schools if they want to continue their education, which is encouraged. Starting with the school year 2017 / 18, middle schools are scheduled to be disbanded and primary schools extended to eight years, as was the case before 1999.

In Portugal, middle school is known as the 2nd and 3rd cycles of basic education (2o e 3o ciclos do ensino básico). It comprises the 5th through 9th years of compulsory education, for children between ten and fifteen years old. After the education reform of 1986, the former preparatory school (escola preparatória) and the first three years of the liceu became part of basic education (educação básica).

Middle school in Romania, or gymnasium, includes grades 5 to 8; the students usually share a building with the students of primary school, but in different wings / floors. Primary school lessons are taught by a handful of teachers: most subjects are covered by one of them, and more specific areas such as foreign languages, religion or gym may have dedicated teachers. The transition to middle school changes that to a one - teacher - per - course model, in which the students usually remain in the same classroom while the teachers rotate between courses. At the end of the eighth grade (usually corresponding to age 15 or 16), students take a written exam that counts for 80 % (previously 50 %) of the average needed to enroll in high school. Students then go to high school or vocational school, depending on their final grade. Schooling is compulsory until the tenth grade (which corresponds with the age of 16 or 17). The education process is organized in numbered semesters, the first semester lasting 19 weeks between September and February and the second semester lasting 16 weeks between February and June.

Middle school in Russia covers grades 5 to 9 and is a natural continuation of primary school activities (usually they share a building but are located in different wings / floors). Primary school lessons are taught by a handful of teachers: most subjects are covered by one of them, and more specific areas such as English or gym may have dedicated teachers. The transition to middle school changes that to a one - teacher - per - course model, where teachers stay in their classrooms and pupils change rooms during breaks. Examples of courses include mathematics (split from grade 7 into algebra and geometry), physics (from grade 7), visual arts, Russian language, foreign language, history, literature, geography, biology, computer science, chemistry (from grade 8) and social theory (in grade 9). The education process is organized in numbered quarters, with the first quarter covering September and October, the second quarter November and December, the third quarter running from mid January to mid March, and the fourth quarter covering April and May. There are week - long holidays between quarters 1 and 2 as well as 3 and 4, somewhat longer holidays between quarters 2 and 3 to allow for New Year festivities, and a three - month break between school years. At the end of middle school most people stay in school for two more years and get a certificate allowing them to pursue university, but some switch to vocational - technical schools.

In Saudi Arabia, middle school includes grades 7 through 9, consisting of students from ages 12 to 15.
In Serbia "middle school '' refers to educational institutions for ages between 14 and 18, and lasts 3 -- 4 years, following elementary school (which lasts 8 or 9 years). Gymnasiums are the most prestigious type of "middle '' school. In Singapore, middle school is usually referred to as secondary school. Students start secondary school after completing primary school at the age of 13, and to 16 (four years if they are taking the Special, Express or Normal Technical courses), or 17 (five years if they are taking the Normal Academic courses). Students from the Special and Express courses take the GCE ' O ' Levels after four years at the end of secondary education, and students from the Normal (Academic and Technical) courses take the GCE ' N ' Level examinations after four years, and the Normal Academic students has the option to continue for the ' O ' Levels. Selected excelling students also have the option to change classes which then affect the years they study. After completing secondary school, students move on to pre-tertiary education (i.e. in institutes such as junior colleges, polytechnics, ITE). In Slovenia "middle school '' refers to educational institutions for ages between 14 and 18, and lasts 3 -- 4 years, following elementary school (which lasts 8 or 9 years). Gymnasiums are the most prestigious type of "middle '' school. In Somalia, middle school identified as intermediate school is the four years between secondary school and primary school. Pupils start middle school from form as referred to in Somalia or year 5 and finish it at year 8. Students start middle school from the age of 11 and finish it when they are 14 -- 15. Subjects, which middle school pupils take are: Somali, Arabic, English, Religion, Science, Geography, History, Math, Textiles, Art and Design, Physical Education (PE) (Football) and sometimes Music. In some middle schools, it is obligatory to study Italian. In South Korea, a middle school is called a jung hakgyo (Hangul: 중학교; Hanja: 中 學 校) which includes grades 7 through 9 (referred to as: middle school 1st -- 3rd grades; approx. age 13 -- 15). In Spain, education is compulsory for children between 6 and 16 years. Basic education is divided into Educación Primaria (first grade through sixth grade), which is the Spanish equivalent of elementary school; and Educación Secundaria Obligatoria or ESO (seventh through tenth grade), roughly the Spanish equivalent of middle school and (partially) high school. The usual ages in ESO are 12 to 15 years old, but they can range between 11 and 16 depending on the birth date (a student who was born late in the year may start ESO at 11 if he or she will turn 12 before January 1st, and a student who was born early in the year may finish ESO after turning 16). After ESO, students can continue their pre-university education attending to Bachillerato (eleventh and twelfth grade) or choose a Ciclo de Formación Profesional (an improved type of vocational school). Junior high schools (three years from 7th to 9th grade) in Taiwan were originally called "primary middle school ''. However, in August 1968, they were renamed "nationals ' middle school '' often translated "junior high '') when they became free of charge and compulsory. Private middle school nowadays are still called "primary middle school ''. Taiwanese students older than twelve normally attend junior high school. Accompanied with the switch from junior high to middle school was the cancellation of entrance examination needed to enter middle school. 
In Tunisia and Morocco, a middle school includes grades 7 through 9, consisting of students from ages 12 to 15.

In England and Wales, local education authorities introduced middle schools in the 1960s and 1970s. The notion of middle schools was mooted by the Plowden Report of 1967, which proposed a change to a three - tier model including first schools for children aged between 4 and 7, middle schools for 7 -- 11 year - olds, and then upper or high schools for 11 -- 16 year - olds. Some authorities introduced middle schools for ideological reasons, in line with the report, while others did so for more pragmatic reasons relating to the raising of the school leaving age in compulsory education to 16, or to introduce a comprehensive system. Different authorities introduced schools with different age ranges, although in the main three models were used. In many areas "primary school '' rather than "first school '' was used to denote the first tier. In addition, some schools were provided as combined schools catering for pupils in the 5 -- 12 age range, as a combined first and middle school. Around 2,000 middle and combined schools were in place in the early 1980s. However, that number began to fall in the later 1980s with the introduction of the National Curriculum. The new curriculum 's split between Key Stages at age 11 encouraged the majority of local education authorities to return to a two - tier system of primary schools (sometimes split into infant schools and junior schools) and secondary schools. There are now fewer than 150 middle schools still operational in the United Kingdom, meaning that approximately 90 % of middle schools have closed or reverted to primary school status since 1980. The system of 8 -- 12 middle schools has fallen into complete disuse.

Under current legislation, all middle schools must be deemed either primary or secondary. Thus, schools which have more primary year groups than KS3 or KS4 ones are termed deemed primaries or middles - deemed - primaries, while those with more secondary - aged pupils, or with pupils in Y11, are termed deemed secondaries or middles - deemed - secondaries. For statistical purposes, such schools are often included under primary and secondary categories "as deemed ''. Notably, most schools also follow teaching patterns in line with their deemed status, with most deemed - primary schools offering a primary - style curriculum taught by one class teacher, and most deemed - secondary schools adopting a more specialist - centred approach. Legally, all - through schools are also considered middle schools (deemed secondary), although they are rarely referred to as such. Some middle schools still exist in various areas of England. They are supported by the National Middle Schools ' Forum. See List of middle schools in England.

In Scotland, a similar system to the English one was trialled in Grangemouth middle schools, Falkirk, between 1975 and 1987. The label of junior high school is used for some through schools in Orkney and Shetland which cater for pupils from 5 up to the age of 14, at which point they transfer to a nearby secondary school.

In Northern Ireland, in the Armagh, Banbridge and Craigavon District Council area in County Armagh, the Dickson Plan operates, whereby pupils attend a primary school from ages 4 -- 10, a junior high school from 11 -- 14, and a senior high school or grammar school from 14 -- 19. This is not dissimilar to the middle school system.

Middle schools in the United States usually cover grades 5 -- 8, 6 -- 8, or 7 -- 8.
Historically, local public control (and private alternatives) have allowed for some variation in the organization of schools. Elementary school includes kindergarten through sixth grade or kindergarten through fifth grade, i.e. up to age 12, but some elementary schools have four or eight grades, i.e. up to ages 10 or 14 (the latter also known as the intermediate grades). Basic subjects are taught, and students often remain in one or two classrooms throughout the school day, except for physical education, library, music, and art classes. In 2001, there were about 3.6 million children in each grade in the United States.

"Middle schools '' and "junior high schools '' are schools that span grades 5 or 6 to 8 and 7 to 8, respectively, but junior high schools spanning grades 7 to 9 were common until the 1980s. The range defined by either is often based on demographic factors, such as an increase or decrease in the relative numbers of younger or older students, with the aim of maintaining stable school populations. At this stage, students are given more independence, moving to different classrooms for different subjects, including math, social studies, science, and language arts. Also, students are able to choose some of their class subjects (electives). Usually, starting in ninth or tenth grade, grades become part of a student 's official transcript. In the U.S., children within this grade range are sometimes referred to as junior highers.

The middle school format has now replaced the junior high format by a ratio of about ten to one in the United States, but at least two school districts had incorporated both systems in 2010. The "junior high school '' concept was introduced in 1909, in Columbus, Ohio. In the late 19th century and early 20th century most American elementary schools had grades 1 through 8, and this organization still exists, where some concepts of middle school organization have been adapted to the intermediate grades. As time passed, the junior high school concept spread quickly as new school districts proliferated, or as systems modernized buildings and curricula; this expansion continued through the 1960s. Jon Wiles, author of Developing Successful K -- 8 Schools: A Principal 's Guide, said that "(a) major problem '' for the original model was "the inclusion of the ninth grade '', because of the lack of instructional flexibility due to the requirement of having to earn high school credits in the ninth grade -- and that "the fully adolescent ninth grader in junior high school did not seem to belong with the students experiencing the onset of puberty ''. The new middle school model began to appear in the mid-1960s. Wiles said, "At first, it was difficult to determine the difference between a junior high school and a middle school, but as the middle school became established, the differences became more pronounced (...). '' Junior high schools were created for "bridging the gap between the elementary and the high school '', an emphasis credited to Charles W. Eliot; their faculty is organised into academic departments that operate more or less independently of one another.

In Uruguay, public middle school consists of two stages: one mandatory, called "Basic Cycle '' or "First Cycle '', consisting of three years, ages 12 -- 13, 13 -- 14 and 14 -- 15; and one optional, called "Second Cycle '', ages 15 -- 16, 16 -- 17 and 17 -- 18.
The Second Cycle is divided into 4 options in the 5th grade, "Human Sciences '', "Biology '', "Scientific '' and "Arts '', and 7 options in the 6th and last grade: "Law '' or "Economy '' (if the Human Sciences course was taken in 5th), "Medicine '' or "Agronomy '' (if the Biology course was taken in 5th), "Architecture '' or "Engineering '' (if the Scientific course was taken in 5th) and "Arts '' (if the Arts course was taken in 5th). Both of these stages are commonly known as "Liceo '' (Spanish for "high school '').

In Venezuela, middle school starts at grade 7 and ends at grade 9; public middle schools have a different Spanish name from private schools. The school system includes a preparatory year before first grade, so nominal grade levels are offset when compared to other countries (except those countries which have mandatory pre-school). Middle schools (educación media general, ages 12 -- 15) run from 7th grade (equivalent to 8th grade in the US) to 11th grade (equivalent to 12th grade in the US). In some institutions called "Technical Schools '' there is an extra grade for those who want to graduate as a "middle technician '' in a certain area. This education allows them to be hired at a higher level, or to be introduced more easily into a college career. There is a "college test '' from the main universities of the country; a student 's score on this test might allow them to obtain a spot within an institution more quickly, and students with high marks during high school have a better chance of getting a spot.

In Vietnam, secondary school, or junior high school, includes grades 6 to 9. After finishing grade 9, students have to take the national graduating test, which covers Mathematics, Literature and English. The maximum score for each test is 10, with the first two subjects (called the Core Subjects) multiplied by two for a total possible score of 50; a worked example follows below. Reward points from the vocational course, ranging from 1.0 to 2.0 according to the student 's record, can also be added to the final score. Some public schools use the graduating exam 's score and the student 's transcripts to make their decision. Many other public and private schools require applicants to take their own entrance exams; the administration team reviews the student 's transcripts and his or her exam to decide whether that student meets their requirements.
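As an illustration of the Vietnamese scoring rule just described (the individual subject marks here are hypothetical, chosen only to show the arithmetic), the maximum of 50 arises as

\[
\underbrace{10 \times 2}_{\text{Mathematics}} + \underbrace{10 \times 2}_{\text{Literature}} + \underbrace{10}_{\text{English}} = 50,
\]

so a student scoring 8.0 in Mathematics, 7.0 in Literature and 9.0 in English, with 1.5 vocational reward points, would receive a final score of

\[
8.0 \times 2 + 7.0 \times 2 + 9.0 + 1.5 = 40.5.
\]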
what were the ottomans well known for trading
Economic History of the Ottoman Empire - wikipedia Economic history of the Ottoman Empire covers the period 1299 -- 1923. The economic history falls into two distinct sub-periods. The first is the classic era (enlargement), which comprised a closed agricultural economy showing regional distinctions within the empire. The second period was the reform era, comprising state - organized reforms that began with administrative and political structures and extended to state and public functions. Change began with military reforms, extending to military - associated guilds (Ottoman: لنكا) and public craft guilds. The Ottomans saw military expansion and fiscalism as the main source of wealth, with agriculture seen as more important than manufacture and commerce. Western mercantilists gave more emphasis to manufacture and industry in the wealth - power - wealth equation, moving towards capitalist economics comprising expanding industries and markets, whereas the Ottomans continued along the trajectory of territorial expansion, traditional monopolies, conservative land holding and agriculture.

The quality of both land and sea transport was driven primarily by the efforts of the Ottoman administration over this time. As a result, the quality of transport infrastructure varied significantly over time depending on the current administration 's efficacy; the story of transport in the empire should not be seen as one of continual improvement. Indeed, the road infrastructure was significantly better in the 16th century than it was in the 18th century. In Anatolia the Ottomans inherited a network of caravanserais (also known as hans) from the Selçuk Turks that preceded them. It was in the interest of the empire to ensure the safety of couriers and convoys and, by extension, merchant caravans. The caravanserai network was extended into the Balkans and provided safe lodgings for merchants and their animals. The Jelali revolts of the 16th and 17th centuries did much to disrupt the land transport network in Anatolia. The empire could no longer ensure the safety of merchants, who then had to negotiate safe passage with the local leader of the area they were travelling through. Only in the 18th century, with concerted efforts to improve the safety of the caravanserai network and the reorganization of a corps of pass - guards, did land transport in Anatolia improve.

The empire did not take an active interest in sea trade, preferring a free - market system from which it could draw a tax revenue. However, such laissez - faire policies were not always followed. For example, under Hadim Suleyman Pasha 's tenure as Grand Vizier until 1544, the Ottoman administration was directly involved in the spice trade with the aim of increasing revenue, though such policies were often repealed by his successors. The main arenas of maritime activity were: the Aegean and Eastern Mediterranean (main trade: wheat); the Red Sea and Persian Gulf (main trade: spices); the Black Sea (main trade: wheat and lumber); and the Western Mediterranean.

During the 19th century, new technologies radically transformed both travel and communications. Following the invention of the steam engine in Britain, steam power revolutionised water and land transport and, with them, the conduct of trade and commerce. The steamship meant journeys became predictable, times shrank, and large volumes of goods could be carried more cheaply.
Quataert cites the Istanbul - Venice route, a main trade artery, which took anything from fifteen to eighty - one days by sailing ship but was reduced to ten days by steamship. Sailing ships would carry 50 to 100 tonnes; in contrast, steamships could now carry 1,000 tonnes. With the advent of the steamship, formerly untraversable routes opened up. Rivers that had carried cargoes only in one direction could now be traversed both ways, bringing innumerable benefits to certain regions. New routes like the Suez Canal were created, prompted by steamships, changing trade demographics across the Near East as trade was rerouted. Quataert 's research shows that the volume of trade began to rise over the course of the 19th century. By 1900 sailboats accounted for just 5 percent of ships visiting Istanbul; however, this 5 percent was greater in number than in any year of the 19th century. In 1873 Istanbul handled 4.5 million tons of shipping; this had grown to 10 million tons by 1900. The development of larger ships accelerated the growth of port cities with deep harbours able to accommodate them. Europeans, however, owned 90 percent of commercial shipping operating in Ottoman waters. Not all regions benefited from steamships, as rerouting meant trade from Iran, Iraq and Arabia no longer needed to go through Istanbul, Aleppo, and even Beirut, leading to losses in those territories.

In terms of transport, the Ottoman world could be split into two main regions: the European provinces, connected by wheeled transport, and the non-wheeled transport of Anatolia and the Arab world. Railroads revolutionized land transport profoundly, cutting journey times drastically, promoting population movements and changing rural - urban relations. Railroads offered cheap and regular transport for bulk goods, allowing, for the first time, the potential of fertile interior regions to be exploited. When railroads were built near these regions, agriculture developed rapidly, with hundreds of thousands of tons of cereals being shipped in this way. Railroads had additional benefits for non-commercial passengers who began using them: 8 million passengers used the 1,054 miles of Balkan lines and 7 million used the 1,488 miles of Anatolian lines. Railroads also created a new source of employment for over 13,000 workers by 1911. With low population densities and a lack of capital, the Ottomans did not develop extensive railroad or shipping industries. Most of the capital for railroads came from European financiers, which gave them considerable financial control.

Older forms of transport did not disappear with the arrival of steam. The businesses and animals previously used to transport goods between regions found new work in moving goods to and from trunk lines. The Aegean areas alone had over 10,000 camels working to supply local railroads; Ankara station had a thousand camels at a time waiting to unload goods. Furthermore, the additional territories traversed by railroads encouraged development and improved agriculture. Like sailing vessels, land transport contributed to and invigorated trade and commerce across the empire.

The Ottoman Empire was an agrarian economy, labour scarce, land rich and capital poor. The majority of the population earned their living from small family holdings, and this contributed around 40 percent of taxes for the empire directly, as well as indirectly through customs revenues on exports.
Cultivator families drew their livelihoods from a complex set of different economic activities, not merely from growing crops. These included growing a variety of crops for their own consumption as well as rearing animals for their milk and wool. Some rural families manufactured goods for sale to others; for instance, Balkan villagers travelled to Anatolia and Syria for months at a time to sell their wool cloth. This pattern, established in the 18th century, had not significantly changed at the beginning of the 20th century. That is not to say that there were no changes in the agrarian sector. Nomads played an important role in the economy, providing animal products, textiles and transportation. They were troublesome for the state and hard to control, and sedentarization programs took place in the 19th century, coinciding with huge influxes of refugees. This dynamic had the effect of a decline in animal rearing by tribes and an increase in cultivation.

The rising commercialization of agriculture, commencing in the 18th century, meant more people began to grow more. With increased urbanisation, new markets created greater demand, easily met with the advent of railroads. State policy requiring a greater portion of taxes to be paid in cash influenced the increased production. Finally, increased demand for consumer goods themselves drove an increase in production to pay for them.

Quataert argues production rose due to a number of factors. An increase in productivity resulted from irrigation projects, intensive agriculture and the utilisation of modern agricultural tools, which increased in use throughout the 19th century. By 1900, tens of thousands of plows, reapers and other agricultural technologies such as combines were found across the Balkan, Anatolian and Arab lands. However, most of the increases in production came from vast areas of land coming under further cultivation. Families began increasing the amount of time at work, bringing fallow land into use. Sharecropping increased, utilising land that had previously served as animal pasturage. Along with state policy, millions of refugees brought vast tracts of untilled land into production. The empty central Anatolian basin and the steppe zone in the Syrian provinces were instances where government agencies parcelled out smallholdings of land to refugees. This was a recurring pattern across the empire, small landholdings being the norm. Foreign holdings remained unusual despite Ottoman political weakness, probably due to strong local and notable resistance and labour shortages. Issawi et al. have argued that a division of labour was not possible, being set on religious grounds; Inalcik, however, demonstrates that the division of labour was historically determined and open to change.

Agricultural reform programs in the late 19th century saw the state founding agricultural schools and model farms, and educating a self - perpetuating bureaucracy of agrarian specialists focused on increasing agricultural exports. Between 1876 and 1908, the value of agricultural exports from Anatolia alone rose by 45 percent, whilst tithe proceeds rose by 79 percent. However, cheap American grain imports undermined agricultural economies across Europe, in some cases causing outright economic and political crises.

No formal system had emerged to organize manufacturing in medieval Anatolia. The closest such organization that can be identified is the Ahi Brotherhood, a religious organization that followed the Sufi tradition of Islam during the 13th and 14th centuries.
Most of the members were merchants and craftsmen who viewed taking pride in their work as part and parcel of their adherence to Islam. However, the organization was not a professional body and should not be confused with the professional guilds that emerged later. It is not clear when or how the various guilds emerged. What is known for sure is that by 1580 guilds had become a well established aspect of contemporary Ottoman society, as evidenced by the Surname of 1582, a description of the procession celebrating the circumcision of Murad III 's son Mehmed. The guilds were organizations responsible for the maintenance of standards.

Whilst looking at Ottoman manufacture, a significant area of technology transfer, Quataert argues one must look not only at large factories but also at the small workshops: "One will then find that Ottoman industry was not a "dying, unadaptive, unevolving sector... '' (but) vital, creative, evolving and diverse ''. Over the 19th century, a shift occurred to rural female labour, with guild - organized, urban - based male labour becoming less important. The global markets for Ottoman goods fell somewhat, with certain sectors expanding; however, any changes were compensated by an increase in domestic consumption and demand. Mechanized production, even at its peak, remained an insignificant portion of total output. The lack of capital, as in other areas of the economy, deterred the mechanization of production. Nonetheless, a number of factories did emerge in Istanbul, Ottoman Europe and Anatolia. In the 1830s steam - powered silk - reeling factories emerged in Salonica, Edirne, West Anatolia and the Lebanon.

In the late 18th century, fine textiles, hand - made yarns and leathers were in high demand outside the empire. However, these declined by the early 19th century, and half a century later production for export re-emerged in the form of raw silk and oriental carpets. These two industries alone employed 100,000 persons in 1914, two - thirds of them in carpet - making for European and American buyers. Most workers were women and girls, receiving wages that were amongst the lowest in the manufacturing sector. Much of manufacturing shifted to rural areas during the 18th century, in order to benefit from the lower rural costs and wages.

Guilds operating prior to the 18th century did see a decline through the 18th and 19th centuries. Guilds provided some form of security in prices, restricting production and controlling quality, and provided support to members who hit hard times. However, with market forces driving down prices, their importance declined; and when the Janissaries, their backers, were disbanded by Mahmut II in 1826, their fate was sealed.

By far the majority of producers targeted the 26 million domestic consumers, who often lived in provinces adjacent to the producer. Analysing these producers is difficult, as they did not belong to organizations that left records. Manufacturing through the period 1600 -- 1914 witnessed remarkable continuities in its loci; industrial centers flourishing in the 17th century were often still active in 1914. Manufacturing initially struggled against Asian and then European competition in the 18th and 19th centuries, whereby handicraft industries were displaced by cheaper, industrially produced imports. However, manufacturing achieved surprising output levels, with the decline of some industries being more than compensated by the rise of new industries.
The decline of handicraft production saw output shift to agricultural commodity production and other manufactures.

In the early 19th century, Ottoman Egypt had an advanced economy, with a per - capita income comparable to that of leading Western European countries such as France, and higher than the overall average income of Europe and Japan. Economic historian Jean Batou estimated that, in terms of 1960 dollars, Egypt in 1800 had a per - capita income of $232 ($1,025 in 1990 dollars). In comparison, per - capita income in terms of 1960 dollars was $240 for France in 1800 ($1,060 in 1990 dollars), $177 for Eastern Europe in 1800 ($782 in 1990 dollars), and $180 for Japan in 1800 ($795 in 1990 dollars). In addition to Egypt, other parts of the Ottoman Empire, particularly Syria and southeastern Anatolia, also had a highly productive manufacturing sector that was evolving in the 19th century.

In 1819, Egypt under Muhammad Ali began programs of state - sponsored industrialization, which included setting up factories for weapons production, an iron foundry, large - scale cotton cultivation, mills for ginning, spinning and weaving of cotton, and enterprises for agricultural processing. By the early 1830s, Egypt had 30 cotton mills, employing about 30,000 workers. In the early 19th century, Egypt had the world 's fifth most productive cotton industry, in terms of the number of spindles per capita. The industry was initially driven by machinery that relied on traditional energy sources, such as animal power, water wheels, and windmills, which were also the principal energy sources in Western Europe up until around 1870. While steam power had been experimented with in Ottoman Egypt by engineer Taqi ad - Din Muhammad ibn Ma'ruf in 1551, when he invented a steam jack driven by a rudimentary steam turbine, it was under Muhammad Ali of Egypt in the early 19th century that steam engines were introduced to Egyptian industrial manufacturing. While Egypt lacked coal deposits, prospectors searched for them there, and boilers were manufactured and installed in Egyptian industries such as ironworks, textile manufacturing, paper mills and hulling mills. Coal was also imported from overseas, at prices similar to what imported coal cost in France, until the 1830s, when Egypt gained access to coal sources in Lebanon, which had a yearly coal output of 4,000 tons. Compared to Western Europe, Egypt also had superior agriculture and an efficient transport network through the Nile. Batou argues that the necessary economic conditions for rapid industrialization existed in Egypt during the 1820s -- 1830s, as well as for the adoption of oil as a potential energy source for its steam engines later in the 19th century.

Following the death of Muhammad Ali in 1849, his industrialization programs fell into decline, after which, according to historian Zachary Lockman, "Egypt was well on its way to full integration into a European - dominated world market as supplier of a single raw material, cotton. '' He argues that, had Egypt succeeded in its industrialization programs, "it might have shared with Japan (or the United States) the distinction of achieving autonomous capitalist development and preserving its independence. ''

Economic historian Paul Bairoch argues that free trade contributed to deindustrialization in the Ottoman Empire.
In contrast to the protectionism of China, Japan, and Spain, the Ottoman Empire had a liberal trade policy, open to foreign imports. This has its origins in the capitulations of the Ottoman Empire, dating back to the first commercial treaties signed with France in 1536 and taken further with capitulations in 1673 and 1740, which lowered duties to 3 % for imports and exports. The liberal Ottoman policies were praised by British economists such as J.R. McCulloch in his Dictionary of Commerce (1834), but later criticized by British politicians such as Prime Minister Benjamin Disraeli, who cited the Ottoman Empire as "an instance of the injury done by unrestrained competition '' in the 1846 Corn Laws debate: There has been free trade in Turkey, and what has it produced? It has destroyed some of the finest manufactures of the world. As late as 1812 these manufactures existed; but they have been destroyed. That was the consequence of competition in Turkey, and its effects have been as pernicious as the effects of the contrary principle in Spain.

Domestic trade vastly exceeded international trade in both value and volume, though researchers have few direct measurements of it. Much of Ottoman history has been based on European archives that did not document the empire 's internal trade, resulting in its being underestimated. Quataert illustrates the size of internal trade by considering some examples. The French ambassador in 1759 commented that total textile imports into the empire would clothe a maximum of 800,000 of a population of at least 20 million. In 1914, less than a quarter of agricultural produce was being exported, the rest being consumed internally. In the early 17th century, trade in Ottoman - made goods in the Damascus province exceeded five times the value of all foreign - made goods sold there. Finally, amongst the sparse internal trade data are some 1890s statistics for three non-leading cities, the combined value of whose interregional trade in the 1890s equalled around 5 percent of total Ottoman international export trade at the time. Given their minor status (cities like Istanbul, Edirne, Salonica, Damascus, Beirut or Aleppo were far greater than all three), this is impressively high. These major trade centres, dozens of medium - sized towns, hundreds of small towns and thousands of villages remain uncounted; this puts into perspective the size of domestic trade.

Two factors that had a major impact on both internal and international trade were wars and government policies. Wars had a major impact on commerce, especially where there were territorial losses that would rip apart Ottoman economic unity, often destroying relationships and patterns that had endured for centuries. The role of government policy is more hotly debated; however, most policy - promoted barriers to Ottoman international and internal commerce disappeared or were reduced sharply. That said, there appears little to indicate a significant decline in internal trade other than the disruption caused by war and ad - hoc territorial losses.

Global trade increased around sixty - fourfold in the 19th century, whereas for the Ottomans it increased around ten - to sixteenfold. Exports of cotton alone doubled between 1750 and 1789. The largest increases were recorded from the ports of Smyrna (in Anatolia) and Salonica (in the Balkans); however, they were partially offset by some reductions from Syria and Constantinople.
While cotton exports to France and England doubled between the late 17th and late 18th centuries, exports of semi-processed goods to northwest Europe also increased. Whilst the Ottoman market was important to Europe in the 16th century, it was no longer so by 1900. Ottoman trade was not shrinking -- quite the opposite, in fact -- but it was becoming relatively less significant. As regards trade imbalance, only Constantinople ran an import surplus. Both Lampe and McGowan argue that the empire as a whole, and the Balkans in particular, continued to record an export surplus throughout the period. The balance of trade, however, moved against the Ottomans from the 18th century onwards. The empire re-exported high - value luxury goods, mainly silks from the Far East, and exported many of its own goods. Luxury goods began being imported. Through the 18th century, exports moved to unprocessed goods, whilst at the same time commodities were imported from European colonies. Most of these commodities were produced by slave labour, undercutting domestic production. However, according to most scholars, a favourable balance of trade still existed at the end of the 18th century. Trade increased many times over in the 19th century, though the goods exported remained similar in kind to those of the 18th century. Foodstuffs and raw materials were the focus, with carpets and raw silk appearing in the 1850s. Although the basket of exports remained generally constant, the relative importance of the goods would vary considerably. From the 18th century onwards, foreign merchants and Ottoman non-Muslims became dominant in the growing international trade. With increasing affluence, their political significance grew, especially in Syria. Muslim merchants, however, dominated internal trade and trade between the interior and coastal cities. Foreign trade, a minor part of the Ottoman economy, became slightly more important towards the end of the 19th century with the rise of protectionism in Europe and producers looking to new markets. Its growth was seen throughout the period under study, particularly the 19th century. Throughout, the balance of payments was roughly on par, with no significant long - term deficits or surpluses. Ottoman bureaucratic and military expenditure was raised by taxation, generally from the agrarian population. Pamuk notes considerable variation in monetary policy and practice in different parts of the empire. Although there was monetary regulation, enforcement was often relaxed and little effort was made to control the activities of merchants, moneychangers, and financiers. During the "price revolution '' of the 16th century, when inflation took off, there were price increases of around 500 percent from the end of the 15th century to the close of the 17th. Inflation then subsided, and the 18th century did not witness the problem again. The 18th century witnessed increasing military expenditure, and the 19th century rising expenditure on both the bureaucracy and the military. McNeil describes an Ottoman stagnation through centre - periphery relations -- a moderately taxed centre with periphery provinces suffering the burden of costs. Though this analysis may apply to some provinces, like Hungary, recent scholarship has found that most of the financing was through provinces closer to the centre. As the empire modernized itself in line with European powers, the role of the central state grew and diversified. In the past, it had contented itself with raising tax revenues and war making.
It increasingly began to address education, health and public works, activities that used to be organised by religious leaders in the communities -- this can be argued as a necessary Ottoman response to a rapidly changing world. At the end of the 18th century, there were around 2,000 civil officials; the number ballooned to 35,000 by 1908. The Ottoman military increasingly adopted western military technologies and methods, increasing army personnel to over 120,000 by the 1880s. Other innovations were increasingly being adopted, including the telegraph, railroads and photography, utilised against old mediators who were increasingly marginalised. Up to 1850, the Ottoman Empire was the only empire never to have contracted foreign debt, and its financial situation was generally sound. As the 19th century progressed, the state 's financial needs grew, and it knew it could not raise the revenues from taxation or domestic borrowings, so it resorted to massive debasement and then issued paper money. It had considered European debt markets, which had surplus funds available for overseas investment, but avoided them, aware of the associated dangers of European control. However, the Crimean War of 1853 -- 1856 made such debt a necessity. The Ottomans had not yet developed their own financial system in line with London and Paris. Since the beginning of the 18th century, the government had been aware of the need for a reliable bank. The Galata bankers as well as the Bank of Constantinople did not have the capital or competence for such large undertakings. As such, Ottoman borrowings followed the Heckscher - Ohlin theorem, with capital flowing in from capital - abundant Europe. Borrowing spanned two distinct periods, of which the first, from 1854 to 1876, was the more important; it ended in default in 1875. Borrowings normally carried interest at 4 to 5 percent of the nominal value of the bond; new issues, however, were sold at prices well below these values, net of the commissions involved in the issue, resulting in a much higher effective borrowing rate -- coupled with a deteriorating financial situation, the borrowing rate rarely went below 10 percent after 1860. European involvement began with the creation of the Public Debt Administration, after which a relatively peaceful period meant no wartime expenditures and the budget could be balanced with lower levels of external borrowing. The semi-autonomous Egyptian province also ran up huge debts in the late 19th century, resulting in foreign military intervention. With security from the Debt Administration, further European capital entered the empire in railroad, port and public utility projects, increasing foreign capital control of the Ottoman economy. The debt burden grew, consuming a sizeable chunk of Ottoman tax revenues -- by the early 1910s, deficits had begun to grow again, with military expenditure rising, and another default might have occurred had it not been for the outbreak of the First World War. The exact amount of annual income the Ottoman government received is a matter of considerable debate, due to the scantness and ambiguous nature of the primary sources.
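The effective borrowing rate described above can be illustrated with a small worked example. The coupon, issue price and commission below are illustrative assumptions chosen to match the pattern the text describes, not the recorded terms of any particular Ottoman loan; a minimal sketch in Python:

    # Sketch of how a bond paying 4-5 percent of nominal could cost the
    # treasury 10 percent or more: interest is owed on the full face value,
    # but the cash actually raised is the discounted price net of commissions.
    # All figures below are assumed for illustration only.
    nominal = 100.0     # face value of the bond
    coupon_rate = 0.05  # "4 to 5 percent of the nominal value"
    issue_price = 55.0  # new issues sold well below par (assumed)
    commission = 5.0    # issue commissions netted from the proceeds (assumed)

    proceeds = issue_price - commission          # cash the treasury received
    annual_interest = coupon_rate * nominal      # interest owed each year
    effective_rate = annual_interest / proceeds  # rate on money actually raised
    print(f"effective borrowing rate: {effective_rate:.1%}")  # -> 10.0%

With a steeper discount or higher commissions, the effective rate climbs well past 10 percent, which is consistent with the post-1860 rates the text reports.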
the first nuclear device was exploded by indian scientist at
Smiling Buddha - Wikipedia Smiling Buddha (MEA designation: Pokhran - I) was the assigned code name of India 's first successful nuclear bomb test on 18 May 1974. The bomb was detonated at the army base Pokhran Test Range (PTR) in Rajasthan by the Indian Army under the supervision of several key Indian generals. Pokhran - I was also the first confirmed nuclear weapons test by a nation outside the five permanent members of the United Nations Security Council. Officially, the Indian Ministry of External Affairs (MEA) claimed this test was a "peaceful nuclear explosion '', but the test was in fact part of an accelerated nuclear weapons programme. India started its own nuclear programme in 1944 when Homi J. Bhabha founded the Tata Institute of Fundamental Research. Physicist Raja Ramanna played an essential role in nuclear weapons technology research; he expanded and supervised scientific research on nuclear weapons and was the first directing officer of the small team of scientists that supervised and carried out the test. After Indian independence from the British Empire, Indian Prime Minister Jawaharlal Nehru authorised the development of a nuclear programme headed by Homi Bhabha. The Atomic Energy Act of 1948 focused on peaceful development. India was heavily involved in the development of the Nuclear Non-Proliferation Treaty, but ultimately opted not to sign it. Nehru had declared: We must develop this atomic energy quite apart from war -- indeed I think we must develop it for the purpose of using it for peaceful purposes... Of course, if we are compelled as a nation to use it for other purposes, possibly no pious sentiments of any of us will stop the nation from using it that way. In 1954, Bhabha steered the nuclear programme in the direction of weapons design and production. Two important infrastructure projects were commissioned. The first established the Trombay Atomic Energy Establishment at Mumbai (Bombay). The other created a governmental secretariat, the Department of Atomic Energy (DAE), of which Bhabha was the first secretary. From 1954 to 1959, the nuclear programme grew swiftly. By 1958, the DAE had one - third of the defense budget for research purposes. In 1954, India reached a verbal understanding with the United States and Canada under the Atoms for Peace programme; the United States and Canada ultimately agreed to provide and establish the CIRUS research reactor, also at Trombay. The acquisition of CIRUS was a watershed event in nuclear proliferation, with the understanding between India and the United States that the reactor would be used for peaceful purposes only. CIRUS was an ideal facility to develop a plutonium device, and therefore Nehru refused to accept nuclear fuel from Canada and started the programme to develop an indigenous nuclear fuel cycle. In July 1958, Nehru authorised "Project Phoenix '' to build a reprocessing plant with a capacity of 20 tons of fuel a year -- a size to match the production capacity of CIRUS. The plant used the PUREX process and was designed by the American firm Vitro International. Construction of the plutonium plant began at Trombay on 27 March 1961, and it was commissioned in mid-1964. The nuclear programme continued to mature, and by 1960, Nehru made the critical decision to move the programme into production. At about the same time, Nehru held discussions with the American firm Westinghouse Electric to construct India 's first nuclear power plant in Tarapur, Maharashtra.
Kenneth Nichols, a US Army engineer, recalls from a meeting with Nehru, "it was that time when Nehru turned to Bhabha and asked Bhabha for the timeline of the development of a nuclear weapon ''. Bhabha estimated he would need about a year to accomplish the task. By 1962, the nuclear programme was still developing, but at a slow rate. Nehru was distracted by the Sino - Indian War, during which India lost territory to China. Nehru turned to the Soviet Union for help, but the Soviet Union was preoccupied with the Cuban Missile Crisis. The Soviet Politburo turned down Nehru 's request for arms and continued backing the Chinese. India concluded that the Soviet Union was an unreliable ally, and this conclusion strengthened India 's determination to create a nuclear deterrent. Design work began in 1965 under Bhabha and proceeded under Raja Ramanna, who took over the programme after the former 's death. Bhabha was now aggressively lobbying for nuclear weapons and made several speeches on Indian radio. In 1964, Bhabha told the Indian public via Indian radio that "such nuclear weapons are remarkably cheap '' and supported his arguments by referring to the low quoted costs of the American nuclear testing programme (Plowshare). Bhabha stated to the politicians that a 10 kt device would cost around $350,000, and a 2 Mt device around $600,000. From this, he estimated that "a stockpile '' of around 50 atomic bombs would cost under $21 million and a stockpile of 50 two - megaton hydrogen bombs around $31.5 million. Bhabha did not realise, however, that the U.S. Plowshare cost - figures were produced by a vast industrial complex costing tens of billions of dollars, which had already manufactured nuclear weapons numbering in the tens of thousands. The delivery systems for nuclear weapons typically cost several times as much as the weapons themselves. The nuclear programme was partially slowed down when Lal Bahadur Shastri became the prime minister. In 1965, Shastri faced another war with Pakistan. Shastri appointed physicist Vikram Sarabhai as the head of the nuclear programme, but because of his Gandhian beliefs Sarabhai directed the programme toward peaceful purposes rather than military development. In 1967, Indira Gandhi became the prime minister and work on the nuclear programme resumed with renewed vigour. Homi Sethna, a chemical engineer, played a significant role in the development of weapon - grade plutonium, while Ramanna designed and manufactured the whole nuclear device. Because of its sensitivity, the first nuclear bomb project did not employ more than 75 scientists. The nuclear weapons programme was now directed towards the production of plutonium rather than uranium. In 1968 -- 69, P.K. Iyengar visited the Soviet Union with three colleagues and toured the nuclear research facilities at Dubna. During his visit, Iyengar was impressed by the plutonium - fueled pulsed fast reactor. Upon his return to India, Iyengar set about developing plutonium reactors; the project was approved by the Indian political leadership in January 1969. The secret plutonium reactor was known as Purnima, and construction began in March 1969. The plant 's leadership included Iyengar, Ramanna, Homi Sethna, and Sarabhai. Sarabhai 's presence indicates that, with or without formal approval, work on nuclear weapons at Trombay had commenced. India continued to harbour ambivalent feelings about nuclear weapons, and accorded low priority to their production until the Indo - Pakistani War of 1971.
In December 1971, Richard Nixon sent a carrier battle group led by the USS Enterprise (CVN - 65) into the Bay of Bengal in an attempt to intimidate India. The Soviet Union responded by sending a submarine armed with nuclear missiles from Vladivostok to trail the US task force. The Soviet response demonstrated to Indira Gandhi the deterrent value and significance of nuclear weapons and ballistic missile submarines. India gained the military and political initiative over Pakistan after the 1971 war divided Pakistan into two different political entities. On 7 September 1972, near the peak of her post-war popularity, Indira Gandhi authorised the Bhabha Atomic Research Centre (BARC) to manufacture a nuclear device and prepare it for a test. Although the Indian Army was not fully involved in the nuclear testing, the army 's highest command was kept fully informed of the test preparations. The preparations were carried out under the watchful eyes of the Indian political leadership, with civilian scientists assisting the Indian Army. The device was formally called the "Peaceful Nuclear Explosive '', but it was usually referred to as the Smiling Buddha. The device was detonated on 18 May 1974, Buddha Jayanti (a festival day in India marking the birth of Gautama Buddha). Indira Gandhi maintained tight control of all aspects of the preparations of the Smiling Buddha test, which was conducted in extreme secrecy; besides Gandhi, only advisers Parmeshwar Haksar and Durga Dhar were kept informed. Scholar Raj Chengappa asserts that the Indian Defence Minister Jagjivan Ram was not provided with any knowledge of this test and came to learn of it only after it was conducted. Swaran Singh, the Minister of External Affairs, was given 48 hours ' advance notice. The Indira Gandhi administration employed no more than 75 civilian scientists, while General G.G. Bewoor, the Indian army chief, and the commander of Indian Western Command were the only military commanders kept informed. The head of the entire nuclear bomb project was the director of BARC, Raja Ramanna. In later years, his role in the nuclear programme would become more deeply integrated, as he remained head of the nuclear programme for most of his life. The designer and creator of the bomb was P.K. Iyengar, who was second in command of the project. Iyengar 's work was further assisted by the chief metallurgist, R. Chidambaram, and by Nagapattinam Sambasiva Venkatesan of the Terminal Ballistics Research Laboratory, who developed and manufactured the high explosive implosion system. The explosive materials and the detonation system were developed by Waman Dattatreya Patwardhan of the High Energy Materials Research Laboratory. The overall project was supervised by chemical engineer Homi Sethna, Chairman of the Atomic Energy Commission of India. Chidambaram, who would later coordinate work on the Pokhran - II tests, began work on the equation of state of plutonium in late 1967 or early 1968. To preserve secrecy, the project employed no more than 75 scientists and engineers from 1967 -- 74. It is theorised that Abdul Kalam also arrived at the test site as the representative of the DRDO, although he had no role whatsoever in the development of the nuclear bomb or even in the nuclear programme. The device was of the implosion - type design and had a close resemblance to the American nuclear bomb called the Fat Man. The implosion system was assembled at the Terminal Ballistics Research Laboratory (TBRL) of the DRDO in Chandigarh.
The detonation system was developed at the High Energy Materials Research Laboratory (HEMRL) of the DRDO in Pune, Maharashtra State. The 6 kg of plutonium came from the CIRUS reactor at BARC. The neutron initiator was of the polonium -- beryllium type and code - named Flower. The complete nuclear bomb was engineered and finally assembled by Indian engineers at Trombay before transportation to the test site. The fully assembled device had a hexagonal cross section, 1.25 metres in diameter, and weighed 1400 kg. The device was mounted on a hexagonal metal tripod, and was transported to the shaft on rails which the army kept covered with sand. The device was detonated at 8.05 a.m., when Dastidar pushed the firing button; it was emplaced in a shaft 107 m under the army 's Pokhran test range in the Thar Desert (or Great Indian Desert), Rajasthan. Coordinates of the crater are 27 ° 05 ′ 42 '' N 71 ° 45 ′ 11 '' E (27.095 ° N, 71.753 ° E). The nuclear yield of this test still remains controversial, with unclear data provided by Indian sources, although Indian politicians have given the country 's press a range from 2 kt to 20 kt. The official yield was initially set at 12 kt; post-Operation Shakti claims have raised it to 13 kt. Independent seismic data from outside and analysis of the crater features indicate a lower figure. Analysts usually estimate the yield at 4 to 6 kt, using conventional seismic magnitude - to - yield conversion formulas. In recent years, both Homi Sethna and P.K. Iyengar have conceded the official yield to be an exaggeration. Iyengar has variously stated that the yield was 8 -- 10 kt, that the device was designed to yield 10 kt, and that the yield was 8 kt "exactly as predicted ''. Although seismic scaling laws lead to an estimated yield range between 3.2 kt and 21 kt, an analysis of hard rock cratering effects suggests a narrow range of around 8 kt for the yield, which is within the uncertainties of the seismic yield estimate. Indian Prime Minister Indira Gandhi had already gained much popularity and publicity after her successful military campaign against Pakistan in the 1971 war. The test caused an immediate revival of her popularity, which had flagged considerably from its high after the 1971 war. The overall popularity and image of the Congress Party was enhanced, and the Congress Party was well received in the Indian Parliament. In 1975, Homi Sethna, a chemical engineer and the chairman of the Indian Atomic Energy Commission (AECI), Raja Ramanna of BARC, and Basanti Nagchaudhuri of DRDO were all honoured with the Padma Vibhushan, India 's second highest civilian award. Five other project members received the Padma Shri, India 's fourth highest civilian award. India consistently maintained that this was a peaceful nuclear test and that it had no intention of militarising its nuclear programme. However, according to independent monitors, this test was part of an accelerated Indian nuclear programme. In 1997, Raja Ramanna, speaking to the Press Trust of India, maintained: The Pokhran test was a bomb, I can tell you now... An explosion is an explosion, a gun is a gun, whether you shoot at someone or shoot at the ground... I just want to make clear that the test was not all that peaceful. While India continued to state that the test was for peaceful purposes, it encountered opposition from many quarters.
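For reference, the "conventional seismic magnitude - to - yield conversion formulas '' mentioned above take the general form below. The constants and the body - wave magnitude in the worked example are illustrative assumptions (values of the kind published for hard - rock test sites), not figures given in this article:

    m_b = a + b * log10(Y)    hence    Y = 10 ^ ((m_b - a) / b)

Taking, for illustration, a = 4.45 and b = 0.75 with an assumed measured magnitude of m_b between 4.9 and 5.0 gives Y = 10 ^ ((4.9 - 4.45) / 0.75) = 4.0 kt up to Y = 10 ^ ((5.0 - 4.45) / 0.75) = 5.4 kt, consistent with the 4 to 6 kt range that analysts usually quote.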
The Nuclear Suppliers Group (NSG) was formed in reaction to the Indian test to check international nuclear proliferation. The NSG decided in 1992 to require full - scope IAEA safeguards for any new nuclear export deals, which effectively ruled out nuclear exports to India, but in 2008 it waived this restriction on nuclear trade with India as part of the Indo - US civilian nuclear agreement. Pakistan did not view the test as a "peaceful nuclear explosion '', and cancelled talks scheduled for 10 June on normalisation of relations. Pakistan 's Prime Minister Zulfikar Ali Bhutto vowed in June 1974 that he would never succumb to "nuclear blackmail '' or accept "Indian hegemony or domination over the subcontinent ''. The chairman of the Pakistan Atomic Energy Commission, Munir Ahmed Khan, said that the test would force Pakistan to test its own nuclear bomb. Pakistan 's leading nuclear physicist, Pervez Hoodbhoy, stated in 2011 that he believed the test "pushed (Pakistan) further into the nuclear arena ''. The plutonium used in the test was created in the Canadian - supplied CIRUS reactor, using heavy water supplied by the United States. Both countries reacted negatively, especially in light of the then - ongoing negotiations on the Nuclear Non-Proliferation Treaty and the economic aid both countries had provided to India. Canada concluded that the test violated a 1971 understanding between the two states, and froze nuclear energy assistance for the two heavy water reactors then under construction. The United States concluded that the test did not violate any agreement and proceeded with a June 1974 shipment of enriched uranium for the Tarapur reactor. France sent a congratulatory telegram to India but later withdrew it. Despite many proposals, India did not carry out further nuclear tests until 1998. After the 1998 general elections, Operation Shakti (also known as Pokhran - II) was carried out at the Pokhran test site, using devices designed and built over the preceding two decades.
the family-ness you'll never find a nessie in the zoo
The family - Ness - wikipedia The Family - Ness is a British cartoon series produced in 1983. It was first broadcast on BBC One from 5 October 1984 to 5 April 1985, with repeats airing on CBeebies on BBC Two for a very short time in early 2002. It was created by Peter Maddocks of Maddocks Cartoon Productions. Maddocks later went on to produce Penny Crayon and Jimbo and the Jet Set in a similar style. The Family - Ness followed the adventures of a family of Loch Ness Monsters and the MacTout family, particularly siblings Elspeth and Angus. The ' Nessies ' could be called from the loch by the two children by means of their "thistle whistles ''. The series was followed by a large collection of merchandising, including annuals, story books, character models and even a record. The single "You 'll Never Find a Nessie in the Zoo '' was written by Roger and Gavin Greenaway, but never made it into the Top 40. Most of the Nessies are named after their main trait, like the Mr. Men, the Smurfs and the dwarfs from Disney 's Snow White and the Seven Dwarfs. With one exception, the Nessies bear little likeness to the stereotypical plesiosaur / serpent appearance; most of them appear as very fat, yellow dinosaurs with bulbous noses. In the middle of 1985, after the final episode of The Family - Ness was broadcast, BBC Video released one video containing 14 episodes of the show. In the mid-1990s, Hallmark and Carlton Home Entertainment released one video with the first 9 episodes from 1984. In 2002, Right Entertainment (distributed through Universal Pictures (UK)) released two videos with eight episodes each, and in summer 2004 both were re-released as 2 DVD releases.
who became the first president of the bhartiya janta party
List of Presidents of the Bharatiya Janata Party - wikipedia This is a list of Presidents of the Bharatiya Janata Party, one of the two major parties in the Indian political system. The party 's first president was Atal Bihari Vajpayee, who led it from its founding in 1980.
what is the normal resolution for a computer
Display resolution - Wikipedia The display resolution or display modes of a digital television, computer monitor or display device is the number of distinct pixels in each dimension that can be displayed. It can be an ambiguous term especially as the displayed resolution is controlled by different factors in cathode ray tube (CRT) displays, flat - panel displays (including liquid - crystal displays) and projection displays using fixed picture - element (pixel) arrays. It is usually quoted as width × height, with the units in pixels: for example, "1024 × 768 '' means the width is 1024 pixels and the height is 768 pixels. This example would normally be spoken as "ten twenty - four by seven sixty - eight '' or "ten twenty - four by seven six eight ''. One use of the term "display resolution '' applies to fixed - pixel - array displays such as plasma display panels (PDP), liquid - crystal displays (LCD), Digital Light Processing (DLP) projectors, OLED displays, and similar technologies, and is simply the physical number of columns and rows of pixels creating the display (e.g. 1920 × 1080). A consequence of having a fixed - grid display is that, for multi-format video inputs, all displays need a "scaling engine '' (a digital video processor that includes a memory array) to match the incoming picture format to the display. Note that for device displays such as phones, tablets, monitors and televisions, the use of the word resolution as defined above is a misnomer, though common. The term "display resolution '' is usually used to mean pixel dimensions, the number of pixels in each dimension (e.g. 1920 × 1080), which does not tell anything about the pixel density of the display on which the image is actually formed: resolution properly refers to the pixel density, the number of pixels per unit distance or area, not total number of pixels. In digital measurement, the display resolution would be given in pixels per inch (PPI). In analog measurement, if the screen is 10 inches high, then the horizontal resolution is measured across a square 10 inches wide. For television standards, this is typically stated as "lines horizontal resolution, per picture height; '' for example, analog NTSC TVs can typically display about 340 lines of "per picture height '' horizontal resolution from over-the - air sources, which is equivalent to about 440 total lines of actual picture information from left edge to right edge. Some commentators also use display resolution to indicate a range of input formats that the display 's input electronics will accept and often include formats greater than the screen 's native grid size even though they have to be down - scaled to match the screen 's parameters (e.g. accepting a 1920 × 1080 input on a display with a native 1366 × 768 pixel array). In the case of television inputs, many manufacturers will take the input and zoom it out to "overscan '' the display by as much as 5 % so input resolution is not necessarily display resolution. The eye 's perception of display resolution can be affected by a number of factors -- see image resolution and optical resolution. One factor is the display screen 's rectangular shape, which is expressed as the ratio of the physical picture width to the physical picture height. This is known as the aspect ratio. A screen 's physical aspect ratio and the individual pixels ' aspect ratio may not necessarily be the same. An array of 1280 × 720 on a 16: 9 display has square pixels, but an array of 1024 × 768 on a 16: 9 display has oblong pixels. 
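The pixel - density and pixel - shape arithmetic described above is simple enough to compute directly. A minimal sketch in Python; the 23 - inch diagonal is an assumed example, not a figure from this article:

    import math

    def ppi(width_px, height_px, diagonal_in):
        """Pixels per inch: diagonal of the pixel grid over the physical diagonal."""
        return math.hypot(width_px, height_px) / diagonal_in

    def pixel_aspect_ratio(width_px, height_px, display_ratio=16 / 9):
        """Pixel shape: the display's aspect ratio divided by the array's ratio."""
        return display_ratio / (width_px / height_px)

    print(round(ppi(1920, 1080, 23.0)))             # ~96 PPI on an assumed 23-inch panel
    print(pixel_aspect_ratio(1280, 720))            # 1.0  -> square pixels on a 16:9 display
    print(round(pixel_aspect_ratio(1024, 768), 2))  # 1.33 -> oblong pixels on a 16:9 display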
An example of pixel shape affecting "resolution '' or perceived sharpness: displaying more information in a smaller area using a higher resolution makes the image much clearer or "sharper ''. However, most recent screen technologies are fixed at a certain resolution; making the resolution lower on these kinds of screens will greatly decrease sharpness, as an interpolation process is used to "fix '' the non-native resolution input into the display 's native resolution output. While some CRT - based displays may use digital video processing that involves image scaling using memory arrays, ultimately "display resolution '' in CRT - type displays is affected by different parameters such as spot size and focus, astigmatic effects in the display corners, the color phosphor pitch shadow mask (such as Trinitron) in color displays, and the video bandwidth. Most television display manufacturers "overscan '' the pictures on their displays (CRTs and PDPs, LCDs etc.), so that the effective on - screen picture may be reduced from 720 × 576 (480) to 680 × 550 (450), for example. The size of the invisible area somewhat depends on the display device. HD televisions do this as well, to a similar extent. Computer displays, including projectors, generally do not overscan, although many models (particularly CRT displays) allow it. CRT displays tend to be underscanned in stock configurations, to compensate for the increasing distortions at the corners. Televisions have used standard resolutions such as 480i / 576i (SDTV), 720p and 1080i / 1080p (HDTV), and 2160p (4K UHD). Computer monitors have traditionally possessed higher resolutions than most televisions. As of July 2002, 1024 × 768 eXtended Graphics Array was the most common display resolution. Many web sites and multimedia products were re-designed from the previous 800 × 600 format to the layouts optimized for 1024 × 768. The availability of inexpensive LCD monitors made the 5: 4 aspect ratio resolution of 1280 × 1024 more popular for desktop usage during the first decade of the 21st century. Many computer users, including CAD users, graphic artists and video game players, ran their computers at 1600 × 1200 resolution (UXGA) or higher, such as 2048 × 1536 (QXGA), if they had the necessary equipment. Other available resolutions included oversize aspects like 1400 × 1050 SXGA+ and wide aspects like 1280 × 800 WXGA, 1440 × 900 WXGA+, 1680 × 1050 WSXGA+, and 1920 × 1200 WUXGA; monitors built to the 720p and 1080p standard are also not unusual among home media and video game players, due to the perfect screen compatibility with movie and video game releases. A new more - than - HD resolution of 2560 × 1600 WQXGA was released in 30 - inch LCD monitors in 2007. As of March 2012, 1366 × 768 was the most common display resolution. In 2010, 27 - inch LCD monitors with the 2560 × 1440 - pixel resolution were released by multiple manufacturers including Apple, and in 2012, Apple introduced a 2880 × 1800 display on the MacBook Pro. Panels for professional environments, such as medical use and air traffic control, support resolutions of up to 4096 × 2160 pixels. When a computer display resolution is set higher than the physical screen resolution (native resolution), some video drivers make the virtual screen scrollable over the physical screen, thus realizing a two - dimensional virtual desktop with its viewport.
Most LCD manufacturers do make note of the panel 's native resolution, as working in a non-native resolution on LCDs will result in a poorer image, due to dropping of pixels to make the image fit (when using DVI) or insufficient sampling of the analog signal (when using a VGA connector). Few CRT manufacturers will quote the true native resolution, because CRTs are analog in nature and can vary their display from as low as 320 × 200 (emulation of older computers or game consoles) to as high as the internal board will allow, or the image becomes too detailed for the vacuum tube to recreate (i.e., analog blur). Thus, CRTs provide a variability in resolution that fixed - resolution LCDs can not provide. In recent years the 16: 9 aspect ratio has become more common in notebook displays. 1366 × 768 (HD) has become popular for most notebook sizes, while 1600 × 900 (HD+) and 1920 × 1080 (FHD) are available for larger notebooks. As far as digital cinematography is concerned, video resolution standards depend first on the frames ' aspect ratio in the film stock (which is usually scanned for digital intermediate post-production) and then on the actual points ' count. Although there is not a unique set of standardized sizes, it is commonplace within the motion picture industry to refer to "nK '' image "quality '', where n is a (small, usually even) integer number which translates into a set of actual resolutions, depending on the film format. As a reference, consider that, for the 4: 3 (around 1.33: 1) aspect ratio which a film frame (no matter its format) is expected to horizontally fit in, n is the multiplier of 1024 such that the horizontal resolution is exactly 1024 n points. For example, 2K reference resolution is 2048 × 1536 pixels, whereas 4K reference resolution is 4096 × 3072 pixels. Nevertheless, 2K may also refer to resolutions like 2048 × 1556 (full - aperture), 2048 × 1152 (HDTV, 16: 9 aspect ratio) or 2048 × 872 pixels (Cinemascope, 2.35: 1 aspect ratio). It is also worth noting that while a frame resolution may be, for example, 3: 2 (720 × 480 NTSC), that is not what you will see on - screen (i.e. 4: 3 or 16: 9, depending on the orientation of the rectangular pixels).
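A short sketch of the "nK '' convention just described, using the 4: 3 reference frame from the text (the alternative 2K sizes listed above follow from substituting other aspect ratios):

    # "nK" reference resolution: the horizontal size is exactly 1024 * n points,
    # and the 4:3 reference frame makes the height three quarters of the width.
    for n in (2, 4):
        width = 1024 * n
        height = width * 3 // 4
        print(f"{n}K reference: {width} x {height}")
    # 2K reference: 2048 x 1536
    # 4K reference: 4096 x 3072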
Chroma resolution for NTSC / PAL televisions is bandwidth - limited to a maximum of 1.5 megahertz, or approximately 160 pixels wide, which led to blurring of the color for 320 - or 640 - wide signals, and made text difficult to read (see second image to right). Many users upgraded to higher - quality televisions with S - Video or RGBI inputs that helped eliminate chroma blur and produce more legible displays. The earliest, lowest - cost solution to the chroma problem was offered in the Atari 2600 Video Computer System and the Apple II+, both of which offered the option to disable the color and view a legacy black - and - white signal. On the Commodore 64, GEOS mirrored the Mac OS method of using black - and - white to improve readability. The 640 × 400i resolution (720 × 480i with borders disabled) was first introduced by home computers such as the Commodore Amiga and, later, the Atari Falcon. These computers used interlace to boost the maximum vertical resolution. These modes were only suited to graphics or gaming, as the flickering interlace made reading text in word processor, database, or spreadsheet software difficult. (Modern game consoles solve this problem by pre-filtering the 480i video to a lower resolution. For example, Final Fantasy XII suffers from flicker when the filter is turned off, but stabilizes once filtering is restored. The computers of the 1980s lacked sufficient power to run similar filtering software.) The advantage of a 720 × 480i overscanned computer was an easy interface with interlaced TV production, leading to the development of NewTek 's Video Toaster. This device allowed Amigas to be used for CGI creation in various news departments (example: weather overlays), drama programs such as NBC 's seaQuest and WB 's Babylon 5, and early computer - generated animation by Disney for The Little Mermaid, Beauty and the Beast, and Aladdin. In the PC world, the IBM PS / 2 VGA (multi-color) on - board graphics chips used a non-interlaced (progressive) 640 × 480 × 16 color resolution that was easier to read and thus more useful for office work. It was the standard resolution from 1990 to around 1996; 800 × 600 was then the standard until around 2000. Microsoft Windows XP, released in 2001, was designed to run at 800 × 600 minimum, although it is possible to select the original 640 × 480 in the Advanced Settings window. Programs designed to mimic older hardware such as Atari, Sega, or Nintendo game consoles (emulators), when attached to multiscan CRTs, routinely use much lower resolutions, such as 160 × 200 or 320 × 400, for greater authenticity, though other emulators recognise pixelated geometric features (circles, squares, triangles and so on) in the low - resolution output and render them as smoother, scaled vector shapes. The list of common display resolutions article lists the most commonly used display resolutions for computer graphics, television, films, and video conferencing.
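The "approximately 160 pixels '' chroma figure near the start of this section follows from the active line time. Assuming an NTSC active line of roughly 52.6 microseconds (a standard figure, not stated in this article) and two pixels per cycle of bandwidth:

    N = 2 * B * t_active = 2 * (1.5 * 10^6 Hz) * (52.6 * 10^-6 s) = 158 pixels (approximately)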
what is the average size of a sea otter
Sea otter - wikipedia The sea otter (Enhydra lutris) is a marine mammal native to the coasts of the northern and eastern North Pacific Ocean. Adult sea otters typically weigh between 14 and 45 kg (31 and 99 lb), making them the heaviest members of the weasel family, but among the smallest marine mammals. Unlike most marine mammals, the sea otter 's primary form of insulation is an exceptionally thick coat of fur, the densest in the animal kingdom. Although it can walk on land, the sea otter is capable of living exclusively in the ocean. The sea otter inhabits nearshore environments, where it dives to the sea floor to forage. It preys mostly on marine invertebrates such as sea urchins, various molluscs and crustaceans, and some species of fish. Its foraging and eating habits are noteworthy in several respects. First, its use of rocks to dislodge prey and to open shells makes it one of the few mammal species to use tools. In most of its range, it is a keystone species, controlling sea urchin populations which would otherwise inflict extensive damage to kelp forest ecosystems. Its diet includes prey species that are also valued by humans as food, leading to conflicts between sea otters and fisheries. Sea otters, whose numbers were once estimated at 150,000 -- 300,000, were hunted extensively for their fur between 1741 and 1911, and the world population fell to 1,000 -- 2,000 individuals living in a fraction of their historic range. A subsequent international ban on hunting, conservation efforts, and reintroduction programs into previously populated areas have contributed to numbers rebounding, and the species occupies about two - thirds of its former range. The recovery of the sea otter is considered an important success in marine conservation, although populations in the Aleutian Islands and California have recently declined or have plateaued at depressed levels. For these reasons, the sea otter remains classified as an endangered species. (A cladogram of otter genera accompanies this section: Pteronura (giant otter); Lontra (4 species); Enhydra (sea otter); Hydrictis (spotted - necked otter); Lutra (2 species); Aonyx (African clawless); Amblonyx (Asian small - clawed); Lutrogale (smooth - coated).) The first scientific description of the sea otter is contained in the field notes of Georg Steller from 1751, and the species was described by Linnaeus in his Systema Naturae of 1758. Originally named Lutra marina, it underwent numerous name changes before being accepted as Enhydra lutris in 1922. The generic name Enhydra derives from the Ancient Greek en / εν "in '' and hydra / ύδρα "water '', meaning "in the water '', and the Latin word lutris, meaning "otter ''. The sea otter was formerly sometimes referred to as the "sea beaver '', being the marine fur - bearer similar in commercial value to the terrestrial beaver. Rodents (of which the beaver is one) are not closely related to otters, which are carnivorans. It is not to be confused with the marine otter, a rare otter species native to the southern west coast of South America. A number of other otter species, while predominantly living in fresh water, are commonly found in marine coastal habitats. The extinct sea mink of northeast North America is another mustelid that had adapted to a marine environment. The sea otter is the heaviest (the giant otter is longer, but significantly slimmer) member of the family Mustelidae, a diverse group that includes the 13 otter species and terrestrial animals such as weasels, badgers, and minks.
It is unique among the mustelids in not making dens or burrows, in having no functional anal scent glands, and in being able to live its entire life without leaving the water. The only member of the genus Enhydra, the sea otter is so different from other mustelid species that, as recently as 1982, some scientists believed it was more closely related to the earless seals. Genetic analysis indicates the sea otter and its closest extant relatives, which include the African speckle - throated otter, European otter, African clawless otter and oriental small - clawed otter, shared an ancestor approximately 5 Mya (million years ago). Fossil evidence indicates the Enhydra lineage became isolated in the North Pacific approximately 2 Mya, giving rise to the now - extinct Enhydra macrodonta and the modern sea otter, Enhydra lutris. One related species has been described, Enhydra reevei, from the Pleistocene of East Anglia. The modern sea otter evolved initially in northern Hokkaidō and Russia, and then spread east to the Aleutian Islands, mainland Alaska, and down the North American coast. In comparison to cetaceans, sirenians, and pinnipeds, which entered the water approximately 50, 40, and 20 Mya, respectively, the sea otter is a relative newcomer to a marine existence. In some respects, though, the sea otter is more fully adapted to water than pinnipeds, which must haul out on land or ice to give birth. The full genome of the northern sea otter (Enhydra lutris kenyoni) was sequenced in 2017 and may allow for examination of the sea otter 's evolutionary divergence from terrestrial mustelids. Three subspecies of the sea otter are recognized with distinct geographical distributions. Enhydra lutris lutris (nominate), the Asian sea otter, ranges from the Kuril Islands north of Japan to Russia 's Commander Islands in the western Pacific Ocean. In the eastern Pacific Ocean, E. l. kenyoni, the northern sea otter, is found from Alaska 's Aleutian Islands to Oregon and E. l. nereis, the southern sea otter, is native to central and southern California. The Asian sea otter is the largest subspecies and has a slightly wider skull and shorter nasal bones than both other subspecies. Northern sea otters possess longer mandibles (lower jaws) while southern sea otters have longer rostrums and smaller teeth. The sea otter is one of the smallest marine mammal species, but it is the heaviest mustelid. Male sea otters usually weigh 22 to 45 kg (49 to 99 lb) and are 1.2 to 1.5 m (3 ft 11 in to 4 ft 11 in) in length, though specimens to 54 kg (119 lb) have been recorded. Females are smaller, weighing 14 to 33 kg (31 to 73 lb) and measuring 1.0 to 1.4 m (3 ft 3 in to 4 ft 7 in) in length. For its size, the male otter 's baculum is very large, massive and bent upwards, measuring 150 mm (5.9 in) in length and 15 mm (0.59 in) at the base. Unlike most other marine mammals, the sea otter has no blubber and relies on its exceptionally thick fur to keep warm. With up to 150,000 strands of hair per square centimeter (nearly one million per sq in), its fur is the densest of any animal. The fur consists of long, waterproof guard hairs and short underfur; the guard hairs keep the dense underfur layer dry. Cold water is kept completely away from the skin and heat loss is limited. The fur is thick year - round, as it is shed and replaced gradually rather than in a distinct molting season. 
As the ability of the guard hairs to repel water depends on utmost cleanliness, the sea otter has the ability to reach and groom the fur on any part of its body, taking advantage of its loose skin and an unusually supple skeleton. The coloration of the pelage is usually deep brown with silver - gray speckles, but it can range from yellowish or grayish brown to almost black. In adults, the head, throat, and chest are lighter in color than the rest of the body. The sea otter displays numerous adaptations to its marine environment. The nostrils and small ears can close. The hind feet, which provide most of its propulsion in swimming, are long, broadly flattened, and fully webbed. The fifth digit on each hind foot is longest, facilitating swimming while on its back, but making walking difficult. The tail is fairly short, thick, slightly flattened, and muscular. The front paws are short with retractable claws, with tough pads on the palms that enable gripping slippery prey. The bones show osteosclerosis, increasing their density to reduce buoyancy. The sea otter propels itself underwater by moving the rear end of its body, including its tail and hind feet, up and down, and is capable of speeds of up to 9 km / h (5.6 mph). When underwater, its body is long and streamlined, with the short forelimbs pressed closely against the chest. When at the surface, it usually floats on its back and moves by sculling its feet and tail from side to side. At rest, all four limbs can be folded onto the torso to conserve heat, whereas on particularly hot days, the hind feet may be held underwater for cooling. The sea otter walks with a clumsy, rolling gait on land, and can run in a bounding motion. Long, highly sensitive whiskers and front paws help the sea otter find prey by touch when waters are dark or murky. Researchers have noted that when they approach in plain view, sea otters react more rapidly when the wind is blowing towards the animals, indicating the sense of smell is more important than sight as a warning sense. Other observations indicate the sea otter 's sense of sight is useful above and below the water, although not as good as that of seals. Its hearing is neither particularly acute nor poor. An adult 's 32 teeth, particularly the molars, are flattened and rounded for crushing rather than cutting food. Seals and sea otters are the only carnivores with two pairs of lower incisor teeth rather than three; the adult dental formula is 3.1.3.1 / 2.1.3.2 (upper / lower: incisors, canines, premolars, molars). The sea otter has a metabolic rate two or three times that of comparatively sized terrestrial mammals. It must eat an estimated 25 to 38 % of its own body weight in food each day to burn the calories necessary to counteract the loss of heat due to the cold water environment. Its digestive efficiency is estimated at 80 to 85 %, and food is digested and passed in as little as three hours. Most of its need for water is met through food, although, in contrast to most other marine mammals, it also drinks seawater. Its relatively large kidneys enable it to derive fresh water from sea water and excrete concentrated urine. The sea otter is diurnal. It has a period of foraging and eating in the morning, starting about an hour before sunrise, then rests or sleeps in mid-day.
Foraging resumes for a few hours in the afternoon and subsides before sunset, and a third foraging period may occur around midnight. Females with pups appear to be more inclined to feed at night. Observations of the amount of time a sea otter must spend each day foraging range from 24 to 60 %, apparently depending on the availability of food in the area. Sea otters spend much of their time grooming, which consists of cleaning the fur, untangling knots, removing loose fur, rubbing the fur to squeeze out water and introduce air, and blowing air into the fur. To casual observers, it appears as if the animals are scratching, but they are not known to have lice or other parasites in the fur. When eating, sea otters roll in the water frequently, apparently to wash food scraps from their fur. The sea otter hunts in short dives, often to the sea floor. Although it can hold its breath for up to five minutes, its dives typically last about one minute and no more than four. It is the only marine animal capable of lifting and turning over rocks, which it often does with its front paws when searching for prey. The sea otter may also pluck snails and other organisms from kelp and dig deep into underwater mud for clams. It is the only marine mammal that catches fish with its forepaws rather than with its teeth. Under each foreleg, the sea otter has a loose pouch of skin that extends across the chest. In this pouch (preferentially the left one), the animal stores collected food to bring to the surface. This pouch also holds a rock, unique to the otter, that is used to break open shellfish and clams. There, the sea otter eats while floating on its back, using its forepaws to tear food apart and bring it to its mouth. It can chew and swallow small mussels with their shells, whereas large mussel shells may be twisted apart. It uses its lower incisor teeth to access the meat in shellfish. To eat large sea urchins, which are mostly covered with spines, the sea otter bites through the underside where the spines are shortest, and licks the soft contents out of the urchin 's shell. The sea otter 's use of rocks when hunting and feeding makes it one of the few mammal species to use tools. To open hard shells, it may pound its prey with both paws against a rock on its chest. To pry an abalone off its rock, it hammers the abalone shell using a large stone, with observed rates of 45 blows in 15 seconds. Releasing an abalone, which can cling to rock with a force equal to 4,000 times its own body weight, requires multiple dives. Although each adult and independent juvenile forages alone, sea otters tend to rest together in single - sex groups called rafts. A raft typically contains 10 to 100 animals, with male rafts being larger than female ones. The largest raft ever seen contained over 2000 sea otters. To keep from drifting out to sea when resting and eating, sea otters may wrap themselves in kelp. A male sea otter is most likely to mate if he maintains a breeding territory in an area that is also favored by females. As autumn is the peak breeding season in most areas, males typically defend their territory only from spring to autumn. During this time, males patrol the boundaries of their territories to exclude other males, although actual fighting is rare. Adult females move freely between male territories, where they outnumber adult males by an average of five to one. Males that do not have territories tend to congregate in large, male - only groups, and swim through female areas when searching for a mate. 
The species exhibits a variety of vocal behaviors. The cry of a pup is often compared to that of a seagull. Females coo when they are apparently content; males may grunt instead. Distressed or frightened adults may whistle, hiss, or in extreme circumstances, scream. Although sea otters can be playful and sociable, they are not considered to be truly social animals. They spend much time alone, and each adult can meet its own needs in terms of hunting, grooming, and defense. Sea otters are polygynous: males have multiple female partners. However, temporary pair - bonding occurs for a few days between a female in estrus and her mate. Mating takes place in the water and can be rough, the male biting the female on the muzzle -- which often leaves scars on the nose -- and sometimes holding her head under water. Births occur year - round, with peaks between May and June in northern populations and between January and March in southern populations. Gestation appears to vary from four to twelve months, as the species is capable of delayed implantation followed by four months of pregnancy. In California, sea otters usually breed every year, about twice as often as those in Alaska. Birth usually takes place in the water and typically produces a single pup weighing 1.4 to 2.3 kg (3 to 5 lb). Twins occur in 2 % of births; however, usually only one pup survives. At birth, the eyes are open, ten teeth are visible, and the pup has a thick coat of baby fur. Mothers have been observed to lick and fluff a newborn for hours; after grooming, the pup 's fur retains so much air, the pup floats like a cork and can not dive. The fluffy baby fur is replaced by adult fur after about 13 weeks. Nursing lasts six to eight months in Californian populations and four to twelve months in Alaska, with the mother beginning to offer bits of prey at one to two months. The milk from a sea otter 's two abdominal nipples is rich in fat and more similar to the milk of other marine mammals than to that of other mustelids. A pup, with guidance from its mother, practices swimming and diving for several weeks before it is able to reach the sea floor. Initially, the objects it retrieves are of little food value, such as brightly colored starfish and pebbles. Juveniles are typically independent at six to eight months, but a mother may be forced to abandon a pup if she can not find enough food for it; at the other extreme, a pup may nurse until it is almost adult size. Pup mortality is high, particularly during an individual 's first winter -- by one estimate, only 25 % of pups survive their first year. Pups born to experienced mothers have the highest survival rates. Females perform all tasks of feeding and raising offspring, and have occasionally been observed caring for orphaned pups. Much has been written about the level of devotion of sea otter mothers for their pups -- a mother gives her infant almost constant attention, cradling it on her chest away from the cold water and attentively grooming its fur. When foraging, she leaves her pup floating on the water, sometimes wrapped in kelp to keep it from floating away; if the pup is not sleeping, it cries loudly until she returns. Mothers have been known to carry their pups for days after the pups ' deaths. Females become sexually mature at around three or four years of age and males at around five; however, males often do not successfully breed until a few years later. A captive male sired offspring at age 19. 
In the wild, sea otters live to a maximum age of 23 years, with average lifespans of 10 -- 15 years for males and 15 -- 20 years for females. Several captive individuals have lived past 20 years, and a female at the Seattle Aquarium died at the age of 28 years. Sea otters in the wild often develop worn teeth, which may account for their apparently shorter lifespans. There are several documented cases in which male sea otters have forcibly copulated with juvenile harbor seals, sometimes resulting in death. Similarly, forced copulation by sea otters involving animals other than Pacific harbor seals has occasionally been reported. Sea otters live in coastal waters 15 to 23 meters (50 to 75 ft) deep, and usually stay within a kilometer (2⁄3 mi) of the shore. They are found most often in areas with protection from the most severe ocean winds, such as rocky coastlines, thick kelp forests, and barrier reefs. Although they are most strongly associated with rocky substrates, sea otters can also live in areas where the sea floor consists primarily of mud, sand, or silt. Their northern range is limited by ice, as sea otters can survive amidst drift ice but not land - fast ice. Individuals generally occupy a home range a few kilometers long, and remain there year - round. The sea otter population is thought to have once been 150,000 to 300,000, stretching in an arc across the North Pacific from northern Japan to the central Baja California Peninsula in Mexico. The fur trade that began in the 1740s reduced the sea otter 's numbers to an estimated 1,000 to 2,000 members in 13 colonies. In about two - thirds of its former range, the species is at varying levels of recovery, with high population densities in some areas and threatened populations in others. Sea otters currently have stable populations in parts of the Russian east coast, Alaska, British Columbia, Washington, and California, with reports of recolonizations in Mexico and Japan. Population estimates made between 2004 and 2007 give a worldwide total of approximately 107,000 sea otters. Currently, the most stable and secure part of the sea otter 's range is Russia. Before the 19th century, around 20,000 to 25,000 sea otters lived near the Kuril Islands, with more near Kamchatka and the Commander Islands. After the years of the Great Hunt, the population in these areas, currently part of Russia, was only 750. By 2004, sea otters had repopulated all of their former habitat in these areas, with an estimated total population of about 27,000. Of these, about 19,000 are at the Kurils, 2,000 to 3,500 at Kamchatka and another 5,000 to 5,500 at the Commander Islands. Growth has slowed slightly, suggesting the numbers are reaching carrying capacity. Alaska is the heartland of the sea otter 's range. In 1973, the population in Alaska was estimated at between 100,000 and 125,000 animals. By 2006, though, the Alaska population had fallen to an estimated 73,000 animals. A massive decline in sea otter populations in the Aleutian Islands accounts for most of the change; the cause of this decline is not known, although orca predation is suspected. The sea otter population in Prince William Sound was also hit hard by the Exxon Valdez oil spill, which killed thousands of sea otters in 1989. Along the North American coast south of Alaska, the sea otter 's range is discontinuous. 
A remnant population survived off Vancouver Island into the 20th century, but it died out despite the 1911 international protection treaty, with the last sea otter taken near Kyuquot in 1929. From 1969 to 1972, 89 sea otters were flown or shipped from Alaska to the west coast of Vancouver Island. This population increased to over 5,600 in 2013, with an estimated annual growth rate of 7.2%, and their range on the island's west coast extended north to Cape Scott and across Queen Charlotte Strait to the Broughton Archipelago, and south to Clayoquot Sound and Tofino. In 1989, a separate colony was discovered on the central British Columbia coast. It is not known if this colony, which numbered about 300 animals in 2004, was founded by transplanted otters or was a remnant population that had gone undetected. By 2013, this population exceeded 1,100 individuals, was increasing at an estimated 12.6% annual rate, and its range included Aristazabal Island, and Milbanke Sound south to Calvert Island. In 2008, Canada determined the status of sea otters to be "special concern".

In 1969 and 1970, 59 sea otters were translocated from Amchitka Island to Washington, and released near La Push and Point Grenville. The translocated population is estimated to have declined to between 10 and 43 individuals before increasing, reaching 208 individuals in 1989. As of 2017, the population was estimated at over 2,000 individuals, and their range extends from Point Grenville in the south to Cape Flattery in the north and east to Pillar Point along the Strait of Juan de Fuca. In Washington, sea otters are found almost exclusively on the outer coasts, where they can swim as close as six feet offshore along the Olympic coast. Reported sightings of sea otters in the San Juan Islands and Puget Sound almost always turn out to be North American river otters, which are commonly seen along the seashore. However, biologists have confirmed isolated sightings of sea otters in these areas since the mid-1990s.

The last native sea otter in Oregon was probably shot and killed in 1906. In 1970 and 1971, a total of 95 sea otters were transplanted from Amchitka Island, Alaska, to the southern Oregon coast. However, this translocation effort failed and otters soon again disappeared from the state. In 2004, a male sea otter took up residence at Simpson Reef off Cape Arago for six months. This male is thought to have originated from a colony in Washington, but disappeared after a coastal storm. On 18 February 2009, a male sea otter was spotted in Depoe Bay off the Oregon coast; it could have traveled to the state from either California or Washington.

The historic population of California sea otters was estimated at 16,000 before the fur trade decimated the population, leading to their assumed extinction. Today's population of California sea otters is descended from a single colony of about 50 sea otters located near Bixby Creek Bridge in March 1938 by Howard G. Sharpe, owner of the nearby Rainbow Lodge on Bixby Bridge in Big Sur. Their principal range has gradually expanded and extends from Pigeon Point in San Mateo County to Santa Barbara County.

Sea otters were once numerous in San Francisco Bay. Historical records revealed that the Russian-American Company sneaked Aleuts into San Francisco Bay multiple times, despite the Spanish capturing or shooting them while hunting sea otters in the estuaries of San Jose, San Mateo, San Bruno and around Angel Island.
The founder of Fort Ross, Ivan Kuskov, finding otters scarce on his second voyage to Bodega Bay in 1812, sent a party of Aleuts to San Francisco Bay, where they met another Russian party and an American party, and caught 1,160 sea otters in three months. By 1817, sea otters in the area were practically eliminated, and the Russians sought permission from the Spanish and the Mexican governments to hunt further and further south of San Francisco. Remnant sea otter populations may have survived in the bay until 1840, when the Rancho Punta de Quentin was granted to Captain John B.R. Cooper, a sea captain from Boston, by Mexican Governor Juan Bautista Alvarado, along with a license to hunt sea otters, reportedly then prevalent at the mouth of Corte Madera Creek.

In the late 1980s, the USFWS relocated about 140 southern sea otters to San Nicolas Island in southern California, in the hope of establishing a reserve population should the mainland be struck by an oil spill. To the surprise of biologists, the majority of the San Nicolas sea otters swam back to the mainland. Another group of twenty swam 74 miles (119 km) north to San Miguel Island, where they were captured and removed. By 2005, only 30 sea otters remained at San Nicolas, although they were slowly increasing as they thrived on the abundant prey around the island. The plan that authorized the translocation program had predicted the carrying capacity would be reached within five to ten years. The spring 2016 count at San Nicolas Island was 104 sea otters, continuing a five-year positive trend of over 12% per year. Sea otters were observed twice in Southern California in 2011, once near Laguna Beach and once at Zuniga Point Jetty, near San Diego. These were the first documented sightings of otters this far south in 30 years.

When the USFWS implemented the translocation program, it also attempted to implement "zonal management" of the Californian population. To manage the competition between sea otters and fisheries, it declared an "otter-free zone" stretching from Point Conception to the Mexican border. In this zone, only San Nicolas Island was designated as sea otter habitat, and sea otters found elsewhere in the area were supposed to be captured and relocated. These plans were abandoned after many translocated otters died and it proved impractical to capture the hundreds of otters that ignored the zone boundary and swam into it. After engaging in a period of public commentary in 2005, the Fish and Wildlife Service failed to release a formal decision on the issue. Then, in response to lawsuits filed by the Santa Barbara-based Environmental Defense Center and the Otter Project, on 19 December 2012 the USFWS declared that the "no otter zone" experiment was a failure, and that it would protect otters re-colonizing the coast south of Point Conception as a threatened species.

The southern sea otter's range continuously expanded from the remnant population of about 50 individuals in Big Sur after protection in 1911; however, from 2007 to 2010 the otter population and its range contracted, and since 2010 they have made little progress. As of spring 2010, the northern boundary had moved from about Tunitas Creek to a point 2 km southeast of Pigeon Point, and the southern boundary had moved from approximately Coal Oil Point to Gaviota State Park. Recently, a toxin called microcystin, produced by a type of cyanobacteria (Microcystis), seems to be concentrated in the shellfish the otters eat, poisoning them.
Cyanobacteria are found in stagnant freshwater enriched with nitrogen and phosphorus from septic tank and agricultural fertilizer runoff, and may be flushed into the ocean when streamflows are high in the rainy season. A record number of sea otter carcasses were found on California's coastline in 2010, with shark attacks an increasing component of the mortality. Great white sharks do not appear to consume the relatively fat-poor sea otters, but shark-bitten carcasses have increased from 8% in the 1980s to 15% in the 1990s and to 30% in 2010 and 2011.

For southern sea otters to be considered for removal from threatened species listing, the U.S. Fish and Wildlife Service (USFWS) determined that the population should exceed 3,090 for three consecutive years. In response to recovery efforts, the population climbed steadily from the mid-20th century through the early 2000s, then remained relatively flat from 2005 to 2014 at just under 3,000. There was some contraction from the northern (now Pigeon Point) and southern limits of the sea otter's range during the end of this period, circumstantially related to an increase in lethal shark bites, raising concerns that the population had reached a plateau. However, the population increased markedly from 2015 to 2016, with the United States Geological Survey (USGS) California sea otter survey three-year average reaching 3,272 in 2016, the first time it exceeded the threshold for delisting from the Endangered Species Act (ESA). If populations continued to grow and ESA delisting occurred, southern sea otters would still be fully protected by state regulations and the Marine Mammal Protection Act, which set higher thresholds for protection, at approximately 8,400 individuals. However, ESA delisting seems unlikely due to a precipitous population decline recorded in the spring 2017 USGS sea otter survey count, from the 2016 high of 3,615 individuals to 2,688, a loss of 25% of the California sea otter population.

Sea otters consume over 100 prey species. In most of its range, the sea otter's diet consists almost exclusively of marine benthic invertebrates, including sea urchins, fat innkeeper worms, a variety of bivalves such as clams and mussels, abalone, other mollusks, crustaceans, and snails. Its prey ranges in size from tiny limpets and crabs to giant octopuses. Where prey such as sea urchins, clams, and abalone are present in a range of sizes, sea otters tend to select larger items over smaller ones of similar type. In California, they have been noted to ignore Pismo clams smaller than 3 inches (7 cm) across.

In a few northern areas, fish are also eaten. In studies performed at Amchitka Island in the 1960s, where the sea otter population was at carrying capacity, 50% of food found in sea otter stomachs was fish. The fish species were usually bottom-dwelling and sedentary or sluggish forms, such as Hemilepidotus hemilepidotus and members of the family Tetraodontidae. However, south of Alaska on the North American coast, fish are a negligible or extremely minor part of the sea otter's diet. Contrary to popular depictions, sea otters rarely eat starfish, and any kelp that is consumed apparently passes through the sea otter's system undigested.

The individuals within a particular area often differ in their foraging methods and prey types, and tend to follow the same patterns as their mothers.
The diet of local populations also changes over time, as sea otters can significantly deplete populations of highly preferred prey such as large sea urchins, and prey availability is also affected by other factors, such as fishing by humans. Sea otters can thoroughly remove abalone from an area except for specimens in deep rock crevices; however, they never completely wipe out a prey species from an area. A 2007 Californian study demonstrated that, in areas where food was relatively scarce, a wider variety of prey was consumed. Surprisingly, though, the diets of individuals were more specialized in these areas than in areas where food was plentiful.

Sea otters are a classic example of a keystone species; their presence affects the ecosystem more profoundly than their size and numbers would suggest. They keep the population of certain benthic (sea floor) herbivores, particularly sea urchins, in check. Sea urchins graze on the lower stems of kelp, causing the kelp to drift away and die. Loss of the habitat and nutrients provided by kelp forests leads to profound cascade effects on the marine ecosystem. North Pacific areas that do not have sea otters often turn into urchin barrens, with abundant sea urchins and no kelp forest. Kelp forests are extremely productive ecosystems that sequester (absorb and capture) CO2 from the atmosphere through photosynthesis, so sea otters may help mitigate effects of climate change through their cascading trophic influence. Reintroduction of sea otters to British Columbia has led to a dramatic improvement in the health of coastal ecosystems, and similar changes have been observed as sea otter populations recovered in the Aleutian and Commander Islands and on the Big Sur coast of California. However, some kelp forest ecosystems in California have also thrived without sea otters, with sea urchin populations apparently controlled by other factors. The role of sea otters in maintaining kelp forests has been observed to be more important in areas of open coast than in more protected bays and estuaries. Sea otters also affect rocky ecosystems that are dominated by mussel beds by removing mussels from rocks; this allows space for competing species and increases species diversity.

Predation on sea otters is not common, as many predators find the otter's pungent scent glands distasteful. Leading mammalian predators of this species include orcas and sea lions, and bald eagles may grab pups from the surface of the water. Young predators may kill an otter and not eat it. On land, young sea otters may face attack from bears and coyotes. In California, great white sharks are their primary predator, but there is no evidence that the sharks actually eat them. Urban runoff transporting cat feces into the ocean brings Toxoplasma gondii, an obligate parasite, which has killed sea otters. Parasitic infections of Sarcocystis neurona are also associated with human activity. According to the U.S. Geological Survey and the CDC, northern sea otters off Washington have been infected with the H1N1 flu virus and "may be a newly identified animal host of influenza viruses".

Sea otters have the thickest fur of any mammal, and that fur has long made them a prime target for hunters. Archaeological evidence indicates that for thousands of years, indigenous peoples have hunted sea otters for food and fur.
Large-scale hunting, part of the Maritime Fur Trade, which would eventually kill approximately one million sea otters, began in the 18th century when hunters and traders began to arrive from all over the world to meet foreign demand for otter pelts, which were one of the world's most valuable types of fur.

In the early 18th century, Russians began to hunt sea otters in the Kuril Islands and sold them to the Chinese at Kyakhta. Russia was also exploring the far northern Pacific at this time, and sent Vitus Bering to map the Arctic coast and find routes from Siberia to North America. In 1741, on his second North Pacific voyage, Bering was shipwrecked off Bering Island in the Commander Islands, where he and many of his crew died. The surviving crew members, who included naturalist Georg Steller, discovered sea otters on the beaches of the island and spent the winter hunting sea otters and gambling with otter pelts. They returned to Siberia, having killed nearly 1,000 sea otters, and were able to command high prices for the pelts. Thus began what is sometimes called the "Great Hunt", which would continue for another hundred years. The Russians found the sea otter far more valuable than the sable skins that had driven and paid for most of their expansion across Siberia. If the sea otter pelts brought back by Bering's survivors had been sold at Kyakhta prices, they would have paid for one tenth the cost of Bering's expedition. In 1775 at Okhotsk, sea otter pelts were worth 50-80 rubles, as opposed to 2.5 rubles for sable.

Russian fur-hunting expeditions soon depleted the sea otter populations in the Commander Islands, and by 1745, they began to move on to the Aleutian Islands. The Russians initially traded with the Aleut inhabitants of these islands for otter pelts, but later enslaved the Aleuts, taking women and children hostage and torturing and killing Aleut men to force them to hunt. Many Aleuts were either murdered by the Russians or died from diseases the hunters had introduced. The Aleut population was reduced, by the Russians' own estimate, from 20,000 to 2,000. By the 1760s, the Russians had reached Alaska. In 1799, Emperor Paul I consolidated the rival fur-hunting companies into the Russian-American Company, granting it an imperial charter and protection, and a monopoly over trade rights and territorial acquisition. Under Aleksandr I, the administration of the merchant-controlled company was transferred to the Imperial Navy, largely due to the alarming reports by naval officers of native abuse; in 1818, the indigenous peoples of Alaska were granted civil rights equivalent to a townsman status in the Russian Empire.

Other nations joined in the hunt in the south. Along the coasts of what is now Mexico and California, Spanish explorers bought sea otter pelts from Native Americans and sold them in Asia. In 1778, British explorer Captain James Cook reached Vancouver Island and bought sea otter furs from the First Nations people. When Cook's ship later stopped at a Chinese port, the pelts rapidly sold at high prices, and were soon known as "soft gold". As word spread, people from all over Europe and North America began to arrive in the Pacific Northwest to trade for sea otter furs. Russian hunting expanded to the south, initiated by American ship captains, who subcontracted Russian supervisors and Aleut hunters in what are now Washington, Oregon, and California.
Between 1803 and 1846, 72 American ships were involved in the otter hunt in California, harvesting an estimated 40,000 skins and tails, compared to only 13 ships of the Russian-American Company, which reported 5,696 otter skins taken between 1806 and 1846. In 1812, the Russians founded an agricultural settlement at what is now Fort Ross in northern California, as their southern headquarters.

Eventually, sea otter populations became so depleted that commercial hunting was no longer viable. Hunting had stopped in the Aleutian Islands by 1808 as a conservation measure imposed by the Russian-American Company, and further restrictions were ordered by the Company in 1834. When Russia sold Alaska to the United States in 1867, the Alaska population had recovered to over 100,000, but Americans resumed hunting and quickly extirpated the sea otter again. Prices rose as the species became rare. During the 1880s, a pelt brought $105 to $165 in the London market, but by 1903, a pelt could be worth as much as $1,125. In 1911, Russia, Japan, Great Britain (for Canada) and the United States signed the Treaty for the Preservation and Protection of Fur Seals, imposing a moratorium on the harvesting of sea otters. So few remained, perhaps only 1,000-2,000 individuals in the wild, that many believed the species would become extinct.

During the 20th century, sea otter numbers rebounded in about two-thirds of their historic range, a recovery considered one of the greatest successes in marine conservation. However, the IUCN still lists the sea otter as an endangered species, and describes the significant threats to sea otters as oil pollution, predation by orcas, poaching, and conflicts with fisheries -- sea otters can drown if entangled in fishing gear. The hunting of sea otters is no longer legal except for limited harvests by indigenous peoples in the United States. Poaching was a serious concern in the Russian Far East immediately after the collapse of the Soviet Union in 1991; however, it has declined significantly with stricter law enforcement and better economic conditions.

The most significant threat to sea otters is oil spills, to which they are particularly vulnerable, since they rely on their fur to keep warm. When their fur is soaked with oil, it loses its ability to retain air, and the animals can quickly die from hypothermia. The liver, kidneys, and lungs of sea otters also become damaged after they inhale oil or ingest it when grooming. The Exxon Valdez oil spill of 24 March 1989 killed thousands of sea otters in Prince William Sound, and as of 2006, the lingering oil in the area continues to affect the population. Describing the public sympathy for sea otters that developed from media coverage of the event, a U.S. Fish and Wildlife Service spokesperson wrote: "As a playful, photogenic, innocent bystander, the sea otter epitomized the role of victim... cute and frolicsome sea otters suddenly in distress, oiled, frightened, and dying, in a losing battle with the oil."

The small geographic ranges of the sea otter populations in California, Washington, and British Columbia mean a single major spill could be catastrophic for that state or province. Prevention of oil spills and preparation to rescue otters if one happens is a major focus for conservation efforts. Increasing the size and range of sea otter populations would also reduce the risk of an oil spill wiping out a population.
However, because of the species' reputation for depleting shellfish resources, advocates for commercial, recreational, and subsistence shellfish harvesting have often opposed allowing the sea otter's range to increase, and there have even been instances of fishermen and others illegally killing them.

In the Aleutian Islands, a massive and unexpected disappearance of sea otters has occurred in recent decades. In the 1980s, the area was home to an estimated 55,000 to 100,000 sea otters, but the population fell to around 6,000 animals by 2000. The most widely accepted, but still controversial, hypothesis is that killer whales have been eating the otters. The pattern of disappearances is consistent with a rise in predation, but there has been no direct evidence of orcas preying on sea otters to any significant extent.

Another area of concern is California, where recovery began to fluctuate or decline in the late 1990s. Unusually high mortality rates amongst adult and subadult otters, particularly females, have been reported. In 2017, the US Geological Survey found a 3% drop in the sea otter population of the California coast. This number still keeps them on track for removal from the endangered species list, although just barely. Necropsies of dead sea otters indicate diseases, particularly Toxoplasma gondii and acanthocephalan parasite infections, are major causes of sea otter mortality in California. The Toxoplasma gondii parasite, which is often fatal to sea otters, is carried by wild and domestic cats and may be transmitted by domestic cat droppings flushed into the ocean via sewage systems. Although disease has clearly contributed to the deaths of many of California's sea otters, it is not known why the California population is apparently more affected by disease than populations in other areas.

Sea otter habitat is preserved through several protected areas in the United States, Russia and Canada. In marine protected areas, polluting activities such as dumping of waste and oil drilling are typically prohibited. An estimated 1,200 sea otters live within the Monterey Bay National Marine Sanctuary, and more than 500 live within the Olympic Coast National Marine Sanctuary.

Some of the sea otter's preferred prey species, particularly abalone, clams, and crabs, are also food sources for humans. In some areas, massive declines in shellfish harvests have been blamed on the sea otter, and intense public debate has taken place over how to manage the competition between sea otters and humans for seafood. The debate is complicated because sea otters have sometimes been held responsible for declines of shellfish stocks that were more likely caused by overfishing, disease, pollution, and seismic activity. Shellfish declines have also occurred in many parts of the North American Pacific coast that do not have sea otters, and conservationists sometimes note that the existence of large concentrations of shellfish on the coast is a recent development resulting from the fur trade's near-extirpation of the sea otter. Although many factors affect shellfish stocks, sea otter predation can deplete a fishery to the point where it is no longer commercially viable. Scientists agree that sea otters and abalone fisheries cannot exist in the same area, and the same is likely true for certain other types of shellfish as well. Many facets of the interaction between sea otters and the human economy are not as immediately felt.
Sea otters have been credited with contributing to the kelp harvesting industry via their well-known role in controlling sea urchin populations; kelp is used in the production of diverse food and pharmaceutical products. Although human divers harvest red sea urchins both for food and to protect the kelp, sea otters hunt more sea urchin species and are more consistently effective in controlling these populations. The health of the kelp forest ecosystem is significant in nurturing populations of fish, including commercially important fish species. In some areas, sea otters are popular tourist attractions, bringing visitors to local hotels, restaurants, and sea otter-watching expeditions.

[Image captions: left, an Aleut sea otter amulet in the form of a mother with pup; above, an Aleut carving of a sea otter hunt on a whalebone spear. Both items are on display at the Peter the Great Museum of Anthropology and Ethnography in St. Petersburg. Articles depicting sea otters were considered to have magical properties.]

For many maritime indigenous cultures throughout the North Pacific, especially the Ainu in the Kuril Islands, the Koryaks and Itelmen of Kamchatka, the Aleut in the Aleutian Islands, the Haida of Haida Gwaii and a host of tribes on the Pacific coast of North America, the sea otter has played an important role as a cultural, as well as material, resource. In these cultures, many of which have strongly animist traditions full of legends and stories in which many aspects of the natural world are associated with spirits, the sea otter was considered particularly kin to humans. The Nuu-chah-nulth, Haida, and other First Nations of coastal British Columbia used the warm and luxurious pelts as chiefs' regalia. Sea otter pelts were given in potlatches to mark coming-of-age ceremonies, weddings, and funerals. The Aleuts carved sea otter bones for use as ornaments and in games, and used powdered sea otter baculum as a medicine for fever.

Among the Ainu, the otter is portrayed as an occasional messenger between humans and the creator, and it is a recurring figure in Ainu folklore. A major Ainu epic, the Kutune Shirka, tells the tale of wars and struggles over a golden sea otter. Versions of a widespread Aleut legend tell of lovers or despairing women who plunge into the sea and become otters. These links have been associated with the many human-like behavioral features of the sea otter, including apparent playfulness, strong mother-pup bonds and tool use, yielding to ready anthropomorphism. The beginning of commercial exploitation had a great impact on the human, as well as animal, populations: the Ainu and Aleuts have been displaced or their numbers are dwindling, while the coastal tribes of North America, where the otter is in any case greatly depleted, no longer rely as intimately on sea mammals for survival.

Since the mid-1970s, the beauty and charisma of the species have gained wide appreciation, and the sea otter has become an icon of environmental conservation. The round, expressive face and soft, furry body of the sea otter are depicted in a wide variety of souvenirs, postcards, clothing, and stuffed toys. Sea otters can do well in captivity, and are featured in over 40 public aquariums and zoos. The Seattle Aquarium became the first institution to raise sea otters from conception to adulthood, with the birth of Tichuk in 1979, followed by three more pups in the early 1980s.
In 2007, a YouTube video of two sea otters holding paws drew 1.5 million viewers in two weeks, and had over 20 million views as of January 2015. Filmed five years previously at the Vancouver Aquarium, it was YouTube's most popular animal video at the time, although it has since been surpassed. The lighter-colored otter in the video is Nyac, a survivor of the 1989 Exxon Valdez oil spill. Nyac died in September 2008, at the age of 20. Milo, the darker one, died of lymphoma in January 2012.
who was the first superhero in dc comics
List of superhero debuts - wikipedia The following is a list of the first known appearances of various superhero fictional characters and teams. A superhero (also known as a super hero or super-hero) is a fictional character "of unprecedented physical prowess dedicated to acts of derring-do in the public interest." Since the debut of Superman in 1938 by Jerry Siegel and Joe Shuster, stories of superheroes -- ranging from brief episodic adventures to continuing years-long sagas -- have dominated American comic books and crossed over into other media. A female superhero is sometimes called a superheroine or super heroine. By most definitions, characters need not have actual superhuman powers to be deemed superheroes, although sometimes terms such as costumed crimefighters are used to refer to those without such powers who have many other common traits of superheroes. For a list of comic book supervillain debuts, see List of comic book supervillain debuts.

The folkloric Spring-heeled Jack came to be featured in a series of penny dreadfuls as a villain; in later serials, he was reinvented as a crime-fighter with a disguise, secret lair, and gadgets, hallmarks of superheroes. Despite its short run, it is widely seen as the earliest superhero fiction comic. Often cited as perhaps the earliest superhero akin to those later popularized through American comic books.

Evelyn Gaines (Strong Man, Cuckoo Man, Tornado Man, Rope Man, Diaper Man) -- William Hanna & Joseph Barbera
write one feature of the french society of the 18th century
18th century - wikipedia The 18th century lasted from January 1, 1701 to December 31, 1800 in the Gregorian calendar.

During the 18th century, the Enlightenment culminated in the French and American revolutions. Philosophy and science increased in prominence. Philosophers dreamed of a brighter age. This dream turned into a reality with the French Revolution of 1789, though it was later compromised by the excesses of the Reign of Terror (1793-1794) under Maximilien Robespierre. At first, many monarchies of Europe embraced Enlightenment ideals, but with the French Revolution they feared losing their power and formed broad coalitions for the counter-revolution.

The Ottoman Empire experienced an unprecedented period of peace and economic expansion, taking part in no European wars from 1740 to 1768. As a consequence, the empire did not share in Europe's military improvements during the Seven Years' War (1756-1763), causing its military to fall behind and suffer defeats against Russia in the second half of the century.

Music was a key part of the 18th century. The first half, known as the Baroque period, saw the likes of Johann Sebastian Bach and George Frideric Handel. The latter half, known as the Classical period, saw the emergence of new forms such as the symphony, and the likes of Joseph Haydn and Wolfgang Amadeus Mozart.

The 18th century also marked the end of the Polish-Lithuanian Commonwealth as an independent state. The once-powerful and vast kingdom, which had once conquered Moscow and defeated great Ottoman armies, collapsed under numerous invasions. Its semi-democratic government system was not robust enough to rival the neighboring monarchies of the Kingdom of Prussia, the Russian Empire and the Austrian Empire, which divided the Commonwealth territories between themselves, changing the landscape of Central European politics for the next hundred years.

European colonization of the Americas and other parts of the world intensified, and the associated mass migrations of people grew in size as the Age of Sail continued. Great Britain became a major power worldwide with the defeat of France in North America in the 1760s and the conquest of large parts of India. However, Britain lost many of its North American colonies after the American Revolution, which resulted in the formation of the newly independent United States. The Industrial Revolution started in Britain in the 1770s with the production of the improved steam engine. Despite its modest beginnings in the 18th century, steam-powered machinery would radically change human society and the environment.

Western historians have occasionally defined the 18th century otherwise for the purposes of their work. For example, the "short" 18th century may be defined as 1715-1789, denoting the period of time between the death of Louis XIV of France and the start of the French Revolution, with an emphasis on directly interconnected events. To historians who expand the century to include larger historical movements, the "long" 18th century may run from the Glorious Revolution of 1688 to the Battle of Waterloo in 1815 or even later.
when did the dallas cowboys win their first super bowl
History of the Dallas Cowboys - wikipedia The Dallas Cowboys were the NFL's first modern-era expansion team. The NFL was late in awarding Dallas a franchise; after Lamar Hunt was rebuffed in his efforts to acquire an NFL franchise for Dallas, he became part of a group of owners that formed the American Football League, with Hunt's AFL franchise in Dallas known as the Texans (later to become the Kansas City Chiefs). In an effort not to cede the South to the AFL, the NFL awarded Dallas a franchise, but not until after the 1960 college draft had been held. As a result, the NFL's first ever expansion team played its inaugural season without the benefit of a college draft.

Originally, the formation of an NFL expansion team in Texas was met with strong opposition from Washington Redskins owner George Preston Marshall. This was no surprise: despite being located in the nation's capital, Marshall's Redskins had enjoyed a monopoly as the only NFL team to represent the American South for several decades. Anticipating this opposition, would-be team owners Clint Murchison Jr. and Bedford Wynne, to ensure the birth of their expansion team, bought the rights to the Redskins' fight song, "Hail to the Redskins", and threatened to refuse to allow the song to be played at games. Needing the song, which had become a staple for his "professional football team of Dixie", Marshall changed his mind, and the city of Dallas was granted an NFL franchise on January 28, 1960. This early confrontation between the two franchises helped to trigger what would become one of the more heated National Football League rivalries, which continues to this day.

The team was first known as the Dallas Steers, then the Dallas Rangers. On March 19, 1960, the organization announced that it would be called the Cowboys to avoid confusion with the American Association Dallas Rangers baseball team. The founding investors of the Dallas Cowboys were Clint Murchison, Jr. (45%) and John D. Murchison (45%), along with minority shareholders Toddie Lee and Bedford Wynne (Director and Secretary) (5%) and William R. Hawn (5%). The new owners subsequently hired Tex Schramm as general manager, Gil Brandt as player personnel director, and Tom Landry as head coach.

The Cowboys began play in 1960, and played their home games a few miles east of downtown Dallas at the Cotton Bowl. For their first three seasons, they shared this stadium with the Dallas Texans (now the Kansas City Chiefs franchise), who began play in the American Football League that same year. The 1960 Cowboys finished their inaugural campaign 0-11-1 with a roster largely made up of sub-par players, many well past their prime. The following year, the Cowboys made their first college draft selection, taking TCU Horned Frogs defensive tackle Bob Lilly with the 13th pick in the draft (although the Cowboys finished with the league's worst record in 1960, the first overall selection in the 1961 draft was given to the expansion Minnesota Vikings). The 1961 season also saw the Cowboys pick up their first victory in franchise history, a win over the Pittsburgh Steelers in the first game of the season; the Cowboys, interestingly enough, had played the Steelers in their first ever regular season game only the year before. The Cowboys finished their second campaign with an overall 4-9-1 record. In 1962, Dallas improved slightly, going 5-8-1.
After the season, the Cowboys became the sole professional football franchise in the Dallas-Fort Worth area, as the AFL's Dallas Texans, despite winning the 1962 AFL championship by a score of 20-17 in double overtime, moved to Kansas City and became the Kansas City Chiefs. The Chiefs would eventually join the NFL as part of the 1970 AFL-NFL merger. In 1963, Dallas fell back to 4-10. In 1964, they posted another 5-8-1 campaign. During this period, the Cowboys had the misfortune of being associated with the city where President Kennedy was assassinated. The Cowboys' success later in the decade contributed greatly to restoring civic pride in Dallas after the assassination.

During the early and mid-1960s, the Cowboys gradually built a contender. Quarterback Don Meredith was acquired in 1960; running back Don Perkins, linebacker Chuck Howley and Lilly were acquired in 1961, linebacker Lee Roy Jordan in 1963, cornerback Mel Renfro in 1964, and wide receiver Bob Hayes and running back Dan Reeves in 1965. In 1965, the Cowboys went 7-7, achieving a .500 record for the first time. In 1966, the Cowboys posted their first winning season, finishing atop the Eastern Conference with a 10-3-1 record. Dallas sent eight players to the Pro Bowl, including Howley, Meredith, Perkins, and future Pro Football Hall of Fame members Hayes, Lilly, and Renfro.

In their first-ever postseason appearance, the Cowboys faced the Green Bay Packers in the 1966 NFL Championship Game, with a trip to the first ever Super Bowl on the line. Green Bay defeated Dallas in a 34-27 thriller by stopping the Cowboys on a goal line stand with 28 seconds remaining, and went on to win Super Bowl I 35-10 against the Kansas City Chiefs. Despite this disappointment, 1966 marked the start of an NFL-record eight consecutive postseason appearances for the Cowboys. (Dallas later broke its own record with nine consecutive trips to the playoffs between 1975 and 1983, a record that would be tied by the Indianapolis Colts when they reached the playoffs every year from 2002 to 2010, inclusive.) It was also the beginning of a still-NFL-record streak of 20 consecutive winning seasons that would extend all the way through 1985.

In 1967, the Cowboys finished with a 9-5 record and had their first playoff victory, a 52-14 rout of the Cleveland Browns. They went on to face the Packers in the 1967 NFL Championship Game, a rematch of the previous year's title game, with the winner advancing to Super Bowl II. The game, played on December 31, 1967, at Lambeau Field in Green Bay, turned out to be the coldest NFL game in history (about -13 °F with a -40° wind chill). The Cowboys lost 21-17 on a one-yard quarterback sneak by Packers quarterback Bart Starr with 16 seconds remaining. The game would later become known as the "Ice Bowl". Green Bay would go on to win the Super Bowl again, this time against the Oakland Raiders.

Dallas remained one of the NFL's top teams for the remainder of the 1960s, easily winning their division in 1968 (with a 12-2 record) and in 1969 (with an 11-2-1 mark). Each season, however, ended with a disappointing, decisive loss to the Cleveland Browns. The Browns would in turn lose in the NFL championship game to the Baltimore Colts and Minnesota Vikings in the 1968 and 1969 seasons, respectively. The 1968 Colts and 1969 Vikings subsequently lost Super Bowl III and Super Bowl IV to the New York Jets and Kansas City Chiefs, respectively.
Repeated failures to achieve their ultimate goal earned the Cowboys the nickname "Next Year's Champions" and a reputation for not being able to "win the big one." Peter Gent, a wide receiver with Dallas from 1964 to 1968, later wrote a book called North Dallas Forty based on his observations and experiences with the team. The book was made into a movie of the same name in 1979. The book and movie depicted many of the team's players as carousing, drug-abusing partiers callously used by the team and then tossed aside when they became too injured to continue playing productively.

In 1969, ground was broken on a new stadium for the Cowboys to replace the Cotton Bowl. Texas Stadium in Irving, a Dallas suburb, would be completed during the 1971 season. At the end of the decade, the historians Robert A. Calvert, Donald E. Chipman, and Randolph Campbell wrote The Dallas Cowboys and the NFL, an inside study of the organization and financing of the team. A reviewer describes the Cowboys as a vital cog of "an industry that occupies an important segment of American time and attention... a sophisticated industry that has worked out complex statistics to select the best thrower of a forward pass... (and) has reformed television habits..."

In the 1970s, the NFL underwent many changes as it absorbed the AFL and became a unified league, and the Cowboys underwent many changes as well. Meredith and Perkins retired in 1969, and new players were joining the organization, such as Cliff Harris and Pro Football Hall of Famers Rayfield Wright, Mike Ditka, Herb Adderley and Roger Staubach. Led by quarterback Craig Morton, the Cowboys had a 10-4 season in 1970. A 38-0 shutout by the Cardinals was the low point of the year, but the team recovered to make it to the playoffs. They defeated Detroit 5-0 in the lowest-scoring playoff game in NFL history, then defeated San Francisco 17-10 in the first-ever NFC Championship Game to qualify for their first Super Bowl appearance in franchise history, a mistake-filled Super Bowl V, which they lost 16-13 to the Baltimore Colts courtesy of a field goal by Colts kicker Jim O'Brien with five seconds remaining in the contest. Despite the loss, linebacker Chuck Howley was named the Super Bowl MVP, the first and only time in Super Bowl history that the game's MVP did not come from the winning team.

The Cowboys moved from the Cotton Bowl to Texas Stadium in week six of the 1971 season. Although the first game in their new home was a 44-21 victory over New England, Dallas stumbled out of the gate by going 4-3 in the first half of the season, including losses to the mediocre New Orleans Saints and Chicago Bears. Landry named Staubach the permanent starting quarterback for the second half of the season, and Dallas was off and running. The Cowboys won their last seven regular season games before dispatching the Minnesota Vikings and San Francisco 49ers in the playoffs to return to the Super Bowl. In Super Bowl VI, behind an MVP performance from Staubach, the Cowboys crushed the upstart Miami Dolphins, 24-3, to finally bury the "Next Year's Champions" stigma. That game remains the only Super Bowl to date in which one of the teams did not score a touchdown: the Cowboys rushed for a then-Super Bowl-record 252 yards while holding the Dolphins, who would go 17-0 in 1972, to 185 total yards.
The 1972 season was another winning one for the Cowboys, but their 10-4 record was only good enough for them to make the playoffs as a wild-card team. In the divisional playoffs they faced the San Francisco 49ers, who built a 28-13 lead and seemed to have avenged their playoff losses to Dallas in the two previous seasons. But after Landry benched Morton, Staubach threw two touchdown passes with less than two minutes remaining -- including the game-winner to Ron Sellers -- for a miraculous 30-28 Dallas win, the first of several dramatic comebacks led by Staubach during the 1970s. However, Dallas was defeated by its archrival, the Washington Redskins, 26-3 in the NFC Championship Game.

The Cowboys were now growing in popularity not just in Dallas, but nationwide. Their televised appearances in Thanksgiving Day games beginning in 1966 helped bring the Cowboys to a nationwide audience. Under Coach Landry, the so-called "Doomsday Defense" became a powerful and dominating force in the NFL, and the offense was also exciting to watch. Dallas had also established itself as the most innovative franchise off the field. It was the first to use computers in scouting, the first to have a modern cheerleading squad performing sophisticated choreographed routines, and the first to broadcast games in Spanish. General manager Schramm became the most powerful GM in the NFL; it was he who pushed the league to adopt changes such as relocating the goal posts to the back of the end zone and (after the 1974 season) the use of instant replay. While Pittsburgh would win more Super Bowls in the 1970s, Dallas emerged as the "glamour" team of the decade.

In 1973, Dallas finished 10-4 and won the NFC East. In the playoffs, they defeated the Los Angeles Rams 27-16 before losing in the NFC Championship Game to the Minnesota Vikings, 27-10. The Cowboys faltered slightly in 1974, finishing 8-6 and missing the playoffs for the first time in nine years. Bob Lilly retired following the season, capping his 14-year Hall of Fame career.

After missing the playoffs in 1974, the team drafted well the following year, adding defensive lineman Randy White (a future Hall of Fame member) and linebacker Thomas "Hollywood" Henderson. The fresh influx of talent helped the Cowboys back to the playoffs in 1975 as a wild card, where they beat the Minnesota Vikings and Los Angeles Rams to advance to Super Bowl X, which they lost to the Pittsburgh Steelers, 21-17. In 1976, the team went 11-3, winning the NFC East; however, they were quickly eliminated from the playoffs with a 14-12 loss to the Rams.

The Cowboys began the 1977 season 8-0 before losing in consecutive weeks to the St. Louis Cardinals in a Monday night home game and the Steelers in Pittsburgh. After the losses, however, the Cowboys won their final four regular season games, and Dallas finished with both the #1 defense and #1 offense in the NFL. In the postseason, the Cowboys routed the Chicago Bears 37-7 and the Minnesota Vikings 23-6 before defeating the Denver Broncos 27-10 in Super Bowl XII in New Orleans. As a testament to Doomsday's dominance in the hard-hitting game, defensive linemen Randy White and Harvey Martin were named co-Super Bowl MVPs, the first and only time multiple players have received the award.
After a slow start in 1978, Dallas won its final six regular season games to finish the season at 12-4. After an unexpectedly close divisional playoff game against the Atlanta Falcons at Texas Stadium, the Cowboys traveled to Los Angeles and shut out the Rams in the NFC Championship Game 28-0 to return to the Super Bowl. In Super Bowl XIII, Dallas faced the Steelers at the Orange Bowl in Miami. The Steelers outlasted the Cowboys 35-31, despite a furious comeback that saw Dallas score two touchdowns late in the fourth quarter; the game was not decided until the final 22 seconds, when a Dallas onside kick failed. Bob Ryan, an NFL Films editor, would dub the Cowboys "America's Team" following this season, a nickname that has earned derision from non-Cowboys fans but has stuck through both good times and bad.

Dallas finished the 1979 season 11-5. The team slumped in November, but rallied to win its next two games. This set the stage for the regular season finale against Washington; the winner would capture the NFC East title while the loser would miss the playoffs. In the game, Texas Stadium fans were treated to one of Staubach's greatest comebacks -- which would also turn out to be his last. The Cowboys trailed 17-0, but then scored three touchdowns to take the lead. Led by running back John Riggins, the Redskins came back to build a 34-21 lead, but the Cowboys scored two touchdowns in the final five minutes -- including a Staubach touchdown pass to Tony Hill with less than a minute remaining -- for a stunning 35-34 victory. The season ended with a whimper, however, as two weeks later, the underdog Rams traveled to Dallas and upset the Cowboys 21-19 in the divisional round of the playoffs. The Cowboys had a chance to win in the final two minutes after the Rams scored their final touchdown, but the Rams' defense stopped Staubach from making the kind of miracle comeback he had become famous for. The Rams would go on to win the NFC Championship Game against the Tampa Bay Buccaneers 9-0 and reach Super Bowl XIV, which they lost to the defending champion, Pittsburgh, by a score of 31-19. This game marked the end of an era, as repeated concussions compelled Staubach to announce his retirement a few months later in an emotional press conference at Texas Stadium. All told, from 1970 through 1979, the Cowboys won 105 regular season games, more than any other NFL franchise during that span.

Danny White became the Cowboys' starting quarterback in 1980. Without Staubach, not much was expected of the Cowboys, but they surprised everyone with a 12-4 regular season. Philadelphia also finished 12-4, but got the division title on a close tiebreaker. The Cowboys won the wild card game at home against the Rams, then White engineered a late comeback to win the divisional playoff game in Atlanta. Dallas faced the Eagles in the NFC Championship Game, but suffered a highly embarrassing 20-7 loss to their division rival at Veterans Stadium.

Dallas started the 1981 season 4-0, and captured the NFC East crown with another 12-4 record. Dallas dismantled the Tampa Bay Buccaneers in the divisional playoff 38-0, then traveled to San Francisco to face the 49ers in what would become one of the most famous NFC Championship Games in NFL history. Dallas led 27-21 with just under five minutes to play in the fourth quarter and appeared to be headed to its sixth Super Bowl appearance in franchise history.
However, San Francisco quarterback Joe Montana led a long 49er drive that was capped by his touchdown pass to Dwight Clark with 51 seconds remaining. Dallas was not finished just yet, needing only a field goal to win. A White completion to Drew Pearson moved the ball into 49er territory and almost went for a touchdown. Two plays later, though, White fumbled after being hit, and San Francisco recovered to seal a 28-27 victory. San Francisco would go on to win Super Bowl XVI. Clark's leaping grab in the end zone would come to be famous as "The Catch", and represented a changing of the guard in the NFC from the dominant Cowboys teams of the 1970s to the dominant 49ers teams of the 1980s.

Dallas finished the strike-shortened 1982 season with a record of 6-3. The Cowboys held a one-game lead over the Redskins with two games to play in the regular season, but fell at home to Philadelphia, then lost a Monday night match in Minnesota (a game best known for Tony Dorsett's NFL-record 99-yard touchdown run). Dallas played two home games in the unusual postseason "Super Bowl Tournament", defeating Tampa Bay and Green Bay. In the NFC Championship Game, Washington defeated Dallas 31-17 at RFK Stadium. This finished a remarkable run that saw the Cowboys play in 10 of 13 conference championship games.

The Cowboys opened the 1983 season in impressive fashion, erasing a 23-3 deficit at Washington to defeat the Super Bowl champion Redskins 31-30, then winning their next six games. When Dallas and Washington squared off again in Week 15 at Texas Stadium, both teams had 12-2 records. However, the Redskins beat the Cowboys handily in that game, and Dallas subsequently lost its next two games to end its season (a rout by the 49ers in the regular season finale and an upset home loss to the Rams in the wild card playoff game).

Change and controversy marked the Cowboys' 1984 season (its 25th, which Schramm commemorated as the "Silver Season"). Despite leading Dallas to the playoffs in each of his four seasons as starting quarterback, Danny White began to draw criticism for "not being able to win the big game", and several players privately expressed their preference for backup quarterback Gary Hogeboom. Landry decided to start Hogeboom, and while Dallas started the season 4-1, Hogeboom's inconsistency eventually led to White regaining the starting job. It would not be enough, though: the Cowboys suffered an embarrassing Week 12 loss to the winless Bills in Buffalo, and, needing a win in their final two games to secure a playoff spot, lost both. Dallas finished the 1984 season 9-7, and missed the postseason for the first time in a decade.

An important off-field change also took place in 1984. Clint Murchison, in dire financial straits because of a collapse in oil prices, sold the Cowboys to Dallas oilman H.R. "Bum" Bright in May. Bright's ownership coincided with a decline in the Cowboys' fortunes. The 1985 season saw a somewhat uneven string of wins and losses, the worst being in Week 11, when the Cowboys were annihilated 44-0 by the unstoppable Chicago Bears -- the first time the team had been shut out since 1970. With a 10-6 record, the Cowboys won the division, but were blanked by the Rams 20-0 in the playoffs. This was the franchise's final winning season and postseason appearance with Tom Landry as coach.
The 1986 campaign started optimistically, with highly regarded offensive coordinator Paul Hackett and Heisman Trophy winner Herschel Walker having joined the team. The Cowboys ran their record to 6-2, but White's wrist was broken in a mid-season loss to the Giants, and the team only managed to win one of its final seven games. Dallas finished with a 7-9 record, ending the franchise's streak of 20 consecutive winning seasons that had dated back to its first-ever winning season in 1966. To date, no other NFL team has matched this feat.

Dallas started the 1987 season 1-1 before NFL players went on strike and management responded by hiring replacement players. Schramm, having anticipated the strike, assembled one of the better replacement teams, which was soon bolstered by several starters who crossed the picket line (including Dorsett, Danny White, and Randy White). However, the "Counterfeit Cowboys" suffered an embarrassing home loss to a Redskins team composed entirely of replacement players, and once the strike ended, Dallas's regular squad lost six of its next eight games to finish 7-8.

The Cowboys went into a free-fall in 1988. After starting the season 2-2, a last-second loss in New Orleans started a 10-game Cowboy losing streak. Among the few bright spots of the season were the team's first-round draft pick, wide receiver Michael Irvin (whom Schramm had predicted would spur the team's "return from the dead"), and a Week 15 victory over the Redskins in Washington, which proved to be Tom Landry's last.

Bright sold the Cowboys to Arkansas businessman Jerry Jones on February 25, 1989. Jones' first act as owner was to fire Tom Landry, who was until then the only head coach the franchise had ever known. Landry's abrupt termination attracted considerable criticism. He admitted to becoming more forgetful with play calling and clock management as he passed his 60th birthday, and to being a bit unwilling to adapt his offense for the NFL of the 1980s, although he was not totally to blame for the Cowboys' problems, which included years of poor drafts. Schramm, Brandt, and other longtime personnel were soon gone as well. Jones replaced Landry with University of Miami head coach Jimmy Johnson.

With the worst record of 1988, Dallas gained the #1 pick in the 1989 draft, which it used on UCLA quarterback Troy Aikman (Landry had expressed interest in Aikman just before being fired). After Dallas opened the 1989 season 0-5, Johnson traded Herschel Walker to the Minnesota Vikings for five veteran players and eight draft choices. (A total of 18 players or draft choices were involved in what was the largest trade in NFL history at the time.) The Cowboys finished the 1989 season with a 1-15 record, their worst since the team's inception. Rookie quarterback Steve Walsh, starting in place of an injured Aikman, led the team to its lone victory in a midseason Sunday night game in Washington.

The two games with Philadelphia in 1989 (which became known as the Bounty Bowls) were marked by particularly strong hostility between the staff and fans of both teams. Eagles coach Buddy Ryan insulted Jimmy Johnson, saying that he did nothing in his tenure at the University of Miami except run up the score on bad teams, and also made fun of his weight. Ryan reputedly wanted his players to injure Cowboys kicker Roger Ruzek, who had been cut from the Eagles early in the season, and in the season-ender in Philadelphia, the Cowboys were pelted with snowballs.
Dallas 's 1 -- 15 season of 1989 gave them the league 's worst record for the second consecutive year. However, they did not get the # 1 draft pick again, as they had forfeited their first - round pick the previous year by taking Steve Walsh in the Supplemental Draft. Johnson quickly returned the Cowboys to the NFL 's elite with a series of skillful drafts. Having picked Aikman, fullback Daryl Johnston and center Mark Stepnoski in 1989, Johnson added running back Emmitt Smith in 1990, defensive tackle Russell Maryland and offensive tackle Erik Williams in 1991, and safety Darren Woodson in 1992. The young talent joined holdovers from the Landry era such as wide receiver Michael Irvin, guard Nate Newton, linebacker Ken Norton, Jr., and offensive lineman Mark Tuinei, and veteran pickups such as tight end Jay Novacek and defensive end Charles Haley. In 1990, the Cowboys finished 7 -- 9, with Smith named NFC Offensive Rookie of the Year and Johnson earning Coach of the Year honors. In 1991, Dallas finished with an 11 -- 5 record, making the playoffs for the first time since 1985. The Cowboys beat the Chicago Bears 17 -- 13 in the wild card round. In the divisional round, they faced the Lions, who had beaten them earlier in the season. Detroit turned in a repeat performance, dismantling the Cowboys 38 -- 6 for the Lions ' first, and to date only, postseason victory since 1957. The 1991 Cowboys also became the first team to feature the league leaders in both rushing yards (Smith) and receiving yards (Irvin). The 1991 season also marked Dallas 's return to Monday Night Football after an absence of two years. In 1992, the Cowboys finished with a 13 -- 3 record (second best in the league), boasted the league 's # 1 defense, reached a peak in popularity (many road fans cheered for the Cowboys), and finally avenged their 1981 NFC Championship Game loss to San Francisco by defeating the 49ers in the conference title game, 30 -- 20, in a muddy Candlestick Park. The Cowboys went on to crush the Buffalo Bills in Super Bowl XXVII, 52 -- 17, forcing a Super Bowl record nine turnovers. QB Troy Aikman was named MVP after completing 73.3 % of his passes for 273 yards, 4 touchdowns, and 0 interceptions for a passer rating of 140.7; he even outrushed Bills running back Thurman Thomas, 28 yards to 19. Emmitt Smith rushed for 108 yards and became the first NFL rushing champion to win a Super Bowl in the same season. Coach Johnson became the first coach to claim a national championship in college football and a Super Bowl victory in professional football. The following season, the Cowboys finished with a 12 -- 4 record in the regular season. They again defeated the 49ers in the NFC Championship, this time by a score of 38 -- 21 at Texas Stadium, and again defeated the Buffalo Bills in the Super Bowl, this time by a score of 30 -- 13. The Cowboys sent an NFL record 11 players to the Pro Bowl: Troy Aikman, Emmitt Smith, Michael Irvin, Thomas Everett, Daryl Johnston, Russell Maryland, Nate Newton, Ken Norton Jr., Jay Novacek, Mark Stepnoski and Erik Williams. Emmitt Smith won his third rushing title despite missing the first two games of the season (both losses) in a contract dispute, and was named both NFL and Super Bowl MVP. Smith is one of only six players to win both the NFL MVP award and the Super Bowl MVP award in the same season, and is the only one of those six who was not a quarterback.
Only weeks after Super Bowl XXVIII, however, friction between Johnson and Jones culminated in Johnson stunning the football world by announcing his resignation. The next day Jones hired former University of Oklahoma head coach Barry Switzer to replace Johnson. Norton and guard Kevin Gogan departed via free agency, but Dallas drafted offensive lineman Larry Allen, who would be a mainstay on the line for the next decade. In 1994, the Cowboys played before the largest crowd ever to attend an NFL game when 112,376 in Mexico City turned out for a preseason match against the Houston Oilers. The Cowboys cruised to another NFC East title in 1994. They finished the regular season 12 -- 4, with their four losses coming by a combined 20 points. The team suffered key injuries, however, when Erik Williams was lost for the year after a mid-season auto accident and Emmitt Smith was hobbled for the final month with a pulled hamstring. Dallas advanced to the NFC Championship Game, where they faced San Francisco for the third consecutive year. Dallas fell behind 21 -- 0 in the first quarter, and despite a valiant comeback, the 49ers held on to win 38 -- 28, denying the Cowboys their chance at a record three consecutive Super Bowl wins. In 1995, Jones made a huge free agent splash by signing All - Pro corner Deion Sanders away from San Francisco. Dallas posted another 12 -- 4 regular season record and another NFC East crown. Emmitt Smith won his fourth rushing title and scored a then NFL record 25 rushing touchdowns. After crushing the Eagles 30 -- 11 in the divisional playoffs, the Cowboys earned their 8th NFC Championship title by defeating the Green Bay Packers 38 -- 27 at Texas Stadium. The Cowboys then topped the Pittsburgh Steelers in Super Bowl XXX, 27 -- 17, avenging two four - point losses to Pittsburgh in Super Bowls X and XIII. Coach Switzer followed Johnson to become the second coach to claim a national championship in college football and a Super Bowl victory in professional football. Only one other coach, Pete Carroll, has since accomplished this feat. To date, this is the most recent Super Bowl appearance for the Cowboys. Injuries and off - field incidents bedeviled the 1996 Cowboys. Novacek, possibly Aikman 's most trusted target, suffered an off - season back injury that ended his career. Irvin was convicted of narcotics possession and suspended for the first five games of the season. In December defensive tackle Leon Lett was given a one - year suspension for failing a narcotics test. Late in the season Irvin and Williams drew national attention when they were accused of assaulting a Dallas woman, although the allegations were later recanted. Haley and Emmitt Smith were also plagued by injuries during the season. Yet Dallas still managed to earn its fifth consecutive NFC East title with a 10 -- 6 record. The Cowboys thumped the Vikings 40 -- 15 in the first round of the playoffs, then traveled to Carolina, where they lost to the upstart Panthers 26 -- 17 after Irvin and Sanders left the game with injuries. Preseason pundits again put the Cowboys at the top of the NFC in 1997. However, Dallas finished the season with a disappointing 6 -- 10 record as continued discipline and off - field problems became major distractions. Switzer was arrested during the preseason after a handgun was found in his luggage at an airport metal detector. The team collapsed down the stretch, losing its final five games.
Switzer resigned as head coach in January 1998 and was replaced by former Steelers offensive coordinator Chan Gailey. After missing the playoffs in 1997, Gailey led the team to a 10 -- 6 record in 1998 as Dallas became the first NFC East team to sweep the division. The Cowboys suffered a humiliating first - round playoff exit, however, when the Arizona Cardinals defeated them at Texas Stadium 20 -- 7 for the Cardinals ' first postseason victory in half a century. Jones raised hopes in the off - season, though, signing fleet - footed wide receiver Raghib "Rocket '' Ismail. The Cowboys started the 1999 campaign in impressive fashion, erasing a 21 - point deficit in Washington on opening day for a 41 -- 35 overtime victory. In their fourth game of the season, however, Dallas lost Irvin to a neck injury that ended his career. Daryl Johnston also suffered a career - ending injury early in the season, and Aikman, Allen, Sanders and cornerback Kevin Smith missed time as well. Dallas sputtered to an 8 -- 8 finish in 1999. They gained a wild - card berth in their final regular season game, but lost in Minnesota in the first round of the playoffs, 27 -- 10. Key players were now grumbling about Gailey, and Jones fired him in January 2000. Defensive coordinator Dave Campo was promoted to head coach, but he could only post three consecutive 5 -- 11 seasons. Instability plagued the quarterback position, as a series of concussions, the first suffered on opening day against the Eagles (a game known as the Pickle Juice Game because Eagles players drank pickle juice at halftime), finally ended Aikman 's career late in the 2000 season; five different quarterbacks started between 2001 and 2002. The lowest point of the Campo era was an embarrassing and humiliating loss on opening night of the 2002 season to the brand - new Houston Texans. One of the few highlights of this era occurred on October 27, 2002, when running back Emmitt Smith broke Walter Payton 's all - time career rushing yardage record during a 17 -- 14 home loss to the Seattle Seahawks. Many fans and media blamed Jerry Jones for the team 's ills, noting that he refused to hire a strong coach, preferring coaches who did not want to be involved with personnel duties so that Jones himself could manage them. However, Jones proved them wrong in 2003 by luring Bill Parcells out of retirement to coach the Cowboys. The Cowboys became the surprise team of the 2003 season, posting a 10 -- 6 record and a playoff berth while leading the NFL in sacks and turnovers and fielding the league 's best overall defense. However, they lost to the eventual conference champion Carolina Panthers in the Wild Card round, 29 -- 10. The 2004 season, by contrast, was one of turmoil. Injuries and persistent penalty problems plagued the Cowboys, who were shaken early in training camp when starting quarterback Quincy Carter was suddenly released, allegedly for drug use. 40 - year - old veteran Vinny Testaverde, brought in by Parcells to be the backup, became the starter. They had only a 3 -- 5 record at midseason, but injured rookie running back Julius Jones returned in late November and, in consecutive games, logged two of the best single - game performances in franchise history. Dallas went 1 -- 3 down the stretch, though, finishing the season 6 -- 10. In November 2004, the City of Arlington in Tarrant County voted to build a new stadium adjacent to the existing Ameriquest Field in Arlington.
The team began playing at the new site in 2009, after thirty - eight years in the city of Irving and forty - nine years in Dallas County. The Cowboys improved their defense before the 2005 season, adding DeMarcus Ware, Marcus Spears, Kevin Burnett, and Chris Canty through the draft. Parcells hoped to jumpstart the team 's transition from the traditional 4 -- 3 defense, which had been the Cowboys ' base defense for the previous 20 years, to his preferred 3 -- 4 defense, which he believed better suited the talents of the lineup (speed and athleticism over power). Jerry Jones also added a number of veterans, including nose tackle Jason Ferguson and cornerback Anthony Henry, via free agency. On offense, the Cowboys tried to upgrade their passing game by signing free agent quarterback Drew Bledsoe. Bledsoe had a solid year and gave the Cowboys stability at the QB position, which had been lacking since Troy Aikman 's retirement five years earlier. The Cowboys endured an up - and - down 2005 season. Entering Thanksgiving with a 7 -- 3 record, Dallas went 2 -- 4 the rest of the way and missed the playoffs. An injury to kicker Billy Cundiff led to inconsistency at that position, and costly misses contributed to close losses against Seattle and Denver. Shortly before the regular season finale, the Cowboys learned that they had been eliminated from the playoff chase, and they turned in a listless performance against the St. Louis Rams on Sunday Night Football to finish the season 9 -- 7, in 3rd place in the NFC East. The Cowboys entered the 2006 season with high hopes but got off to a mediocre 3 -- 2 start before an important Monday Night Football game against their division rivals, the New York Giants. The Cowboys suffered a tough 36 -- 22 loss despite a changing of the guard at the QB position from Drew Bledsoe to Tony Romo. With the next three games on the road, speculation grew that the Giants would run away with the division for a second straight year. Romo won his first game as a starter the following week against the Carolina Panthers, 35 -- 14, with an outstanding 4th quarter comeback. The Cowboys ' chance to challenge the Giants seemingly fizzled when they lost to the Washington Redskins at FedExField on a last - second field goal (the "Hand of God '' game). However, the Giants entered a slump, and Tony Romo impressed the media as a quarterback, revitalizing the Cowboys with a 27 -- 10 win over the Cardinals, a well - earned 21 -- 14 victory over the previously unbeaten Colts, and a thorough rout of Tampa Bay, 38 -- 10, on Thanksgiving Day. During that home game, Romo solidified his position as QB and quieted any remaining skeptics by completing 22 of 29 passes for 306 yards and five touchdowns (tying a franchise record). Furthermore, the Cowboys took a two - game lead in the NFC East by beating the Giants in a Week 13 rematch. The success of the new quarterback surprised much of the nation and earned Romo considerable air - time on sports shows. The Cowboys then self - destructed over the last four games of the season, losing to the Saints in a battle for the second - best record in the league, to the Philadelphia Eagles in a game that would have earned them the division championship, and to the 2 -- 13 Detroit Lions in a game where Tony Romo 's four fumbles cast significant doubt on his ability to successfully lead his team in the playoffs. The Cowboys opened the playoffs with a wild card matchup at Seattle.
Leading 20 -- 13 with 6: 42 left in the game and the ball at their own 1 - yard line, Romo threw a short pass to Terry Glenn, who fumbled; the ball went out of bounds in the end zone, resulting in a safety. The Seahawks got the ball back, and Matt Hasselbeck threw a touchdown to Jerramy Stevens; after a missed two - point conversion, Seattle led 21 -- 20. With 1: 19 left in the game, the Cowboys had a chance to win on a 19 - yard field goal, but the hold was fumbled by Romo, who had continued to serve as field goal holder even after ascending to the starting quarterback 's role (the backup quarterback is traditionally the holder on field goals). He picked up the loose ball and tried to run it to the 1 - yard line for a first down, but was tackled at the 2. As the game came to a close, the Cowboys managed to get the ball back with two seconds left, but Romo 's Hail Mary pass to the end zone fell incomplete. On January 22, at the conclusion of the Cowboys ' season, head coach Bill Parcells retired. On February 8, after a replacement search that included Mike Singletary, Jason Garrett, Jim Caldwell, Ron Rivera and Norv Turner, San Diego defensive coordinator Wade Phillips was hired as the new head coach. Jerry Jones had hired Garrett as offensive coordinator even before hiring Phillips. Phillips subsequently added his son Wes Phillips and former linebacker Dat Nguyen to his staff of assistant coaches. During the 2007 offseason, the Cowboys signed offensive lineman Leonard Davis and quarterback Brad Johnson to back up Tony Romo, and also re-signed center Andre Gurode and kicker Martin Gramatica. They released two players: quarterback Drew Bledsoe and tight end Ryan Hannam. Tony Romo also received a six - year, $67.5 million contract from the Dallas Cowboys on October 30, 2007, making Romo the third highest paid quarterback in the NFL, after Peyton Manning of the Indianapolis Colts and Carson Palmer of the Cincinnati Bengals. The Cowboys tied a franchise record in 2007 by winning 13 regular season games, a feat that had been accomplished 15 years earlier by their 1992 Super Bowl winning squad. Terrell Owens had arguably the most productive year of his career and one of the best in franchise history. He tied the franchise record for most scoring receptions in a single game (four), and when he caught a touchdown pass in Week 16 against the Panthers, he set the franchise record for most touchdown receptions in a single season (15). Tony Romo set team records for touchdowns (36) and passing yards (4,211) in one season. After receiving the top NFC playoff seed and a first - round bye, the Cowboys lost to the Giants in the divisional round of the playoffs to end one of the most exciting seasons of the decade. In Week 5 against the Buffalo Bills on Monday Night Football, the Cowboys had turned the ball over six times, including five interceptions thrown by Romo. Despite a minus - 5 turnover margin, the Cowboys managed to defeat the Bills. Trailing 24 -- 22 in the final moments of the game, Dallas sent rookie kicker Nick Folk out to attempt a potential 53 - yard game winner. His first kick sailed through the uprights but did not count because Buffalo had called a timeout immediately before the snap. Folk had to attempt the kick a second time, and hammered it through the goalposts again to finish a dramatic victory. The season went on to produce some of the team 's most memorable games of the decade as well.
In Week 9, tight end Jason Witten, who already had a reputation as a tough and intense player, caught a 25 - yard pass from Romo. Immediately after the catch, two Eagles delivered a hit that knocked Witten 's helmet off. Unfazed by the contact, Witten ran another 30 yards without a helmet. When he was finally dragged down at the Eagles ' 6, he walked to the sideline with a bloody nose. Thanks to their dramatic victory over the Bills in Week 5, Dallas started with a 5 -- 0 record, the last undefeated team in the NFC heading into their next game against the 5 -- 0 New England Patriots. The Patriots won 48 -- 27, handing the Cowboys their first loss of the season, and would remain undefeated until they lost that season 's Super Bowl. Besides the loss to the Patriots in Week 6, Dallas ' only other losses came in Weeks 15 and 17 against division rivals Philadelphia and Washington. When wide receiver Terrell Owens went down with an ankle injury against the Panthers in Week 16 and missed Week 17 against the Redskins, the offense became stagnant. Another highlight of that season was a Week 13 match with Green Bay, which Dallas won 37 -- 27, in a game reminiscent of the 1990s Cowboys - Packers duels. Like the Cowboys, the Packers finished the regular season 13 -- 3, but Dallas got the # 1 seed due to a better conference record and the head - to - head win. This gave them both a first - round bye and home field advantage throughout the NFC playoffs. However, they lost their first playoff game to the eventual Super Bowl champion New York Giants, a team they had defeated in both regular - season matchups. Dallas was the first # 1 seed in the NFC to lose in the divisional round since 1990. Meanwhile, the Giants went on to topple Green Bay in the NFC Championship and upset the 18 -- 0 New England Patriots in the Super Bowl. A record thirteen members of the Cowboys were named to the Pro Bowl, while five were named All - Pro by the Associated Press. The Cowboys began the 2008 season by defeating the Cleveland Browns 28 -- 10. Coming off their commanding road win over the Browns, the Cowboys played their Week 2 home opener under the MNF spotlight. In the last MNF game at Texas Stadium, Dallas dueled with their NFC East foe, the Philadelphia Eagles. In the first quarter, the Cowboys trailed early as Eagles kicker David Akers hit a 34 - yard field goal. Dallas answered on their first possession with QB Tony Romo completing a 72 - yard TD pass to WR Terrell Owens. The game, which saw nine lead changes, also set several scoring records, including most first half points in MNF history (54) and most combined points in the rivalry 's history (78). Dallas held on to win 41 -- 37. After starting 4 -- 1, the Cowboys flew to University of Phoenix Stadium for a Week 6 Sunday Night Football showdown with the Arizona Cardinals. The Cardinals returned the opening kickoff for a touchdown, but Dallas tied the game 7 -- 7 by halftime. With only 3: 17 left in the 4th quarter, Tony Romo completed a 70 - yard pass to Marion Barber III, and kicker Nick Folk hit a 52 - yard field goal as time expired to send the game into overtime. In overtime, Cowboys punter Mat McBriar had a punt blocked and returned for a touchdown, and QB Tony Romo broke his right pinkie finger. In the week following the game, Tony Romo was listed as questionable; he would go on to miss three games.
In addition, Mat McBriar and Sam Hurd were placed on injured reserve, and Felix Jones was listed as out for 2 -- 3 weeks with a hamstring injury. Furthermore, Adam (Pacman) Jones was suspended by the NFL for a minimum of four weeks after an altercation with his bodyguard. Lastly, the Cowboys traded with the Detroit Lions for WR Roy Williams, in exchange for their first, third, and sixth - round picks. After the bye week, Dallas won four more games but finished the season 1 -- 3, losing its final game in Texas Stadium to the Baltimore Ravens 33 -- 24 and closing with a disastrous 44 -- 6 loss to Philadelphia. With a 9 -- 7 record, the team finished third in the division and failed to qualify for the playoffs. After the season ended, the highly controversial and productive receiver Terrell Owens was released, despite having received a $34 million extension the previous June that supposedly would allow the wide receiver to retire a Cowboy. Jones said he released Owens because of production, adding, "In this particular case, we have an outstanding player in Roy Williams, and it was a significant factor in the decision I made to release Terrell. '' In three seasons with the Cowboys, Owens had 235 receptions for 3,587 yards and 38 touchdowns, but his numbers declined in his final season, when he had 69 receptions for 1,052 yards and 10 touchdowns. The 2009 season marked the 50th season of play for the Cowboys. In May of that year, the new Cowboys Stadium was completed in Arlington, Texas. It was widely criticized for its appearance, cost (over $1 billion) and high energy use. The first game played in the team 's new home was a preseason match with the Tennessee Titans on August 21, and the first regular season game was a three - point loss to the Giants on September 20. By Week 9, the Cowboys stood at 6 -- 2 after defeating their archrival Eagles in Philadelphia. Then followed a loss to the Packers, and victories over Washington and Oakland, the latter on Thanksgiving. The team then fell to the Giants a second time, and lost at home to the Chargers. By this point, the Cowboys ' playoff chances were doubtful, and the old talk of the "December curse '' reappeared. The next game was a surprise upset of the 13 -- 0 New Orleans Saints, followed by a shutout of Washington. Combined with Giants defeats, the Cowboys now found themselves guaranteed at minimum a wild card spot. On January 3, they hosted the Eagles, who had won their last five games. Philadelphia 's offense completely folded, and the team suffered a 24 -- 0 shutout, making this the first time in franchise history that Dallas blanked two consecutive opponents. This gave the Cowboys the division title and the # 3 NFC seed, but it also gave their opponent a wild card, which meant the two teams had to play in Dallas again the following week. The rematch saw the Eagles score 17 points, but their defense, which had been considered one of the NFL 's best a few weeks earlier, again performed poorly, and the Cowboys put up 34 points to beat them for the third time in one season. Having won their first playoff game since 1996, the Cowboys traveled to the Hubert H. Humphrey Metrodome to face Brett Favre and the Vikings. Their season quickly ended after Minnesota scored four touchdowns and limited them to a single field goal. Linebacker Keith Brooking criticized the last touchdown pass of the game, arguing that it served no purpose other than to run up the score in a game the Vikings had already won.
The Cowboys opened 2010 in Washington against a revitalized Redskins team that now featured Donovan McNabb (traded from the Eagles in April) and former Broncos coach Mike Shanahan. Although neither team performed well, Dallas lost unexpectedly when, on the last play of the game, a Tony Romo touchdown pass was nullified by a holding penalty. The Redskins thus won 13 -- 10. In Week 2, Dallas fell to the Bears 27 -- 20 in their second straight home - opener loss. This was also the team 's first 0 -- 2 start since 2001. Desperate for a win, they headed to Houston for the third "Battle of Texas '' with the Texans (the first and second were in 2002 and 2006) and beat them 27 -- 13. After coming back from their bye week, the Cowboys suffered another home embarrassment, this time against Tennessee. Dallas 's fortunes continued to slide in Week 6 as they lost to Minnesota 24 -- 21. Things got steadily worse the next week as Tony Romo was knocked out with a fractured collarbone while playing the Giants on MNF. Filling in for him was 38 - year - old QB Jon Kitna, who had not started a game in two years. Although rusty, he managed two touchdown passes, and the Cowboys scored 35 points. But their division rival edged them out 41 -- 35 to win in Cowboys Stadium for the second straight year. By Week 8, the Cowboys found themselves at 1 -- 6 after losing at home to Jacksonville following four Jon Kitna interceptions. After a disastrous 45 -- 7 loss in Green Bay, Wade Phillips was fired (breaking Jerry Jones 's policy of not changing head coaches during the season) and replaced by offensive coordinator Jason Garrett. Now largely eliminated from playoff contention, the Cowboys headed to the Meadowlands for a rematch with New York. This time things were different, as Jon Kitna passed for 327 yards and three touchdowns. An all - around better performance by the team allowed them to win 33 -- 20. After beating Detroit at home, the Cowboys lost a close Thanksgiving game to New Orleans. They next defeated the Colts in Indianapolis 38 -- 35 on an overtime field goal to retain faint playoff hopes. However, a home loss to Philadelphia mathematically eliminated them from playoff contention. Week 15 saw them beat the hapless Redskins 33 -- 30. The Cowboys then lost a meaningless game in Arizona and followed that with a meaningless win over the Eagles to end their season at 6 -- 10. After going 5 -- 3 as interim head coach during the last eight games of the previous season, Jason Garrett took the head coaching position full - time. With Tony Romo back in action, the Cowboys headed to the Meadowlands to take on the Jets in a Sunday night game commemorating the 10 - year anniversary of the September 11, 2001 terrorist attacks. Dallas led early, but the Jets tied the game at 24 -- 24, and a 4th - quarter Romo interception allowed New York to get into the red zone and kick a game - winning field goal. The next week, the Cowboys headed to San Francisco and won 27 -- 24 after Romo mounted a valiant overtime comeback despite playing through a painful rib injury. Despite this and a punctured lung, Romo started in Week 3 as the Cowboys hosted Washington on MNF. They won 18 -- 16 in a bizarre game that featured six field goals from rookie kicker Dan Bailey. Dallas 's offense struggled the entire night, with Romo handicapped by pain, multiple dropped passes, and several botched snaps from rookie center Kevin Kowalski.
They went 1 -- 3 in the month of October, including an embarrassing 34 -- 7 Week 7 loss to the division rival Eagles. They turned things around by going undefeated in November, a run that included a 44 -- 7 Week 10 blowout of Buffalo and a 27 -- 24 Week 11 overtime victory over the Redskins on the road. They entered December with a 7 -- 4 record, vying for first place in their division with the Giants, whom the Cowboys had yet to play. After losing in overtime to Arizona in Week 13, they headed into a Sunday night primetime game against the Giants. Despite leading New York 34 -- 22 with less than six minutes to play, the Cowboys lost 37 -- 34 after a game - tying field goal attempt by Dan Bailey was blocked by Jason Pierre - Paul. They rebounded the next week by defeating Tampa Bay 31 -- 15. Week 16 brought another loss to Philadelphia. Despite entering Week 17 with an 8 -- 7 record, Dallas was in a position to win the division; however, so was their final regular season opponent, the New York Giants. Dallas lost 31 -- 14 to finish the season 8 -- 8, while New York went on to win Super Bowl XLVI. The Cowboys started the 2012 season on a high note by defeating the defending Super Bowl champion New York Giants 24 -- 17. They were unable to capitalize on their momentum, dropping 3 of their next 4 games. One of those losses was a tight 31 -- 29 game against the eventual Super Bowl XLVII champion Baltimore Ravens. They entered their annual Thanksgiving home game with a 5 -- 5 record. Their opponent that year was upstart division rival Washington, led by rookie QB Robert Griffin III. The Redskins got off to a quick start, leading 21 -- 3 at halftime. Dallas fought back in the second half, but it was not enough, and they fell 38 -- 31. The Cowboys won their next 3 games before losing in Week 16 to New Orleans. Going into Week 17, they were once again one win away from winning their division, but so were their opponents that week, the Redskins. The game was tight throughout. With less than 3 minutes to go, Dallas was down 21 -- 18 but had the ball and was starting to drive down the field. Their playoff hopes were destroyed when Tony Romo threw a game - ending interception. Dallas lost 28 -- 18, ending the season with an 8 -- 8 record and placing 3rd in the NFC East for the second straight year. For the second straight year, Dallas opened the season by defeating their division rival New York Giants, this time 36 -- 31. The win was punctuated when Giants QB Eli Manning threw an interception to Brandon Carr, who returned it for a touchdown. It was the first time Dallas had defeated New York at AT&T Stadium, which had opened in 2009. Just like the previous year, though, the Cowboys lost 3 of their next 4 games, the third of those losses a 51 -- 48 shootout against the eventual AFC champion Denver Broncos. Statistically, Tony Romo had an excellent day, throwing for over 500 yards with 5 touchdowns. However, late in the 4th quarter with the game tied at 48, he threw an interception to Danny Trevathan, which set up the Broncos ' winning field goal. Dallas rebounded in Week 6 by decisively defeating division rival Washington, who had dashed the Cowboys ' playoff hopes the previous season, 31 -- 16. Week 7 brought another division game, this time against the Eagles in Philadelphia. Dallas was firing on all cylinders that day, winning 17 -- 3.
However, the next week they fell to Detroit in a tight 31 -- 30 loss to end the month of October. They went 3 -- 1 in the month of November, which included another win over the Giants. Their sole loss that month was a 49 -- 17 Week 10 blowout at the hands of the Saints in New Orleans. They began December with 2 crucial losses, to Chicago and Green Bay. The Week 15 loss to the Packers at home was especially bitter: Dallas held a 26 -- 3 halftime lead but lost 37 -- 36. Talk of the annual "December Curse '' was in full effect. They came into their Week 16 division matchup in Washington with a 7 -- 7 record but still in the hunt with the Eagles for the NFC East crown. Against the Redskins they started the 4th quarter down 23 -- 14, but battled back to win 24 -- 23. The win was highlighted by a 4th - and - goal touchdown pass from Romo to DeMarco Murray from the 12 - yard line with 1: 15 left. However, Romo suffered a serious back injury during the 4th quarter. While he was able to finish the game, the injury prematurely ended his season. For the 3rd consecutive year, Dallas entered Week 17 one win away from the division title, as did their opponents, the Eagles. Dallas came into the game with a 5 -- 0 division record. With Romo out, backup QB Kyle Orton started. With less than 2 minutes left and Dallas down 24 -- 22, Orton threw a game - ending interception. Dallas finished 8 -- 8 for the third straight year, though this time in 2nd place in the division rather than 3rd. After starting 2014 with an embarrassing 28 -- 17 loss to San Francisco, Dallas went on a roll, winning their next 6 games, their longest winning streak since 2007. The highlight of the streak was a 30 -- 23 defeat of the defending Super Bowl XLVIII champion Seahawks in Seattle, only the Seahawks ' 2nd home loss in the past three seasons. The streak was ended by their division rival Redskins in overtime, 20 -- 17, a game in which Romo once again injured his back. He sat out the next game against Arizona, with backup Brandon Weeden starting; the Cardinals won 28 -- 17. Romo returned in Week 10 as Dallas played in London, England for the first time ever, beating Jacksonville 31 -- 17 in the NFL International Series. Dallas entered their annual Thanksgiving home game with an 8 -- 3 record, identical to that of their opponents, the Eagles. Philadelphia got off to a fast start and Dallas was never able to catch up, losing 33 -- 10. They rebounded the next week with a 41 -- 28 win over the Bears in Chicago, win number 9, to clinch their first winning season since 2009. Week 15 was a rematch against the Eagles, this time in Philadelphia. This time Dallas got off to a hot start, going up 21 -- 0, but Philadelphia then scored 24 unanswered points. Dallas came back with a 78 - yard drive capped by a DeMarco Murray touchdown. The next Eagles drive ended with Mark Sanchez intercepted by Dallas defender J.J. Wilcox. Dallas took full advantage, with Dez Bryant scoring his third touchdown of the evening, the most in one game in his career. The Eagles committed 2 more turnovers, including another Sanchez interception to end the game. Dallas won 38 -- 27 to take over first place. Week 16 started with the Eagles losing to the Redskins, meaning the Cowboys could clinch their division by beating the Colts. They responded by blowing out Indianapolis 42 -- 7 to win their first division title since 2009.
Their final regular - season game was on the road against Washington, where the Cowboys won 44 -- 17 to finish the season at 12 -- 4, as the # 3 NFC seed, and undefeated in away games. They also finished December 4 -- 0, a significant turnaround after their struggles in that month in recent years. In the Wild Card round of the 2014 -- 15 NFL playoffs, Dallas, as the # 3 seed, hosted the # 6 seed Detroit Lions. The Lions got off to a hot start, going up 14 -- 0 in the first quarter, and Dallas initially struggled on both sides of the ball. However, toward the end of the 2nd quarter, Romo connected with Terrance Williams for a 76 - yard touchdown pass. The Lions hit a field goal before halftime to go up 17 -- 7. Dallas came out swinging to start the second half, picking off Detroit QB Matthew Stafford on the first play of the 3rd quarter, though Dan Bailey missed a field goal on the ensuing drive. Detroit then kicked another field goal to make the score 20 -- 7. A DeMarco Murray touchdown later in the quarter closed the gap to 20 -- 14, and a 51 - yard Dallas field goal almost 3 minutes into the 4th quarter put Dallas down by 3. The Lions got the ball back and started driving down the field. A 17 - yard, 3rd - and - 1 pass from Stafford to Lions TE Brandon Pettigrew initially drew a flag for defensive pass interference against Dallas defender Anthony Hitchens, but the flag was then picked up by the officiating crew. Dallas got the ball back on their 41 - yard line and mounted a 59 - yard drive capped by an 8 - yard touchdown pass from Romo to Williams, giving Dallas its first lead of the game at 24 -- 20. The Lions got the ball back with less than two and a half minutes to play. Stafford fumbled at the 2 - minute mark; the ball was picked up by Dallas defender DeMarcus Lawrence, who then fumbled it back to the Lions. Lawrence redeemed himself by sacking Stafford on a 4th - and - 3 play, forcing a fumble that Lawrence recovered to end the game. Dallas won 24 -- 20. It was the first time in franchise playoff history that Dallas had trailed by 10 or more points at halftime and come back to win. They traveled to Green Bay for the divisional round to face the Packers in the first Cowboys - Packers postseason matchup at Lambeau Field since the Ice Bowl. There was a lot of hype in the week leading up to the game: during the regular season, Green Bay had gone 8 -- 0 at home while Dallas had gone 8 -- 0 on the road. Dallas led 14 -- 10 at halftime and went up 21 -- 13 in the 3rd quarter, but the Packers came back to take a 26 -- 21 lead in the 4th quarter. With less than 5 minutes in the game, on a 4th - and - 2 play, Romo threw a 30 - yard pass to Bryant that put the Cowboys at the 1 - yard line. However, Packers head coach Mike McCarthy challenged the catch and the ruling was overturned, the explanation being that Bryant did not maintain control of the ball as he came down with it. The ball was turned over to Green Bay on downs, and the Packers ran out the clock to win the game, ending the Cowboys ' season. Two days after the game, Dallas head coach Jason Garrett signed a 5 - year, $30 million contract, along with defensive coordinator Rod Marinelli, who signed a 3 - year contract. Marinelli was largely credited with greatly improving the Dallas defense, which had been one of the league 's worst - ranked the previous year under Monte Kiffin.
After their successful 2014 season, Dallas began the 2015 season with high expectations. During Week 1, Tony Romo managed to lead the team to an astonishing comeback against the New York Giants, despite losing Dez Bryant to injury. The next week against Philadelphia, Tony Romo was taken out with a collarbone injury, but Dallas won 20 -- 10 for a 2 -- 0 start. Romo missed the next seven games, and the Cowboys lost all of them to drop to 2 -- 7. Romo finally returned in Week 11 to lead the Cowboys to a victory against Miami, but in the Cowboys ' next game, on Thanksgiving, he injured his collarbone again and was ruled out for the rest of the season; the team was routed by the then - undefeated Carolina Panthers, dropping their record to 3 -- 8. In the next game, on Monday Night Football, Dallas defeated Washington 19 -- 16, which proved to be their last win of the season and their only win without Romo as starter. In Week 14, Dallas lost handily to Green Bay. In Week 15, the Cowboys were eliminated from playoff contention with a loss to the New York Jets in a Saturday night edition of Thursday Night Football. After that, the Cowboys lost to Buffalo 16 -- 6 and then to Washington 34 -- 23 to finish the regular season with a record of 4 -- 12 and a last - place finish in the NFC East. The Cowboys went 3 -- 1 with Romo as the starting quarterback, but 1 -- 11 in all of their other games. The Cowboys drafted running back Ezekiel Elliott from Ohio State in the first round and quarterback Dak Prescott from Mississippi State in the fourth round of the 2016 NFL Draft. During a preseason game against the Seahawks, Tony Romo suffered a back injury, allowing Prescott to become the starter for the 2016 season, though Romo played one series in Week 17 at Philadelphia. With Prescott 's stellar play, along with Elliott 's running game, the Dak - and - Zeke - led Cowboys finished with a 13 -- 3 record, earning them a first - round bye in the playoffs. But Dallas ' promising season ended in heartbreak as they lost 34 -- 31 to Aaron Rodgers and the Packers. After the season, Romo retired after 14 seasons with the Cowboys, and Prescott was named Offensive Rookie of the Year.
January 1, 1967, NFL Championship Game vs. Green Bay Packers
December 31, 1967, NFL Championship Game at Green Bay Packers
January 17, 1971, Super Bowl V vs. Baltimore Colts
December 23, 1972, at San Francisco 49ers, 1972 NFC Divisional Playoff Game
November 28, 1974, vs. Washington Redskins
December 28, 1975, at Minnesota Vikings, 1975 NFC Divisional Playoff Game
December 16, 1979, vs. Washington Redskins
January 4, 1981, at Atlanta Falcons, 1980 NFC Divisional Playoff Game
January 3, 1983, at Minnesota Vikings
September 5, 1983, at Washington Redskins
January 31, 1993, vs. Buffalo Bills, Super Bowl XXVII
November 25, 1993, vs. Miami Dolphins
January 2, 1994, at New York Giants
November 18, 1996, vs. Green Bay Packers
September 12, 1999, at Washington Redskins
September 24, 2000, vs. San Francisco 49ers
September 19, 2005, vs. Washington Redskins
January 6, 2007, at Seattle Seahawks, NFC wild card playoff game, "The Bobble ''
October 8, 2007, vs. Buffalo Bills
December 19, 2009, vs. New Orleans Saints
December 23, 2013, vs. Washington Redskins
October 12, 2014, vs. Seattle Seahawks
The Dallas Cowboys franchise has been "first '' in the record books for a whole host of accomplishments.
how much voltage does a neutral wire have
Ground and neutral - wikipedia As the neutral point of an electrical supply system is often connected to earth ground, ground and neutral are closely related. Under certain conditions, a conductor used to connect to a system neutral is also used for grounding (earthing) of equipment and structures. Current carried on a grounding conductor can result in objectionable or dangerous voltages appearing on equipment enclosures, so the installation of grounding conductors and neutral conductors is carefully defined in electrical regulations. Where a neutral conductor is also used to connect equipment enclosures to earth, care must be taken that the neutral conductor never rises to a high voltage with respect to local ground. Ground or earth in a mains (AC power) electrical wiring system is a conductor that provides a low - impedance path to the earth to prevent hazardous voltages (such as high voltage spikes) from appearing on equipment. (The terms "ground '' and "earth '' are used synonymously here. "Ground '' is more common in North American English, and "earth '' is more common in British English.) Under normal conditions, a grounding conductor does not carry current. Grounding is also an integral path in home wiring because it allows protective devices such as circuit breakers and ground - fault interrupters (GFIs) to trip more quickly, which is safer. Adding new grounds requires a qualified electrician with information particular to a power company distribution region. Neutral is a circuit conductor that normally carries current back to the source. Neutral is usually connected to ground (earth) at the main electrical panel, street drop, or meter, and also at the final step - down transformer of the supply. That holds for simple single - panel installations; for multiple panels the situation is more complex. In the electrical trade, the conductor of a 2 - wire circuit connected to the supply neutral point and earth ground is referred to as the "neutral ''. In a polyphase (usually three - phase) AC system, the neutral conductor is intended to have similar voltages to each of the other circuit conductors, but may carry very little current if the phases are balanced. The United States ' National Electrical Code and the Canadian Electrical Code define "neutral '' only as the grounded conductor, not as the polyphase common connection. In North American use, the polyphase definition appears in less formal language but not in official specifications. In the United Kingdom, the Institution of Engineering and Technology defines a neutral conductor as one connected to the supply system neutral point, which includes both these uses. As per the Indian CEAR, "neutral conductor '' means that conductor of a multi-wire system the voltage of which is normally intermediate between the voltages of the other conductors of the system, and shall also include the return wire of a single - phase system. All neutral wires of the same earthed (grounded) electrical system should have the same electrical potential, because they are all connected through the system ground. Neutral conductors are usually insulated for the same voltage as the line conductors, with interesting exceptions. Neutral wires are usually connected at a neutral bus within panelboards or switchboards, and are "bonded '' to earth ground at either the electrical service entrance, or at transformers within the system. For electrical installations with split - phase (three - wire single - phase) service, the neutral point of the system is at the center - tap on the secondary side of the service transformer.
For larger electrical installations, such as those with polyphase service, the neutral point is usually at the common connection on the secondary side of delta / wye connected transformers. Other arrangements of polyphase transformers may result in no neutral point, and no neutral conductors. The IEC standard (IEC 60364) codifies methods of installing neutral and ground conductors in a building, where these earthing systems are designated with letter symbols. The letter symbols are common in countries using IEC standards, but North American practices rarely refer to the IEC symbols. The systems differ in whether the conductors are separate over their entire run from equipment to earth ground, or are combined over all or part of their length. Different systems are used to minimize the voltage difference between neutral and local earth ground. Current flowing in a grounding conductor will produce a voltage drop along the conductor, and grounding systems seek to ensure this voltage does not reach unsafe levels; a short worked example follows this section. In the TN - S system, separate neutral and protective earth conductors are installed between the equipment and the source of supply (generator or electric utility transformer). Normal circuit currents flow only in the neutral, and the protective earth conductor bonds all equipment cases to earth to intercept any leakage current due to insulation failure. The neutral conductor is connected to earth at the building point of supply, but no common path to ground exists for circuit current and the protective conductor. In the TN - C system, a common conductor provides both the neutral and protective grounding. The neutral conductor is connected to earth ground at the point of supply, and equipment cases are connected to the neutral. The danger exists that a broken neutral connection will allow all the equipment cases to rise to a dangerous voltage if any leakage or insulation fault exists in any equipment. This can be mitigated with special cables, but the cost is then higher. In the TN - C - S system, each piece of electrical equipment has both a protective ground connection to its case and a neutral connection. These are all brought back to some common point in the building system, and a common connection is then made from that point back to the source of supply and to earth. In a TT system, no lengthy common protective ground conductor is used; instead, each article of electrical equipment (or building distribution system) has its own connection to earth ground. As per the Indian CEAR, Rule 41, the grounding system is to follow these requirements:
- The neutral conductor of a 3 - phase, 4 - wire system and the middle conductor of a 2 - phase, 3 - wire system are to have a minimum of two separate and distinct earth connections, with a minimum of two different earth electrodes, to bring the earth resistance to a satisfactory value.
- The earth electrodes are to be inter-connected to reduce earth resistance.
- The neutral conductor shall also be earthed at one or more points along the distribution system or service line, in addition to any connection at the user end.
Stray voltages created in grounding (earthing) conductors by currents flowing in the supply utility neutral conductors can be troublesome. For example, special measures may be required in barns used for milking dairy cattle. Very small voltages, not usually perceptible to humans, may cause low milk yield, or even mastitis (inflammation of the udder). So - called "tingle voltage filters '' may be required in the electrical distribution system for a milking parlour.
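To put numbers on the voltage - drop concern, the following minimal sketch applies Ohm 's law to a combined neutral / earth conductor. It is written in Python, and every value in it (load current, conductor resistance, run length) is an illustrative assumption rather than a figure taken from any standard:

```python
# Minimal sketch: how return current lifts a combined neutral / earth
# (PEN) conductor above true earth potential. V = I * R, so bonded
# equipment enclosures rise with the product of return current and
# conductor resistance. All values below are illustrative assumptions.

def neutral_rise(current_a: float, ohms_per_km: float, length_m: float) -> float:
    """Voltage between the service neutral and local earth, in volts."""
    resistance_ohms = ohms_per_km * length_m / 1000.0
    return current_a * resistance_ohms

# A hypothetical 30 A unbalanced return over 50 m of conductor at
# roughly 1.1 ohm / km (a plausible figure for 16 mm^2 copper):
print(f"{neutral_rise(30.0, 1.1, 50.0):.2f} V")   # ~1.65 V, harmless

# The same current over a long rural span; this is one reason combined
# systems call for multiple earth electrodes along the run:
print(f"{neutral_rise(30.0, 1.1, 800.0):.2f} V")  # ~26.4 V, troublesome
```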
Connecting the neutral to the equipment case provides some protection against faults, but may produce a dangerous voltage on the case if the neutral connection is broken. Combined neutral and ground conductors are commonly used in electricity supply companies ' wiring and occasionally for fixed wiring in buildings, and for some specialist applications where there is little alternative, such as railways and trams. Since normal circuit currents in the neutral conductor can lead to objectionable or dangerous differences between local earth potential and the neutral, and to protect against neutral breakages, special precautions must be considered to ensure the system is safe, such as frequent rodding down to earth (multiple ground rod connections), use of cables where the combined neutral and earth completely surrounds the phase conductor (s), and thicker - than - normal equipotential bonding. In the United States, the cases of some kitchen stoves (ranges, ovens), cooktops, clothes dryers and other specifically listed appliances were grounded through their neutral wires as a measure to conserve copper during World War II. This practice was removed from the NEC in the 1996 edition, but existing installations (called "old work '') may still allow the cases of such listed appliances to be connected to the neutral conductor for grounding. (Canada did not adopt this system; it used separate neutral and ground wires during this period and continues to do so.) This practice arose from the three - wire system used to supply both 120 volt and 240 volt loads. Because these listed appliances often have components that use either 120 volts, or both 120 and 240 volts, there is often some current on the neutral wire. This differs from the protective grounding wire, which carries current only under fault conditions. Using the neutral conductor for grounding the equipment enclosure was considered safe since the devices were permanently wired to the supply, and so the neutral was unlikely to be broken without also breaking both supply conductors. Also, the unbalanced current due to lamps and small motors in the appliances was small compared to the rating of the conductors and therefore unlikely to cause a large voltage drop in the neutral conductor. In North American and European practice, small portable equipment connected by a cord set is permitted under certain conditions to have merely two conductors in the attachment plug. A polarized plug can be used to maintain the identity of the neutral conductor into the appliance, but neutral is never used as a chassis / case ground. The small cords to lamps, etc., often have one or more molded ridges or embedded strings to identify the neutral conductor, or the conductor may be identified by colour. Portable appliances never use the neutral conductor for case grounding, and often feature "double - insulated '' construction. In places where the design of the plug and socket can not ensure that a system neutral conductor is connected to particular terminals of the device ("unpolarized '' plugs), portable appliances must be designed on the assumption that either pole of each circuit may reach full mains voltage with respect to ground. In North American practice, equipment connected by a cord set must have three wires if supplied exclusively by 240 volts, or must have four wires (including neutral and ground) if supplied by 120 / 240 volts.
There are special provisions in the NEC for so - called technical equipment, mainly professional - grade audio and video equipment supplied by so - called "balanced '' 120 volt circuits. The center tap of a transformer is connected to ground, and the equipment is supplied by two line wires each 60 volts to ground (and 120 volts between line conductors). The center tap is not distributed to the equipment, and no neutral conductor is used. These installations generally use a grounding conductor which is separated from the safety grounding conductor, specifically for the purposes of noise and "hum '' reduction. Another specialized distribution system was formerly specified in patient care areas of hospitals. An isolated power system was furnished from a special isolation transformer, with the intention of minimizing any leakage current that could pass through equipment directly connected to a patient (for example, an electrocardiograph for monitoring the heart). The neutral of the circuit was not connected to ground. The leakage current was due to the distributed capacitance of the wiring and the capacitance of the supply transformer. Such distribution systems were monitored by permanently installed instruments to give an alarm when high leakage current was detected. A shared neutral is a connection in which a plurality of circuits use the same neutral connection. This is also known as a common neutral, and the circuits and neutral together are sometimes referred to as an Edison circuit. In a three - phase circuit, a neutral is shared between all three phases. Commonly the system neutral is connected to the star point on the feeding transformer. This is the reason that the secondary side of most three - phase distribution transformers is wye - or star - wound. Three - phase transformers and their associated neutrals are usually found in industrial distribution environments. A system could be made entirely ungrounded. In this case a fault between one phase and ground would not cause any significant current, but such a first fault can also persist undetected, which is one reason this scheme is generally not preferred. Commonly the neutral is grounded (earthed) through a bond between the neutral bar and the earth bar. It is common on larger systems to monitor any current flowing through the neutral - to - earth link and use this as the basis for neutral fault protection. The connection between neutral and earth allows any phase - to - earth fault to develop enough current flow to "trip '' the circuit overcurrent protection device. In some jurisdictions, calculations are required to ensure the fault loop impedance is low enough that fault current will trip the protection (in Australia, this is referred to in AS3000: 2007 as the fault loop impedance calculation). This may limit the length of a branch circuit. In the case of two phases sharing one neutral, the worst - case current draw occurs when one side has zero load and the other has full load, or when both sides have full load. The latter case results in 1∠0° + 1∠120° = 1∠60°, i.e. the magnitude of the current in the neutral equals that of the other two wires. In a three - phase linear circuit with three identical resistive or reactive loads, the neutral carries no current. The neutral carries current if the loads on each phase are not identical. In some jurisdictions, the neutral is allowed to be reduced in size if no unbalanced current flow is expected. If the neutral is smaller than the phase conductors, it can be overloaded if a large unbalanced load occurs.
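The phasor arithmetic above is straightforward to verify numerically. Here is a minimal sketch in Python, treating currents as complex phasors with magnitude 1 representing full load; the unequal magnitudes in the last example are arbitrary illustrative values:

```python
# Phasor check of shared-neutral currents, using complex numbers.
import cmath
import math

def phasor(mag: float, deg: float) -> complex:
    """A current phasor of the given magnitude and phase angle."""
    return cmath.rect(mag, math.radians(deg))

# Two phases (120 degrees apart) sharing one neutral, both fully
# loaded: the neutral current matches the phase current magnitude.
i_n = phasor(1, 0) + phasor(1, 120)
print(abs(i_n), math.degrees(cmath.phase(i_n)))   # 1.0 at 60 degrees

# Three identical loads across all three phases: the phasors cancel
# and the shared neutral ideally carries no current.
print(abs(phasor(1, 0) + phasor(1, 120) + phasor(1, 240)))  # ~0.0

# Unequal loading leaves a residual neutral current.
print(abs(phasor(1.0, 0) + phasor(0.5, 120) + phasor(0.8, 240)))  # ~0.44
```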
The current drawn by non-linear loads, such as fluorescent and HID lighting and electronic equipment containing switching power supplies, often contains harmonics. Triplen harmonic currents (odd multiples of the third harmonic) are additive in the neutral, resulting in more current in the shared neutral conductor than in any of the phase conductors. In the absolute worst case, the current in the shared neutral conductor can be triple that in each phase conductor. Some jurisdictions prohibit the use of shared neutral conductors when feeding single - phase loads from a three - phase source; others require that the neutral conductor be substantially larger than the phase conductors. It is good practice to use four - pole circuit breakers (as opposed to the standard three - pole) in which the fourth pole switches the neutral, so that the neutral conductor is also protected against overcurrent. In split - phase wiring, for example a duplex receptacle in a North American kitchen, devices may be connected with a cable that has three conductors, in addition to ground. The three conductors are usually coloured red, black, and white. The white serves as a common neutral, while the red and black each feed, separately, the top and bottom hot sides of the receptacle. Typically such receptacles are supplied from two circuit breakers in which the handles of two poles are tied together for a common trip. If two large appliances are used at once, current passes through both hot conductors and the neutral carries only the difference in current. The advantage is that only three wires are required to serve these loads, instead of four. If one kitchen appliance overloads the circuit, the other side of the duplex receptacle will be shut off as well. This is called a multiwire branch circuit. Common trip is required when the connected load uses more than one phase simultaneously. The common trip prevents overloading of the shared neutral if one device draws more than rated current. A ground connection that is missing or of inadequate capacity may not provide the protective functions intended during a fault in the connected equipment. Extra connections between ground and circuit neutral may result in circulating current in the ground path, stray current introduced in the earth or in a structure, and stray voltage. Extra ground connections on a neutral conductor may bypass the protection provided by a ground - fault circuit interrupter. Signal circuits that rely on a ground connection will not function, or will function erratically, if the ground connection is missing. A numerical sketch of the triplen - harmonic effect described above follows.
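The sketch below is in Python with NumPy; the 0.6 harmonic amplitude is an arbitrary assumption standing in for a heavily non-linear load. It shows the fundamentals cancelling in the shared neutral while the third harmonics add:

```python
# Numerical check: third harmonics on three phases are mutually in
# phase, so they add in the shared neutral while fundamentals cancel.
import numpy as np

t = np.linspace(0.0, 1.0, 10_000, endpoint=False)  # one fundamental cycle

def phase_current(shift_deg: float, h3: float) -> np.ndarray:
    """Unit-amplitude fundamental plus a third harmonic of amplitude h3."""
    wt = 2 * np.pi * t - np.radians(shift_deg)
    return np.sin(wt) + h3 * np.sin(3 * wt)

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(x ** 2)))

h3 = 0.6  # assumed harmonic content of a non-linear load
i_a, i_b, i_c = (phase_current(d, h3) for d in (0, 120, 240))
i_neutral = i_a + i_b + i_c

print(rms(i_a))        # ~0.82 per phase
print(rms(i_neutral))  # ~1.27 in the neutral (= 3 * h3 / sqrt(2))
# If the load current were pure third harmonic, the neutral RMS would
# approach three times the phase RMS -- the "triple" worst case above.
```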
where does the last name chin come from
Chin (disambiguation) - wikipedia The chin is the lowermost part of the human face. Chin may also refer to:
if you are born in brazil are you a citizen
Brazilian nationality law - wikipedia Brazilian nationality law is based on both the principles of jus soli and of jus sanguinis. As a general rule, any person born in Brazil acquires Brazilian nationality at birth, irrespective of status of parents. Nationality law is regulated by Article 12 of the Brazilian Federal Constitution. A person born in Brazil acquires Brazilian nationality at birth. The only exception applies to children of persons in the service of a foreign government (such as foreign diplomats). This means that the parents and siblings of the Brazilian child are eligible to apply for permanent residency in Brazil, regardless of their nationality or where subsequent siblings were / are born. Brazilian law considers as Brazilian citizens people born abroad in two cases: Between 1994 and 2007, registration with a Brazilian consular office did not confer Brazilian nationality. In September 2007, a constitutional amendment reinstituted consular registration as a means of acquiring Brazilian nationality. Foreigners may apply for Brazilian nationality if they meet the following criteria: The residence requirement may be reduced in certain circumstances: Those who have lived in Brazil for more than 15 years and have no criminal conviction do not have to satisfy any other condition for naturalization. There are also lower requirements for those who moved to Brazil as minors. Since 10 May 2016, Brazil does not require naturalized citizens to renounce their previous nationality. According to the Brazilian constitution, Brazilian citizens who acquire another nationality may lose Brazilian nationality. However, since 1994 a constitutional amendment allows two exceptions where Brazilians may maintain Brazilian nationality while acquiring another one. The first exception is in the case of recognition of "originary nationality '' by foreign law. Contrary to a popular misconception, this term does not refer to recognition of original Brazilian nationality by the other country, but to cases where the other nationality is acquired by origin (by birth or descent, as opposed to naturalization). Whether the other country recognizes dual nationality is irrelevant. The second exception is in case the other country requires naturalization for the person to remain residing or to exercise civil rights there. Although the government has the power to revoke the Brazilian nationality of those who voluntarily naturalized in another country and did not satisfy one of the exceptions, it tends to apply these exceptions very broadly, and in practice it only revokes Brazilian nationality if the person formally requests it, or very rarely in exceptional circumstances. For example, in 2013 the Brazilian government revoked the nationality of a Brazilian citizen who had naturalized in the United States, to extradite her to that country (the Brazilian constitution does not allow extradition of its own citizens). The decision was confirmed by the Supreme Federal Court in 2017, and she was extradited in 2018. Those who lose Brazilian nationality due to naturalization in another country may apply for its reacquisition after requesting renunciation of the other nationality. Naturalized Brazilians are allowed to also keep their previous nationality. They may lose Brazilian nationality if convicted of activity considered "harmful to the national interest ''. In general, Brazilian citizens must enter and leave Brazil on a Brazilian passport or Brazilian identity card, even if they also hold a passport of another country.
However, in exceptional circumstances, such as dual citizens who work in foreign government jobs that prohibit the use of a Brazilian passport, Brazil accepts issuing visitor visas in their foreign passports for travel to Brazil. Visa requirements for Brazilian citizens are administrative entry restrictions by the authorities of other states placed on citizens of Brazil. As of 1 January 2017, Brazilian citizens had visa - free or visa on arrival access to 156 countries and territories, ranking the Brazilian passport 18th in terms of travel freedom according to the Henley visa restrictions index. Male Brazilian citizens have a 12 - month military service obligation, unless the citizen has a disqualifying physical or psychological condition, or if the citizen does not wish to serve and the military finds enough volunteers to support its needs. Therefore, although registering for the military is mandatory, about 95 % of those who register receive an exemption. Male citizens between 18 and 45 years of age are required to present a military registration certificate when applying for a Brazilian passport. Voting in Brazil is mandatory for citizens between 18 and 70 years of age. Those who do not vote in an election and do not later present an acceptable justification (such as being away from their voting location at the time) must pay a fine of 3.51 BRL. Citizens between 18 and 70 years of age are required to present proof of voting compliance (by having voted, justified absence or paid the fine) when applying for a Brazilian passport.
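As a minimal illustration of the birthright rule described earlier (a simplified, hypothetical helper in Python; it ignores the jus sanguinis and naturalization paths, and is a sketch of the stated rule rather than legal logic):

def brazilian_at_birth(born_in_brazil, parents_in_foreign_government_service):
    # Jus soli rule as described above: birth in Brazil confers nationality,
    # except for children of persons in the service of a foreign government.
    return born_in_brazil and not parents_in_foreign_government_service

assert brazilian_at_birth(True, False)       # born in Brazil to private foreign residents
assert not brazilian_at_birth(True, True)    # child of foreign diplomats
assert not brazilian_at_birth(False, False)  # born abroad: the jus sanguinis cases apply instead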
what city would you visit to see canada’s parliament buildings
Parliament Hill - wikipedia Parliament Hill (French: Colline du Parlement), colloquially known as The Hill, is an area of Crown land on the southern banks of the Ottawa River in downtown Ottawa, Ontario, Canada. Its Gothic revival suite of buildings is the home of the Parliament of Canada and has architectural elements of national symbolic importance. Parliament Hill attracts approximately 3 million visitors each year. Law enforcement on Parliament Hill and in the parliamentary precinct is the responsibility of the Parliamentary Protective Service (PPS). Originally the site of a military base in the 18th and early 19th centuries, development of the area into a governmental precinct began in 1859, after Queen Victoria chose Ottawa as the capital of the Province of Canada. Following a number of extensions to the parliament and departmental buildings and a fire in 1916 that destroyed the Centre Block, Parliament Hill took on its present form with the completion of the Peace Tower in 1927. Since 2002, an extensive $1 billion renovation and rehabilitation project has been underway throughout all of the precinct 's buildings; work is not expected to be complete until after 2020. Parliament Hill is a limestone outcrop with a gently sloping top that was originally covered in primeval forest of beech and hemlock. For hundreds of years, the hill served as a landmark on the Ottawa River for First Nations and, later, European traders, adventurers, and industrialists, to mark their journey to the interior of the continent. After Ottawa -- then called Bytown -- was founded, the builders of the Rideau Canal used the hill as a location for a military base, naming it Barrack Hill. A large fortress was planned for the site, but was never built, and by the mid 19th century the hill had lost its strategic importance. In 1858, Queen Victoria selected Ottawa as the capital of the Province of Canada, and Barrack Hill was chosen as the site for the new parliament buildings, given its prominence over both the town and the river, as well as the fact that it was already owned by the Crown. On 7 May 1859, the Department of Public Works issued a call for design proposals for the new parliament buildings to be erected on Barrack Hill, which was answered with 298 submitted drawings. After the entries were narrowed down to three, Governor General Sir Edmund Walker Head was approached to break the stalemate, and the winners were announced on August 29, 1859. The Centre Block, departmental buildings, and a new residence for the governor general were each awarded separately, the team of Thomas Fuller and Chilion Jones, under the pseudonym of Semper Paratus, winning the prize for the first category with their Victorian High Gothic scheme of a formal, symmetrical front facing a quadrangle, and a more rustic, picturesque back facing the escarpment overlooking the Ottawa River. The team of Thomas Stent and Augustus Laver, under the pseudonym of Stat nomen in umbra, won the prize for the second category, which included the East and West Blocks. These proposals were selected for their sophisticated use of Gothic architecture, which was thought to remind people of parliamentary democracy 's history, to contradict the republican Neoclassicism of the United States ' capital, and to suit the rugged surroundings while also being stately. $300,000 was allocated for the main building, and $120,000 for each of the departmental buildings.
Ground was broken on December 20, 1859, and the first stones were laid on April 16 of the following year; Prince Albert Edward, Prince of Wales (later King Edward VII), laid the cornerstone of the Centre Block on September 1, 1860. The construction of Parliament Hill became the largest project undertaken in North America to that date. However, workers hit bedrock earlier than expected, necessitating blasting in order to complete the foundations, which had also been altered by the architects in order to sit 5.25 metres (17 ft) deeper than originally planned. By early 1861, Public Works reported that $1,424,882.55 had been spent on the venture, leading to the site being closed in September and the unfinished structures covered in tarpaulins until 1863, when construction resumed following a commission of inquiry. Two years later, the unfinished site hosted a celebration of Queen Victoria 's birthday, further cementing the area 's position as the central place for national outpouring. The site was still incomplete when three of the British North American colonies (the Province of Canada, Nova Scotia, and New Brunswick, now the provinces of Ontario, Quebec, Nova Scotia, and New Brunswick) entered Confederation in 1867, with Ottawa remaining the capital of the new country. Within six years Manitoba, British Columbia, Prince Edward Island, and the North - West Territories (now Alberta, Saskatchewan, Yukon, Northwest Territories, and Nunavut) were added and, along with the associated bureaucracy, the first three required that representation be added in parliament. Thus, the offices of parliament spread to buildings beyond Parliament Hill even at that early date. The British military gave a nine - pound naval cannon to the British army garrison stationed in Ottawa in 1854. It was purchased by the Canadian government in 1869 and fired on Parliament Hill for many years as the "Noonday Gun ''. By 1876, the structures of Parliament Hill were finished, along with the surrounding fence and gates. However, the grounds had yet to be properly designed; Governor General the Marquess of Dufferin and Ava sent chief architect Thomas Scott to New York City to meet with Calvert Vaux and view Central Park. Vaux completed a layout for the landscape of Parliament Hill, including the present day driveways, terraces, and main lawn, while Scott created the more informal grounds to the sides of and behind the buildings. In 1901 they were the site of both mourning for, and celebration of, Queen Victoria, when the Queen 's death was mourned in official ceremonies in January of that year, and when, in late September, Victoria 's grandson, Prince George, Duke of Cornwall (later King George V), dedicated the large statue that stands on the hill in the late queen 's honour. Fire destroyed the Centre Block on February 3, 1916. Despite the ongoing war, the original cornerstone was re-laid by Governor General Prince Arthur, Duke of Connaught, on September 1, 1916; exactly fifty - six years after his brother, the future King Edward VII, had first set it. Eleven years later, the new tower was completed and dedicated as the Peace Tower, in commemoration of the Canadians who had lost their lives during the First World War. Thereafter, The Hill hosted a number of significant events in Canadian history, including the first visit of the reigning Canadian sovereign -- King George VI, with his consort, Queen Elizabeth -- to his parliament, on May 19, 1939.
VE Day was marked with a huge celebration on May 8, 1945, the first raising of the country 's new national flag took place on February 15, 1965, the centennial of Confederation was celebrated on July 1, 1967, and the Silver Jubilee of Queen Elizabeth II was marked on October 18, 1977. The Queen was back on Parliament Hill on April 17, 1982, to issue a royal proclamation of the enactment of the Constitution Act that year. In April 1989, a Greyhound Lines bus with 11 passengers on board travelling to New York City from Montreal was hijacked by an armed man and driven onto the lawn in front of the Centre Block. A standoff with police ensued and lasted eight hours; though three shots were fired, there were no injuries. After a second incident in September 1996 where an individual forcibly drove his car into the Centre Block doors and proceeded to attack RCMP officers standing guard, it was decided in the interests of national security that Parliament Hill, which up to that time had been open to limited public traffic on the lower lawn, would be restricted to government and media vehicles only. Crowds marked the beginning of the third millennium with a large ceremony on the quadrangle and the "largest single vigil '' ever seen in the nation 's capital took place in 2001, when 100,000 people gathered on the main lawn to honour the victims of the September 11 attacks on the United States that year. The following year, Queen Elizabeth II 's Golden Jubilee was marked on October 13, as was her Diamond Jubilee on February 6 (Accession Day) 2012. On October 22, 2014, several shooting incidents occurred around Parliament Hill. A gunman, after fatally shooting a Canadian Army soldier mounting the ceremonial guard at the National War Memorial, moved to the Centre Block of the parliament buildings. There, he was killed in a shootout with RCMP officers and the Sergeant - At - Arms, Kevin Vickers. The gunman also injured one House of Commons constable, who was shot in the foot. The 88,480 square metres (952,391 sq ft) area, maintained by the National Capital Commission, is named by the Parliament of Canada Act as Parliament Hill and defined as resting between the Ottawa River on the north, the Rideau Canal on the east, Wellington Street on the south, and a service road (Kent Street) near the Supreme Court on the west. The south front of the property is demarcated by a Victorian High Gothic wrought iron fence, named the Wellington Wall and in the centre of which, on axis with the Peace Tower to the north, sits the formal entrance to Parliament Hill: the Queen 's Gates, forged by Ives & Co. of Montreal. At each southern corner of the quadrangle are also smaller gates for every - day access. The main outdoor area of The Hill is the quadrangle, formed by the arrangement of the parliament and departmental buildings on the site, and laid out in a formal garden fashion. This expanse is the site of major celebrations, demonstrations, and traditional shows, such as the changing of the guard, or the annual Canada Day celebrations. To the sides of the buildings, the grounds are set in the English garden style, dotted with statues, memorials, and, at the northwest corner, a Carpenter Gothic structure called the Summer Gazebo, a 1995 reconstruction of an earlier gazebo, Summer House, built for the Speaker of the House of Commons in 1877 by Thomas Seton Scott and demolished in 1956. Beyond the edges of these landscaped areas, the escarpment remains in its natural state. 
Though Parliament Hill remains the heart of the parliamentary precinct, expansion beyond the bounded area described above began in 1884, with the construction of the Langevin Block across Wellington Street. After land to the east, across the canal, was purchased by private interests (to build the Château Laurier hotel), growth of the parliamentary infrastructure moved westward along Wellington, with the erection in the 1930s of the Confederation and Justice Buildings on the north side, and then further construction to the south. By the 1970s, the Crown began purchasing other structures or leasing space deeper within the downtown, civic area of Ottawa. In 1973, the Crown expropriated the entire block between Wellington and Sparks Streets with the intent of constructing a south block for Parliament Hill. However, the government dropped this proposal and instead constructed more office space in Hull, Quebec, such as the Terrasses de la Chaudière and Place du Portage. In 1976, the Parliament Buildings and the grounds of Parliament Hill were each designated as National Historic Sites of Canada, given their importance as the physical embodiment of the Canadian government and as the focal point of national celebrations. The Parliament of Canada Act renders it illegal for anyone to name any other area or establishment within the National Capital Region as Parliament Hill, as well as forbidding the production of merchandise with that name on it. Any violation of this law is punishable on summary conviction. The parliament buildings are three edifices arranged around three sides of Parliament Hill 's central lawn, the use and administration of the spaces within each building overseen by the speakers of each chamber of the legislature. The Centre Block has the Senate and Commons chambers, and is fronted by the Peace Tower on the south facade, with the Library of Parliament at the building 's rear. The East and West Blocks each contain ministers ' and senators ' offices, as well as meeting rooms and other administrative spaces. Gothic Revival has been used as the unifying style of all three structures, though the Centre Block is a more modern Gothic Revival, while the older East and West Blocks are of a Victorian High Gothic. This collection is one of the world 's most important examples of the Gothic Revival style; while the buildings ' manner and design are unquestionably Gothic, they resemble no building constructed during the Middle Ages. The forms were the same, but their arrangement was uniquely modern. The parliament buildings also departed from the Medieval models by integrating a variety of eras and styles of Gothic architecture, including elements from Britain, France, the Low Countries, and Italy, all in three buildings. In his 1867 Hand Book to the Parliamentary and Departmental Buildings, Canada, Joseph Bureau wrote: "The style of the Buildings is the Gothic of the 12th and 13th Centuries, with modifications to suit the climate of Canada. The ornamental work and the dressing round the windows are of Ohio sandstone. The plain surface is faced with a cream - coloured sandstone of the Potsdam formation, obtained from Nepean, a few miles from Ottawa. The spandrils (sic) of the arches, and the spaces between window - arches and the sills of the upper windows, are filled up with a quaint description of stonework, composed of stones of irregular size, shape and color (sic), very neatly set together.
These with the Potsdam red sandstone employed in forming the arches over the windows, afford a pleasant variety of color (sic) and effect, and contrast with the general masses of light coloured sandstone, of which the body of the work is composed. '' The sculptural ornament is overseen by the Dominion Sculptor. Five people have held the position since its creation in 1936: Cléophas Soucy (1936 -- 50), William Oosterhoff (1949 -- 62), Eleanor Milne (1962 -- 93), Maurice Joanisse (1993 -- 2006) and Phil R. White (2006 -- present). The only structure on Parliament Hill to have been purposefully demolished was the old Supreme Court building, which stood behind the West Block and housed the Supreme Court of Canada between 1889 and 1945. Throughout the 1950s and 1960s, there were proposals to demolish other parliamentary precinct buildings, including the Library of Parliament and West Block for new structures, and the East Block for parking, but none of these plans were adopted. Instead, renovations were undertaken to the East Block, beginning in 1966. In 2002, an extensive $1 billion renovation project began across the parliamentary precinct, specifically focusing on masonry restoration, asbestos removal, vehicle screening, parking, electrical and mechanical systems, and improved visitors ' facilities. The Library of Parliament and Peace Tower, as well as some exterior areas of masonry on the Centre Block, have so far been completed, though focus has shifted to the West Block due to its rapidly deteriorating cladding. In 2018, when the Centre Block is slated to be closed for five years to carry out an extensive interior restoration and upgrade, the inner courtyard of the newly renovated West Block will be enclosed and fitted with a temporary chamber for the House of Commons, while the Senate will be temporarily relocated down Wellington Street to the Government Conference Centre. Restoration of the East Block is set to commence upon completion of the Centre Block 's restoration. Most of the statues on Parliament Hill are arranged behind the three parliamentary buildings, with one outside of the main fence. A number of other monuments are distributed across the hill, marking historical moments or acting as memorials for larger groups of people.
which of the following states does not have a bjp government
National Democratic alliance (India) - wikipedia The National Democratic Alliance (NDA) is a centre - right coalition of political parties in India. At the time of its formation in 1998, it was led by the Bharatiya Janata Party (BJP) and had thirteen constituent parties. Its chairman was former Prime Minister Atal Bihari Vajpayee. Also representing the alliance are L.K. Advani, former Deputy Prime Minister, who is the acting chairman of the alliance; Narendra Modi, current Prime Minister and Leader of the House in the Lok Sabha; and Arun Jaitley, Leader of the House in the Rajya Sabha and Finance Minister. The coalition was in power from 1998 to 2004. The alliance returned to power in the 2014 general election with a combined vote share of 38.5 %. Its leader, Narendra Modi, was sworn in as Prime Minister of India on 26 May 2014. The National Democratic Alliance was formed in May 1998 as a coalition to contest the general elections. It was led by the Bharatiya Janata Party, and included several regional parties, including the Samata Party and the All India Anna Dravida Munnetra Kazhagam (AIADMK), as well as the Shiv Sena, the only member which shared the Hindutva ideology of the BJP. With outside support provided by the Telugu Desam Party (TDP), the NDA was able to muster a slim majority in the elections of 1998, and Atal Bihari Vajpayee returned as prime minister. The government collapsed within a year because the AIADMK withdrew its support. After the entry of a few more regional parties, the NDA proceeded to win the 1999 elections with a larger majority. Vajpayee became Prime Minister for a third time, this time for a full five - year term. The NDA called elections in early 2004, six months ahead of schedule. Its campaign was based around the slogan of "India Shining '', which attempted to depict the NDA government as responsible for a rapid economic transformation of the country. However, the NDA suffered a defeat, winning only 186 seats in the Lok Sabha, compared to the 222 of the United Progressive Alliance led by the Congress, with Manmohan Singh succeeding Vajpayee as prime minister. Some commentators have stated that the NDA 's failure to reach out to the rural masses was the explanation for its defeat; others have pointed to its "divisive '' policy agenda as the reason. The National Democratic Alliance does not have a formal governing structure in place, such as an executive board or politburo. It has been up to the leaders of the individual parties to make decisions on issues such as the sharing of seats in elections, the allocation of ministries, and the issues that are raised in Parliament. Given the varied ideologies among the parties, there have been many cases of disagreement and split voting among the allies. Owing to ill health, George Fernandes, who was the NDA convener until 2008, was relieved of his responsibility and replaced by Sharad Yadav, the then national president of the Janata Dal (United) political party. On 16 June 2013, the JD (U) left the coalition and Sharad Yadav resigned as NDA convener. Chandrababu Naidu, the Chief Minister of Andhra Pradesh, was subsequently made the NDA convener. On 27 July 2017, the JD (U) formed the government in Bihar with the help of the BJP; on 19 August 2017, it formally rejoined the NDA after four years.
Currently, the parties in and supporting the NDA are: As of March 2018, the BJP holds a majority of the Legislative Assembly in 13 states - Arunachal Pradesh, Assam, Chhattisgarh, Gujarat, Haryana, Himachal Pradesh, Jharkhand, Madhya Pradesh, Manipur, Rajasthan, Tripura, Uttarakhand and Uttar Pradesh. In 2 states - Goa and Maharashtra - the BJP shares power as the senior partner (with Chief Ministers from the BJP) with other political parties of the NDA coalition. In 5 other states - Bihar, Jammu and Kashmir, Meghalaya, Nagaland and Sikkim - it shares power as a junior partner with other political parties of the NDA coalition. The BJP has previously been the sole party in power in Karnataka and the National Capital Territory of Delhi. It has also ruled Andhra Pradesh, Odisha, Punjab and Puducherry as part of coalition governments. ^ The BJP fielded candidates for 427 seats out of 543, but the nomination of BJP candidate S. Gurumoorthy was rejected in the Nilgiris for failing to submit mandatory forms during his nomination. Past and present NDA members and supporting parties include the Janata Dal (United), Shiromani Akali Dal, Shiv Sena, Indian National Lok Dal, Rashtriya Lok Dal, Asom Gana Parishad, Nagaland People 's Front, Gorkha Janmukti Morcha, Uttarakhand Kranti Dal, Telugu Desam Party, All India Anna Dravida Munnetra Kazhagam, Biju Janata Dal, All India Trinamool Congress, Pattali Makkal Katchi, Marumalarchi Dravida Munnetra Kazhagam, Haryana Vikas Party, Mizo National Front, Sikkim Democratic Front, Manipur State Congress Party, Lok Shakti and Telangana Rashtra Samithi, among others. New parties that have joined the NDA coalition are the Haryana - based Haryana Janhit Congress (BL) and the Maharashtra - based Republican Party of India. The Ajit Singh - led Rashtriya Lok Dal withdrew from the NDA. The NDA nominated P.A. Sangma as its presidential candidate; he lost against the UPA 's Pranab Mukherjee. Jaswant Singh was named as the candidate for the post of Vice-President against the UPA 's Hamid Ansari. Ansari won his second term in office. On 16 June 2013, the Nitish Kumar - led Janata Dal (United) withdrew from the NDA. On 13 September 2013, Narendra Modi was declared the prime ministerial candidate for the 2014 elections. On 11 August 2013, Janata Party Chairman Dr. Subramanian Swamy officially joined the Bharatiya Janata Party and merged his Janata Party with it; the announcement was made by Swamy and BJP president Rajnath Singh after they met at the latter 's residence in Delhi, with former BJP chief Nitin Gadkari and senior party leader Arun Jaitley also present. On 1 January 2014, Marumalarchi Dravida Munnetra Kazhagam leader Vaiko announced that the MDMK had formally rejoined the NDA. Vaiko also announced that Modi would be the best candidate for Prime Minister.
Two small parties, the Kongunadu Munnetra Kazhagam and the Indhiya Jananayaga Katchi, have also joined the NDA alliance. The BJP would like two more southern parties, the Desiya Murpokku Dravida Kazhagam and the Pattali Makkal Katchi, to join the alliance as well. In Maharashtra, two regional political outfits, Swabhimani Paksha and Rashtriya Samaj Paksha, joined the NDA in January. The coalition of five parties is termed the Mahayuti, so the NDA alliance in Maharashtra now consists of five parties: the BJP, Shiv Sena, Republican Party of India, Swabhimani Paksha and Rashtriya Samaj Paksha. On 23 February 2014, the Rashtriya Lok Samata Party led by Upendra Kushwaha joined the NDA and would contest 3 Lok Sabha seats in Bihar. On 27 February 2014, the Lok Janshakti Party led by Ramvilas Paswan joined the NDA; it would contest 7 Lok Sabha seats in Bihar during the 2014 elections. The DMDK would fight the Lok Sabha election through an alliance with the BJP - led NDA; the MDMK and the PMK - led Social Democratic Alliance are the other allies of the NDA in Tamil Nadu. Maharashtra Navnirman Sena: its president, Raj Thackeray, announced external support to the NDA on 9 March 2014, which is marked as the party 's formation day, supporting Narendra Modi as the prime ministerial candidate. Indian National Lok Dal: its general secretary, Ajay Singh Chautala, announced external support to the NDA, supporting Narendra Modi as the prime ministerial candidate. Lok Satta Party: its president, JP Narayan, announced external support to the NDA, supporting Narendra Modi as the prime ministerial candidate. The All India NR Congress (AINRC) formally joined the NDA on 13 March 2014 and would contest in Puducherry. The Telugu Desam Party (TDP) rejoined the NDA on 6 April, having broken the alliance after the 2004 general election defeat. Though the Shiv Sena quit the Mahayuti in Maharashtra before the 2014 Maharashtra Legislative Assembly elections, it decided to remain with the NDA at the Centre. The All Jharkhand Students Union clinched an alliance with the BJP for the Jharkhand Assembly elections, under which the junior partner would contest eight of the 81 seats in the state. On February 27, 2015, the Bharatiya Janata Party clinched an alliance with the People 's Democratic Party for government formation in Jammu & Kashmir, under which the Chief Minister would be from the PDP. In January 2016, the Bharatiya Janata Party clinched an alliance with the Bodoland People 's Front in Assam. In March 2016, after a meeting with AGP President Atul Bora and former Chief Minister Prafulla Kumar Mahanta, the BJP formed an alliance with the Asom Gana Parishad for the 2016 Assam Legislative Assembly election. The BJP also aligned with the Rabha and Tiwa tribal outfits Rabha Jatiya Aikya Manch and Tiwa Jatiya Aikya Manch. In March 2016, the BJP forged an alliance with the Kerala - based Ezhava outfit Bharath Dharma Jana Sena Party for the 2016 Kerala elections. Following the BJP 's victory in the 2016 Assam Legislative Assembly elections, the party formed an alliance of like - minded non-Congress parties in the Northeast, called the North - East Democratic Alliance, consisting of 11 regional parties of Northeast India. Himanta Biswa Sarma, a BJP leader from Assam, was appointed convener of the regional alliance. On December 21, 2016, Chief Minister Pema Khandu of Arunachal Pradesh was suspended from the People 's Party of Arunachal by the party president, along with 6 other MLAs, and Takam Pario was named as the next likely Chief Minister to replace Khandu.
In December 2016, Khandu proved his majority on the floor, with 33 of the People 's Party of Arunachal 's 43 legislators joining the Bharatiya Janata Party; the BJP 's strength increased to 45, with the support of two independents. He became the second BJP Chief Minister of Arunachal Pradesh, after the 44 - day Gegong Apang government of 2003. In January 2017, the Bharatiya Janata Party 's alliance partner in Goa, the Maharashtrawadi Gomantak Party, and its partner in Maharashtra, the Shiv Sena, came together to contest the 2017 Goa Legislative Assembly election against the BJP with another Sangh Parivar group called the Goa Suraksha Manch. The results of the 2017 Goa Assembly election gave rise to a hung assembly, since no political party could achieve a complete majority of 21 in the 40 - member Goa Legislative Assembly. The Indian National Congress emerged as the largest party with 17 seats but, ultimately, the Bharatiya Janata Party, which emerged victorious in 13 constituencies, formed the government with the support of the Goa Forward Party, the Maharashtrawadi Gomantak Party and independents. The Goa Forward Party expressed its support to the Bharatiya Janata Party on the condition that the then Union Defence Minister of India, Manohar Parrikar, would return to Goa as the Chief Minister of Goa. On 15 March 2017, N. Biren Singh was sworn in as Chief Minister, heading a coalition with the NPP, NPF, LJP and others; it was the first time that the BJP formed a government in Manipur, though the INC had emerged as the single largest party. On 27 July 2017, the Janata Dal (United) rejoined the NDA and formed a coalition government with the Bharatiya Janata Party (BJP) in Bihar, with Nitish Kumar as the Chief Minister and Sushil Kumar Modi as the Deputy Chief Minister; with that, the BJP completed its dominance of the Hindi belt. On 9 March 2018, Biplab Kumar Deb was sworn in as Chief Minister on the strength of a pre-poll alliance with the IPFT, the first time that the BJP formed a government in Tripura. After the BJP government failed to implement the Andhra Pradesh Reorganisation Act, 2014, the TDP, a major ally from South India, pulled out of the NDA, formally withdrawing on 16 March 2018. The party also suspected that the BJP was conspiring with the YSRCP of Jaganmohan Reddy and the Jana Sena Party of Pawan Kalyan to weaken the TDP.
when do figs appear on a fig tree
Common fig - wikipedia Ficus carica is an Asian species of flowering plant in the mulberry family, known as the common fig (or just the fig). It is the source of the fruit also called the fig, and as such is an important crop in those areas where it is grown commercially. Native to the Middle East and western Asia, it has been sought out and cultivated since ancient times, and is now widely grown throughout the world, both for its fruit and as an ornamental plant. The species has become naturalized in scattered locations in Asia and North America. The term fig has its origins in the Latin word ficus, as well as the older Hebrew name feg. The name of the caprifig (Ficus caprificus Risso) is derived from Latin, with capro referring to goat and ficus referring to fig. Ficus carica is a gynodioecious (functionally dioecious), deciduous tree or large shrub, growing to a height of 7 -- 10 metres (23 -- 33 ft), with smooth white bark. Its fragrant leaves are 12 -- 25 centimetres (4.7 -- 9.8 in) long and 10 -- 18 centimetres (3.9 -- 7.1 in) across, and deeply lobed with three or five lobes. The complex inflorescence consists of a hollow fleshy structure called the syconium, which is lined with numerous unisexual flowers. The flowers themselves are not visible from outside the syconium, as they bloom inside the infructescence. Although commonly referred to as a fruit, the fig is actually the infructescence or syconium of the tree, known as a false fruit or multiple fruit, in which the flowers and seeds are borne. It is a hollow - ended stem containing many flowers. The small orifice (ostiole) visible on the middle of the fruit is a narrow passage, which allows the specialized fig wasp Blastophaga psenes to enter the fruit and pollinate the flowers, whereafter the fruit grows seeds. See Ficus: Fig fruit and reproduction system. The edible fruit consists of the mature syconium containing numerous one - seeded fruits (druplets). The fruit is 3 -- 5 centimetres (1.2 -- 2.0 in) long, with a green skin, sometimes ripening towards purple or brown. Ficus carica has milky sap (laticifer). The sap of the fig 's green parts is an irritant to human skin. The common fig tree has been cultivated since ancient times and grows wild in dry and sunny areas with deep and fresh soil, and also in rocky areas, from sea level to 1,700 metres. It prefers relatively light free - draining soils, and can grow in nutritionally poor soil. Unlike other fig species, Ficus carica does not always require pollination by a wasp or from another tree, but can be pollinated by the fig wasp, Blastophaga psenes, to produce seeds. Fig wasps are not present to pollinate figs in colder countries such as the United Kingdom. The plant can tolerate seasonal drought, and the Middle Eastern and Mediterranean climate is especially suitable for it. Situated in a favorable habitat, old specimens can reach a considerable size when mature and form a large dense shade tree. Its aggressive root system precludes its use in many urban areas of cities, but in nature it helps the plant take root in the most inhospitable areas. The common fig tree is mostly a phreatophyte that lives in areas with standing or running water. It grows well in the valleys of rivers and in ravines, having a strong need of water, which it extracts from the ground. The deep - rooted plant searches for groundwater in aquifers, ravines, or cracks in the rocks.
With this water, the fig tree cools the environment in hot places, creating a fresh and pleasant habitat for the many animals that shelter in its shade during times of intense heat. The mountain or rock fig ("Anjeer Kohi '', انجیر کوهی, in Persian) is a wild variety, tolerant of cold dry climates, of the semi-arid rocky mountainous regions of Iran, especially in the Kohestan Mountains of Khorasan. Ficus carica is dispersed by birds and mammals that scatter their seeds in droppings. Fig fruit is an important food source for much of the fauna in some areas, and the tree owes its expansion to those that feed on its fruit. The common fig tree also sprouts from root and stolon tissues. The infructescence is pollinated by a symbiosis with a kind of fig wasp (Blastophaga psenes). The fertilized female wasp enters the fig through a tiny hole in the crown (the ostiole). She crawls on the inflorescence inside the fig and pollinates some of the female flowers. She lays her eggs inside some of the flowers and dies. After weeks of development in their galls, the male wasps emerge before the females through holes they produce by chewing the galls. The male wasps then fertilize the females by depositing semen in the hole in the gall. The males later return to the females and enlarge the holes to enable the females to emerge. Then some males enlarge holes in the syconium, which enables the females to disperse after collecting pollen from the developed male flowers. Females have a short time (< 48 hours) to find another fig tree with receptive syconia to spread the pollen, assist the tree in reproduction, and lay their own eggs to start a new cycle. The edible fig is one of the first plants that was cultivated by humans. Nine subfossil figs of a parthenocarpic (and therefore sterile) type dating to about 9400 -- 9200 BC were found in the early Neolithic village Gilgal I (in the Jordan Valley, 13 km north of Jericho). The find predates the domestication of wheat, barley, and legumes, and may thus be the first known instance of agriculture. It is proposed that this sterile but desirable type was planted and cultivated intentionally, one thousand years before the next crops were domesticated (wheat and rye). Figs were widespread in ancient Greece, and their cultivation was described by both Aristotle and Theophrastus. Aristotle noted that as in animal sexes, figs have individuals of two kinds, one (the cultivated fig) that bears fruit, and one (the wild caprifig) that assists the other to bear fruit. Further, Aristotle recorded that the fruits of the wild fig contain psenes (fig wasps); these begin life as larvae, and the adult psen splits its "skin '' (pupa) and flies out of the fig to find and enter a cultivated fig, saving it from dropping. Theophrastus observed that just as date palms have male and female flowers, with farmers (from the East) helping by scattering "dust '' from the male on to the female, and as a male fish releases his milt over the female 's eggs, so Greek farmers tie wild figs to cultivated trees. They do not say directly that figs reproduce sexually, however. Figs were also a common food source for the Romans. Cato the Elder, in his c. 160 BC De Agri Cultura, lists several strains of figs grown at the time he wrote his handbook: the Mariscan, African, Herculanean, Saguntine, and the black Tellanian (De agri cultura, ch. 8). The fruits were used, among other things, to fatten geese for the production of a precursor of foie gras.
It was cultivated from Afghanistan to Portugal, and was also grown in Pithoragarh in the Kumaon hills of India. From the 15th century onwards, it was grown in areas including Northern Europe and the New World. In the 16th century, Cardinal Reginald Pole introduced fig trees to Lambeth Palace in London. In 1769, Spanish missionaries led by Junipero Serra brought the first figs to California. The Mission variety, which they cultivated, is still popular. The fact that it is parthenocarpic (setting fruit without pollination) made it an ideal cultivar for introduction. The Kadota cultivar is even older, being mentioned by the Roman naturalist Pliny in the 1st century A.D. As California 's population grew, especially after the gold rush, a number of other varieties were brought to California by individuals and nurserymen from the East Coast of the United States and from France and England, and by the end of the 19th century, it became apparent that California had potential for being a great fig - producing state with its Mediterranean climate and a latitude of 38 degrees, lining San Francisco up with Smyrna, Turkey. G.P. Rixford first brought true Smyrna figs to California in 1880. The effort was amplified by the San Francisco Bulletin Company, which sought to bring new varieties from Smyrna to California and distribute the cuttings to the Bulletin 's subscribers, with the expectation that the subscribers would report back which varieties were most fit for California or regions of California. In 1881, some 14,000 cuttings were shipped in good condition to California and distributed to Bulletin Company subscribers as promised. However, not one of the trees planted produced a single mature fruit. George Roeding concluded this was due to the lack of pollination, since the insect pollinator was not present in California. After a couple of failed attempts, wild fig trees carrying fig wasps were successfully introduced to California on April 6, 1899, to allow for fruit production of Smyrna - type figs. The most popular variety of Smyrna - type fig is Calimyrna, a name combining "California '' and "Smyrna. '' The variety itself, however, is not one produced through a breeding program, but is from one of the cuttings brought to California in the latter part of the 19th century. It is identical to the Lob Injir variety that has been grown in Turkey for many centuries. The common fig is grown for its edible fruit throughout the temperate world. It is also grown as an ornamental tree, and the cultivar ' Brown Turkey ' has gained the Royal Horticultural Society 's Award of Garden Merit. Figs can be found in continental climates with hot summers as far north as Hungary and Moravia, and can be harvested up to four times per year. Thousands of cultivars, most named, have been developed as human migration brought the fig to many places outside its natural range. Fig plants can be propagated by seed or by vegetative methods. Vegetative propagation is quicker and more reliable, as it does not yield the inedible caprifigs. Seeds germinate readily in moist conditions and grow rapidly once established. For vegetative propagation, shoots with buds can be planted in well - watered soil in the spring or summer, or a branch can be scratched to expose the bast (inner bark) and pinned to the ground to allow roots to develop. Two crops of figs can be produced each year. The first or breba crop develops in the spring on last year 's shoot growth.
The main fig crop develops on the current year 's shoot growth and ripens in the late summer or fall. The main crop is generally superior in quantity and quality, but some cultivars such as ' Black Mission ', ' Croisic ', and ' Ventura ' produce good breba crops. There are three types of edible figs: the common fig, which sets fruit without pollination; the Smyrna fig, which must be pollinated by a caprifig; and the San Pedro fig, whose breba crop is parthenocarpic but whose main crop requires pollination. There are dozens of fig cultivars, including main and breba cropping varieties, and an edible caprifig (the Croisic). Varieties are often local, found in a single region of one country. While the fig contains more naturally occurring varieties than any other tree crop, a formal breeding program was not developed until the beginning of the 20th century. Ira Condit, "High Priest of the Fig, '' and William Storey tested some thousands of fig seedlings in the early 20th century, based at the University of California, Riverside. The program was then continued at the University of California, Davis. However, the fig breeding program was ultimately closed in the 1980s. Due to insect and fungal disease pressure in both dried and fresh figs, the breeding program was revived in 1989 by James Doyle and Louise Ferguson using the germplasm established at UC Riverside by Ira Condit and William Storey. Crosses were made and two new varieties are now in production in California: the public variety "Sierra '', and the patented variety "Sequoia ''. According to United Nations FAOSTAT, world production of raw figs in 2014 was 1.14 million tonnes, led by Turkey, Egypt, Algeria, and Morocco as the four largest producers, collectively accounting for 64 % of the world total. While the United States is lower on the list of fig - producing countries, California produces some 80 % of the U.S. production. California varieties in relative order of acreage are: Calimyrna, Mission, Adriatic types (Conadria, Adriatic, Di Redo, Tena), Brown Turkey, Kadota, Sierra, and Sequoia. Figs can be eaten fresh or dried, and used in jam - making. Most commercial production is in dried or otherwise processed forms, since the ripe fruit does not transport well, and once picked does not keep well. The widely produced fig newton or fig roll is a biscuit (cookie) with a filling made from figs. Fresh figs are in season from August through to early October. Fresh figs used in cooking should be plump and soft, and without bruising or splits. If they smell sour, the figs have become over-ripe. Slightly under - ripe figs can be kept at room temperature for 1 -- 2 days to ripen before serving. Figs are most flavorful at room temperature. Raw figs are a good source (14 % of the Daily Value, DV) of dietary fiber per 100 gram serving (74 calories), but otherwise do not supply essential nutrients in significant content. In a 100 gram serving providing 229 calories, dried figs are a rich source (> 20 % DV) of dietary fiber and the essential mineral manganese (26 % DV), while several other dietary minerals are in moderate - to - low content. Figs contain diverse phytochemicals, including polyphenols such as gallic acid, chlorogenic acid, syringic acid, (+) - catechin, (−) - epicatechin and rutin. Fig color may vary between cultivars due to various concentrations of anthocyanins, with cyanidin - 3 - O - rutinoside having particularly high content. In the Biblical Book of Genesis, Adam and Eve clad themselves with fig leaves (Genesis 3: 7) after eating the "forbidden fruit '' from the Tree of Knowledge of Good and Evil.
Likewise, fig leaves, or depictions of fig leaves, have long been used to cover the genitals of nude figures in painting and sculpture, for example in Masaccio 's The Expulsion from the Garden of Eden. The Book of Deuteronomy specifies the fig as one of the Seven Species (Deuteronomy 8: 7 - 8), describing the fertility of the land of Canaan. This is a set of seven plants indigenous to the Middle East that together can provide food all year round. The list is organized by date of harvest, with the fig being fourth due to its main crop ripening during summer. Also in the Bible (Matthew 21: 18 -- 22 and Mark 11: 12 -- 14, 19 -- 21) is a story of Jesus finding a fig tree when he was hungry; the tree had leaves on it, but no fruit. Jesus then curses the fig tree, which withers. The biblical quote "each man under his own vine and fig tree '' (1 Kings 5: 5) has been used to denote peace and prosperity. It was commonly quoted to refer to the life that would be led by settlers in the American West, and was used by Theodor Herzl in his depiction of the future Jewish Homeland: "We are a commonwealth. In form it is new, but in purpose very ancient. Our aim is mentioned in the First Book of Kings: ' Judah and Israel shall dwell securely, each man under his own vine and fig tree, from Dan to Beersheba ''. United States President George Washington, writing in 1790 to the Touro Synagogue of Newport, Rhode Island, extended the metaphor to denote the equality of all Americans regardless of faith. Buddha achieved enlightenment under the bodhi tree, a large and old sacred fig tree (Ficus religiosa, or Pipal). Sura 95 of the Qur'an is named al - Tīn (Arabic for "The Fig ''), as it opens with the oath "By the fig and the olive. '' The fruit is also mentioned elsewhere in the Qur'an. Within the Hadith, Sahih al - Bukhari records Muhammad stating: "If I had to mention a fruit that descended from paradise, I would say this is it because the paradisiacal fruits do not have pits... eat from these fruits for they prevent hemorrhoids, prevent piles and help gout. '' In Greek mythology, the god Apollo sends a crow to collect water from a stream for him. The crow sees a fig tree and waits for the figs to ripen, tempted by the fruit. He knows that he is late and that his tardiness will be punished, so he gets a snake from the stream and collects the water. He presents Apollo with the water and uses the snake as an excuse. Apollo sees through the crow 's lie and throws the crow, goblet, and snake into the sky where they form the constellations Hydra, Crater, and Corvus. In Aristophanes ' Lysistrata one of the women boasts about the "curriculum '' of initiation rites she went through to become an adult woman (Lys. 641 -- 7). As her final accomplishment before marriage, when she was already a fair girl, she bore the basket as a kanephoros, wearing a necklace of dried figs. In the course of his campaign to persuade the Roman Republic to pursue a third Punic War, Cato the Elder produced before the Senate a handful of fresh figs, said to be from Carthage. This showed its proximity to Rome (and hence the threat), and also accused the Senate of weakness and effeminacy: figs were associated with femininity, owing to the appearance of the inside of the fruit. One of many explanations for the origin of the word "sycophant '' (from the Ancient Greek συκοφάντης sykophántēs) is that it refers to the vulgar gesture of showing the fig. Since the flower is invisible, there are various idioms related to it in languages around the world. 
A Bengali idiom uses it: tumi yēna ḍumurēr phul hayē gēlē (তুমি যেন ডুমুরের ফুল হয়ে গেলে), i.e., ' you have become (invisible like) the fig flower (doomurer phool) '. There is a related Hindi idiom, गूलर का फूल (gūlar kā phūl, i.e. flower of the fig), meaning something one would simply never see, i.e. the rarest of the rare. In the Awadh region of the Uttar Pradesh state of India, apart from the standard Hindi idiom, a variant is also used; in the region it is assumed that if some piece of work contains (or is contaminated by) the flower of the fig, it will not get finished, e.g. ' this work contains fig flower ', i.e. it is not getting completed by any means. Gular ka phool (flower of fig) is a collection of poetry written in Hindi by Rajiv Kumar Trigarti. A poem in Telugu written by Yogi Vemana says "Medi pandu chuda melimayyi undunu, potta vippi chuda purugulundunu '', i.e. "The fig fruit looks harmless, but once you open it you find tiny insects (referring to the fig wasp) in there ''. The phrase is comparable with the English phrase "Do n't judge a book by its cover ''.
when is the olympics coming to los angeles
Los Angeles bid for the 2024 Summer Olympics - Wikipedia The Los Angeles bid for the 2024 Summer Olympics and Summer Paralympics was the attempt to bring the Summer Olympic Games to the city of Los Angeles, California in 2024; the games were ultimately awarded to the city for 2028. Following withdrawals by other bidding cities during the 2024 Summer Olympics bidding process that left just two candidate cities (Los Angeles and Paris), the International Olympic Committee (IOC) announced that the 2028 Summer Olympics would be awarded at the same time as 2024. After extended negotiations, Los Angeles agreed to bid for the 2028 Games if certain conditions were met. On July 31, 2017, the IOC announced Los Angeles as the sole candidate for the 2028 games, with $1.8 billion of additional funding to support local sports and the Games program. Los Angeles was chosen by the United States Olympic Committee (USOC) on August 28, 2015, after the Los Angeles City Council voted unanimously to back the bid. Los Angeles was the second city submitted by the USOC for the 2024 Summer Olympics. Boston was originally chosen to be the American bid, but withdrew on July 27, 2015. Los Angeles also originally bid for the USOC 's nomination in late 2014, when Boston was chosen over Los Angeles, Washington, D.C., and San Francisco. This was the third United States summer bid since hosting the Centennial Olympic Games (1996) in Atlanta, the previous two losing in 2012 and 2016 to London and Rio de Janeiro. Los Angeles previously hosted the 1932 Summer Olympics and the 1984 Summer Olympics, and will become the third city -- after London and Paris in 2024 -- to host the Summer Games three times. Los Angeles will become the first American city to host the Olympic Games since the 2002 Winter Olympics in Salt Lake City. It will be the fifth time a US city has hosted the Summer Olympics. In 2006, Los Angeles entered the bidding to become the US applicant city for the 2016 Summer Olympics; the United States Olympic Committee (USOC) selected Chicago instead that year. In September 2011, Los Angeles was awarded the 2015 Special Olympics World Summer Games. In March 2013, Mayor Antonio Villaraigosa sent a letter to the USOC stating that the city was interested in bidding to host the 2024 Olympic Games. On September 17, 2013, the LA County Board of Supervisors unanimously approved a resolution seeking interest in the games. On 26 April 2014, the Southern California Committee for the Olympic Games announced its bid proposal for the 2024 Olympics. On 28 July 2015, the USOC contacted Los Angeles about stepping in as a replacement bidder for the 2024 Summer Games after Boston dropped its bid. On 1 September 2015, the LA City Council voted 15 -- 0 to support a bid for the 2024 Olympic Games. The U.S. Olympic Committee finalised its selection moments after the LA City Council 's vote. On 13 January 2016, Los Angeles 2024 committee officials said they were "thrilled to welcome '' the construction of a $2 - billion - plus, state - of - the - art football stadium in Inglewood, California and believed the arrival of one -- and perhaps two -- NFL teams would bolster its chances. On 25 January 2016, the Los Angeles 2024 committee announced that it planned to place its Olympic Village on the UCLA campus. LA 2024 also announced that media members and some Olympic officials would be housed in a 15 - acre residential complex USC planned to build. On 16 February 2016, LA 2024 unveiled a new logo and its slogan, "Follow the sun ''.
'' On 23 February 2016, more than 88 % of Angelenos were in favor of the city 's bid to host the 2024 Olympic and Paralympic Games, according to a survey conducted by Loyola Marymount University. On 10 March 2016, Los Angeles officials bidding for the 2024 Summer Olympics turned their focus to temporary facilities that might be needed. Current plans include an elevated track built over the football field at the Los Angeles Memorial Coliseum and a proposal to temporarily convert Figueroa Street into a miles - long promenade for pedestrians and bicyclists. On 2 June 2016, the IOC confirmed that Los Angeles would proceed to the second stage of bidding for the 2024 Summer Games. On 29 July 2016, LA 2024 officials released artist renderings of an updated Los Angeles Memorial Coliseum and temporary swim stadium that would be used if Los Angeles is awarded the 2024 Summer Olympics. On 31 July 2016, Mayor Eric Garcetti led a 25 - person contingent from Los Angeles to Rio de Janeiro to promote their city 's bid for the 2024 Summer Olympics. On 7 September 2016, LA 2024 announced plans to send a 16 - person delegation to the 2016 Paralympics in Rio de Janeiro as part of its ongoing campaign to bring the Olympics back to Southern California. On 13 September 2016, the LA 2024 bid committee released a two - minute video featuring a montage of local scenes narrated by children talking about their "dream city ''. On 23 September 2016, LA 2024 agreed to terms with the U.S. Olympic Committee on a required but controversial marketing arrangement. The Joint Marketing Program Agreement outlines shared responsibilities -- and shared income -- between Los Angeles and the USOC. On 7 October 2016, LA 2024 officials again made adjustments to their proposal for the 2024 Summer Olympics, moving half of a large and potentially expensive media center to the USC campus. On 21 October 2016, the LA 2024 bid committee again enlisted U.S. Olympians to help make the case for bringing the Summer Olympics back to Los Angeles. On 9 November 2016, LA 2024 issued a statement noting "LA 2024 congratulates President - elect Donald J. Trump and appreciates his longstanding support of the Olympic movement in the United States. We strongly believe the Olympics and LA 2024 transcend politics and can help unify our diverse communities and our world. '' On 12 November 2016, Mayor Eric Garcetti and six - time gold medalist sprinter Allyson Felix led an LA 2024 presentation to an array of Olympic leaders and sports officials at a general assembly for the Association of National Olympic Committees in Doha, Qatar. On 23 November 2016, President - elect Trump expressed his support for Los Angeles 's 2024 Olympic bid during a phone call with Mayor Garcetti. On 2 December 2016, LA 2024 released a new budget estimating it would spend $5.3 billion to stage the Games. On 2 January 2017, Angeleno Olympians and Paralympians rode on the Rose Parade float titled "Follow the Sun '' to promote the city 's bid. On 9 January 2017, LA 2024 issued a report predicting that the mega-sporting event would boost the local economy by $11.2 billion. On 25 January 2017, the Los Angeles City Council gave unanimous final approval for a privately run bid. On 28 February 2017, it was announced that four Hollywood film studios (Disney, Fox, NBCUniversal and Warner Bros) would be helping promote the Los Angeles bid.
On 20 April 2017, the private committee trying to bring the Summer Olympics back to Los Angeles issued a new set of renderings and videos showing what those Games might look like. Following the decision to award the 2024 and 2028 games simultaneously, Los Angeles announced that it would consider a bid for the 2028 Games, if certain conditions were met. On 31 July 2017, the IOC announced Los Angeles as the sole candidate for the 2028 games, with $1.8 billion of additional funding to support local sport and the Games programme. According to the 2024 Olympic Bid Evaluation Commission, UCLA, the University of Southern California (USC), NBCUniversal, the Los Angeles Rams and the City of Los Angeles are modernizing or building infrastructure for future Olympic venues totaling over $3 billion; this spending is not listed as non-OCOG (Organizing Committee for the Olympic Games) costs. The City of Los Angeles has guaranteed to sign the required Olympic City Charter and be the sole entity responsible for the games and any cost surplus or overruns. The City has pledged to contribute $250 million to cover any cost overruns. The State of California has created an Olympic Games Trust Fund that would pay for potential budget overruns up to $250 million. Both government guarantee payments would take place only if LA 2024 's private insurance proves inadequate to cover cost overruns. The theme and bid embody the Agenda 2020 reforms for an Olympics in Los Angeles, and a surplus of $161 million is predicted. On January 9, 2017, the LA 2024 committee issued a report predicting that the Olympics would boost the local economy by $11.2 billion. LAX, the city 's main airport, is investing more than USD 1.9 billion in an expansion of the Tom Bradley International Terminal. The new Midfield Concourse Terminal is scheduled to add 11 gates by 2019, and many other improvements are planned with an expected completion date of 2023. In 2008, Los Angeles County voters passed a county - wide measure expanding the county 's transportation tax to fund the modernization of its infrastructure. This measure provides funding for many of the highest priority projects, including the Crenshaw / LAX Line connecting to LAX, the Regional Connector light rail subway corridor through Downtown LA to Santa Monica and Long Beach, the Purple Line Extension subway to UCLA, the Los Angeles Streetcar through downtown LA and five other transit lines and projects in the draft stages. The Purple Line and Crenshaw / LAX connectors are to be completed in time for 2024. The transportation plans are already fully funded by LA County voters. A second measure, Measure M, which passed in the November 2016 elections, will extend the transportation tax indefinitely and speed many other projects, with $120 billion in highway and transit projects over forty years, including a Sepulveda Subway line from the Valley to the Los Angeles Westside through the Sepulveda Pass. LA 2024 bid leaders are touting these measures and infrastructure improvements as indicators of the new Los Angeles and a car - free Olympics in a city known for its car culture. The network now comprises 158.5 km (98 miles) of new rail, 93 stations and 350,000 average daily boardings; Los Angeles had no rail lines in 1984. Bid leaders indicate public rail transportation lines will be available to all of the clusters: Downtown Long Beach, San Fernando Valley Sports Park, Downtown L.A., and the Santa Monica beach cluster.
In addition, the 2024 Bid Committee includes a 108 - member athletes ' advisory committee, which includes Andre Agassi, Allyson Felix, Michelle Kwan, Katie Ledecky, Greg Louganis, Carl Lewis, Apolo Ohno, Landon Donovan, Kobe Bryant and Michael Phelps. The Los Angeles Olympic bid committee has stated that its legacy will be delivering a sustainable model for the bidding process and delivery of a cost - effective Olympic Games. Los Angeles bid leaders are focusing on delivering an Olympic Games for the best athlete experience rather than a centerpiece for a city revitalization project, as was recently the case for Sochi, Russia and Beijing, China. Bid leaders have indicated Los Angeles is transforming itself, does not need a city showcase, and has the ability to showcase the athletes instead. The 2024 Los Angeles Olympic bid uses existing venues, venues under construction and new temporary venues in and around the City of Los Angeles. Approximately half of the venues are outside the City of Los Angeles. Many of the proposed venues are facilities constructed after the 1984 games. Staples Center opened in 1999. StubHub Center opened in 2003. Galen Center opened in 2006. Microsoft Theater opened in 2007. UCLA proposed the Olympic Village on its campus, with dorms built in 2015. The Rose Bowl was renovated in 2013. The Forum was renovated in 2014. USC 's "University Village '' is currently under construction and set to open in 2017. The "MyFigueroa '' street redevelopment project is currently under construction. The Banc of California soccer stadium and the American football L.A. Stadium at Hollywood Park are currently under construction, with completion dates of 2018 and 2020 respectively. The Los Angeles Memorial Coliseum renovations are scheduled to begin in mid-2017 by USC. The Los Angeles Convention Center (LACOEX) remodel and additions are to begin in 2018. The proposed NBC / IBC center is set to be completed in 2019. All of these proposed venues will be renovated or completed with or without the Olympic Games being awarded. Olympic ceremonies could be held across two venues: the opening ceremony would begin at the Los Angeles Memorial Coliseum in Exposition Park to honor the legacy of the Olympics in Los Angeles and then transfer to the new Los Angeles Stadium at Hollywood Park in Inglewood to proceed with the parade of athletes, oaths, traditional Olympic protocol and the lighting of a cauldron. LA 2024 bid leaders wish to use the new LA stadium to dispel negative perceptions about using the LA Memorial Coliseum for a third Olympics. They also cite ticket sales at both sites as extra cash flow for the committee. The LA 2024 team also stated that, if chosen, it would reverse the order for the closing ceremony, starting at LA Stadium and closing the show at the LA Coliseum. Football venues will be situated within Los Angeles and in other parts of California, to be determined; according to LA2024.org, there will be eight venues within the State of California, probably spread across four municipalities. Piggyback Yard, a rail yard along the LA River, was the original proposed location for the Olympic Village. It would have been an entirely new residential development that would provide permanent housing after the games.
The plan was abandoned and UCLA was chosen as the new proposed location.
who stole the cookie from the cookie jar activities
Who stole the cookie from the cookie jar? - Wikipedia "Who Stole the Cookie from the Cookie Jar? '' or the Cookie Jar Song is a children 's sing - along song and game. The song is an infinite - loop motif, where each verse directly feeds into the next. The game begins with the children sitting or standing, arranged in an inward - facing circle. The song usually begins with the group leader asking who stole a cookie from an imaginary (or sometimes real) cookie jar, followed by the name of one of the children in the circle. The child questions the "accusation, '' answered by an affirmation from the "accuser, '' followed by continued denial from the "accused. '' The accuser asks who stole the cookie, followed by the name of another child in the circle. The call - and - answer is potentially infinitely recursive, limited only by the number of participants or the amount of time the participants wish to spend on it. Sometimes, a clapping or snapping beat is used by the children in the circle. Sometimes, the other children in the group sing along with the "accuser '' after the "accused '' has been identified. Some variations include teachers using the song as a lesson in keeping a beat and in improvisation. As with many children 's songs, there can be many variations on the execution of the performance. The song 's lyrics are usually: "Who stole the cookie from the cookie jar? (Name) stole the cookie from the cookie jar! '' "Who, me? '' "Yes, you! '' "Could n't be! '' "Then who? '' This is followed by the "accused '' saying the name of someone else, as "(name of a child in the circle) stole the cookie from the cookie jar, '' and the subsequent back - and - forth lines are repeated. The song may be repeated ad infinitum or it may end -- if it is being performed as part of a game, where members of the group are eliminated by failing to keep up with the prescribed beat or eliminated as a result of being chosen as one of the accused. The song, and game, is featured as one of the sequences in Grandpa 's Magical Toys; the only accusation that is missing is that of the Dutch Girl, and she reveals that she got all the toys and the last cookie. The song was also performed in three episodes of Barney & Friends. A variation of the song has been used in Smart4life commercials in the US beginning in 2009. The song was also used in the Family Guy episode "When You Wish Upon a Weinstein. '' The song was used in The Simpsons episode "Kamp Krustier '', where Chief Wiggum arrests two kids after they sing it in a group activity.
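The call - and - answer structure described above amounts to a simple loop in which each verse selects the next accused player. As a purely illustrative sketch (not part of the article), the following Python snippet, using hypothetical player names and a rounds parameter, plays a fixed number of verses of the otherwise endless game:

import random

def cookie_jar_game(players, rounds=3):
    # Pick the first "accused" at random, as the group leader would.
    accused = random.choice(players)
    for _ in range(rounds):
        print("Who stole the cookie from the cookie jar?")
        print(accused + " stole the cookie from the cookie jar!")
        print(accused + ": Who, me?")
        print("Everyone: Yes, you!")
        print(accused + ": Couldn't be!")
        print("Everyone: Then who?")
        # Each verse feeds directly into the next: the accused
        # passes the accusation to another player in the circle.
        accused = random.choice([p for p in players if p != accused])

cookie_jar_game(["Ada", "Ben", "Cleo"])

In the real game the loop has no fixed bound; the rounds parameter here merely stands in for the players ' decision to stop.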
what is the deal with yahoo and oath
Oath Inc. - Wikipedia Oath Inc. (stylized as Oath:) is a subsidiary of Verizon Communications that serves as the umbrella company of its digital content subdivisions, including AOL and Yahoo!. Verizon acquired AOL on June 23, 2015 and Yahoo! 's operating business on June 13, 2017. Within Oath, AOL and Yahoo! maintain their respective brands. Tim Armstrong, Oath 's former CEO, said the new company name was chosen to convey Oath 's commitment to the digital media business. Oath Inc. is a subsidiary of Verizon Communications. It is part of Verizon 's Media and Telematics division. The company maintains dual headquarters in the former AOL and Yahoo! headquarters in Manhattan, New York, and Sunnyvale, California. Oath has offices elsewhere throughout the United States, in addition to Australia, Belgium, Brazil, Canada, China, Denmark, France, Germany, Hong Kong, India, Ireland, Israel, Italy, Japan, Luxembourg, Netherlands, New Zealand, Norway, Singapore, South Korea, Spain, Sweden, Taiwan, Thailand, and the United Kingdom. Tim Armstrong, AOL 's former CEO, was selected as Oath 's chief executive. As of June 2017, Oath employs about 12,000 people. Verizon announced a $4.4 billion deal to acquire AOL in May 2015. The deal was an effort by Verizon to expand its technology and media offerings. The deal officially closed a month later. A year after the completion of the AOL acquisition, Verizon announced a $4.8 billion deal for Yahoo! 's core internet business, looking to invest in the internet company 's search, news, finance, sports, video, email and Tumblr products. In September and December 2016, Yahoo! announced two major internet security breaches affecting more than a billion customers. As a result, Verizon lowered its offer for Yahoo! by $350 million to $4.48 billion. Two months before closing the deal for Yahoo!, Verizon announced it would place Yahoo! and AOL under the Oath umbrella. The deal closed on June 13, 2017, and Oath was launched. Upon completion of the deal, Yahoo! CEO Marissa Mayer resigned. Yahoo! operations not acquired in the deal were renamed Altaba, a holding company whose primary assets are its 15.5 percent stake in Alibaba Group and 35.5 percent stake in Yahoo! Japan. After the merger, Oath cut 15 percent of the Yahoo - AOL workforce. In April 2018, Helios and Matheson acquired the movie listings website Moviefone from Oath. As part of the transaction, Verizon took a stake in MoviePass stock. In May 2018, Verizon and Samsung agreed to terms that would preload four Oath apps onto Samsung Galaxy S9 smartphones. The agreement includes Oath 's Newsroom, Yahoo Sports, Yahoo Finance, and go90 mobile video apps, and the deal includes integration of native ads from Oath into both the Oath apps and Samsung 's own Galaxy and Game Launcher apps. On September 12, 2018, it was announced that K. Guru Gowrappan would succeed Tim Armstrong as CEO, effective October 1. Digital brands under Oath include AOL, Yahoo!, Tumblr, Yahoo Sports, Yahoo Finance and go90. Verizon has partial ownership of Moviefone 's parent company, Helios and Matheson Analytics Inc.
who sang the only fools and horses theme song
Only Fools and Horses - wikipedia Only Fools and Horses is a British television sitcom created and written by John Sullivan. Seven series were originally broadcast on BBC One in the United Kingdom from 1981 to 1991, with sixteen sporadic Christmas specials until its end in 2003. Episodes are regularly repeated on the UKTV channels Gold and Yesterday, and occasionally on BBC One. Set in Peckham in south - east London, it stars David Jason as ambitious market trader Derek "Del Boy '' Trotter, Nicholas Lyndhurst as his younger brother Rodney Trotter, and Lennard Pearce as their elderly Grandad. After Pearce 's death in 1984, his character was replaced by Del and Rodney 's Uncle Albert (Buster Merryfield), who first appeared in February 1985. Backed by a strong supporting cast, the series follows the Trotters ' highs and lows in life, in particular their attempts to get rich. The show achieved consistently high ratings, and the 1996 episode "Time on Our Hands '' (the last episode to feature Uncle Albert) holds the record for the highest UK audience for a sitcom episode, attracting over 24.3 million viewers. Critically and popularly acclaimed, the series received numerous awards, including recognition from BAFTA, the National Television Awards and the Royal Television Society, as well as winning individual accolades for both Sullivan and Jason. It was voted Britain 's Best Sitcom in a 2004 BBC poll. The series influenced British culture, contributing several words and phrases to the English language. It spawned an extensive range of merchandise, including books, videos, DVDs, toys, and board games. A spin - off series, The Green Green Grass, ran for four series in the UK from 2005 to 2009. A prequel, Rock & Chips, ran for three specials in 2010 and 2011. A special Sport Relief episode aired in March 2014, guest starring David Beckham. Derek "Del Boy '' Trotter (played by David Jason), a fast - talking, archetypal South London ' fly ' trader, lives in a council flat in a high - rise tower block, Nelson Mandela House, in Peckham, South London, with his much younger brother, Rodney Trotter (Nicholas Lyndhurst), and their elderly Grandad (Lennard Pearce). Their mother, Joan, died when Rodney was young, and their father Reg absconded soon afterwards, so Del became Rodney 's surrogate father and the family patriarch. Despite the difference in age, personality, and outlook, the brothers share a constant bond throughout. The series focuses primarily on their attempts to become millionaires through questionable get - rich - quick schemes and by buying and selling poor - quality and illegal goods. They have a three - wheeled Reliant Regal van and trade under the name of Trotters Independent Traders, mainly on the black market. Initially, Del Boy, Rodney and Grandad were the only regulars, along with the occasional appearances of roadsweeper Trigger (Roger Lloyd - Pack) and pretentious used car salesman Boycie (John Challis). Over time, the cast expanded, mostly in the form of regulars at the local pub The Nag 's Head. These included pub landlord Mike (Kenneth MacDonald), lorry driver Denzil (Paul Barber), youthful spiv Mickey Pearce (Patrick Murray) and Boycie 's flirtatious wife Marlene (Sue Holderness). As the series progressed, the scope of the plots expanded. Many early episodes were largely self - contained, with few plot - lines mentioned again, but the show developed a story arc and an ongoing episodic dimension.
After Grandad died (following the death of actor Lennard Pearce), his younger brother Uncle Albert (Buster Merryfield) appeared and moved in with Del and Rodney. After years of searching, both Del and Rodney find long - term love, in the form of Raquel (Tessa Peake - Jones) and Cassandra (Gwyneth Strong) respectively; Del also has a son with Raquel, Damien (played by five actors, most recently Ben Smith). Rodney and Cassandra marry, separate and then get back together again. Cassandra miscarries, but then she and Rodney eventually have a baby. Rodney finds out who his real father was. The Trotters finally become millionaires, lose their fortune, and then regain some of it. In 1980, John Sullivan, a scriptwriter under contract at the BBC, was already well known as the writer of the successful sitcom Citizen Smith. It came to an end that year and Sullivan was searching for a new project. An initial idea for a comedy set in the world of football was rejected by the BBC, as was his alternative idea, a sitcom centring on a cockney market trader in working class, modern - day London. The latter idea persisted. Through Ray Butt, a BBC producer and director whom Sullivan had met and become friends with when they were working on Citizen Smith, a draft script was shown to the Corporation 's Head of Comedy, John Howard Davies. Davies commissioned Sullivan to write a full series. Sullivan believed the key factor in its being accepted was the success of ITV 's new drama, Minder, a series with a similar premise and also set in modern - day London. Sullivan had initially given the show the working title Readies. For the actual title, he intended to use Only Fools and Horses, a reference to the protagonist 's tax - and work - evading lifestyle. That name was based on a genuine, though very obscure, saying, "only fools and horses work for a living '', which had its origins in 19th - century American vaudeville. Only Fools and Horses had also been the title of an episode of Citizen Smith, and Sullivan liked the expression and thought it was suited to the new sitcom. He also thought longer titles would attract attention. He was first overruled on the grounds that the audience would not understand the title, but he eventually got his way. Filming of the first series began in May 1981, and the first episode, "Big Brother '', was transmitted on BBC1 at 8.30 pm on 8 September that year. It attracted 9.2 million viewers and generally received a lukewarm response from critics. The viewing figures for the whole first series, which averaged around 7 million, were considered mediocre. The costumes for the first series were designed by Phoebe De Gaye. Del 's attire was inspired by her visits to car boot sales. The budget was limited, and she took David Jason shopping in Oxford Street, and had him try a variety of suits. De Gaye purchased some gaily coloured Gabicci shirts, which were fashionable at the time and she thought "horrible ''. Del 's rings and bracelet were made of fake gold and came from Chapel Market. Rodney 's combat jacket came from the BBC 's Costume Department, and De Gaye added a Yasser Arafat scarf purchased from Shepherd 's Bush Market. De Gaye used Vaseline, make - up, and food to make Grandad 's costume look dirty. The idea was that he never had his hat off, never dressed properly, and usually had pyjamas underneath his clothes, which would be dirty. A second series was commissioned for 1982, and it fared a little better.
However, both the first and second series had a repeat run in June 1983 in a more low - key time slot, and attracted high enough viewing figures for Davies to commission a third series. From there, the show began to top the television ratings. Viewing figures for the fourth series were double those of the first. In early December 1984, during the filming of Series 4, Lennard Pearce suffered a heart attack and was taken to hospital. He died on 16 December, the day he was due to resume filming. Sullivan got to work on the episode "Strained Relations '', which featured the family 's goodbye to Grandad. According to Sullivan, recasting Grandad was considered disrespectful to Pearce by the team, so it was decided that another older family member would be introduced. Buster Merryfield was then cast as Grandad 's brother Albert. The scenes of "Hole in One '' that featured Pearce were re-filmed with Merryfield. Midway through the filming of the fifth series, Jason told Sullivan that he wished to leave the show in order to further his career elsewhere. Sullivan thus wrote "Who Wants to Be a Millionaire? '', which was intended to be the final episode and would see Del accepting a friend 's offer to set up business in Australia, leaving Rodney and Albert behind. Plans were made for a spin - off entitled Hot - Rod, which would have followed Rodney 's attempts to survive on his own with help from Mickey Pearce, while leaving open the prospect of Del 's return. Jason then changed his mind, and the ending of the episode was changed to show Del rejecting the offer. Sullivan had a tendency to write scripts that were too long, meaning a lot of good material had to be cut. Shortly before filming of the sixth series began, he and Jason requested that the show 's time slot be extended, and it was agreed to extend its running time to 50 minutes. This required a 40 per cent increase in the show 's budget, and coincided with the show becoming one of the BBC 's most popular programmes, according to producer Gareth Gwenlan. Robin Stubbs became the costume designer for the sixth series, and was responsible for getting Del 's attire to match his new yuppy image. His new suits cost around £ 200 each and were purchased from Austin Reed in Regent Street. The rest came from stores such as Tie - Rack and Dickins and Jones. His jewellery was replaced each series because it was very cheap (the rings with "D '' cost 50p each). The seventh series aired in early 1991. Jason and Sullivan were involved with other projects, and it was confirmed that there were no plans for a new series. Despite this, the show continued in Christmas specials until 1993. Sullivan nonetheless wanted a final episode to tie up the show. In late 1996, three more one - hour episodes were filmed, to be broadcast over Christmas 1996. All three were well received and, due to the ending, were assumed to be the last. The show made a return at Christmas 2001 with the first of three new episodes, which were shot together but ultimately broadcast over three consecutive Christmases from 2001 to 2003. Despite rumours of further episodes, in a 2008 interview Sullivan was quoted as saying: "There will not be another series of Only Fools And Horses. I can say that. We had our day, it was wonderful but it is best to leave it now ''. Though Sullivan died in 2011, the show returned for a special Sport Relief episode in 2014.
The most frequent roles for guest actors in Only Fools and Horses were as Del or Rodney 's once - seen girlfriends, barmaids at the Nag 's Head, or individuals the Trotters were doing business with. Del and Rodney 's deceased mother, Joan, though never seen, cropped up in Del 's embellished accounts of her, or in his attempts to emotionally blackmail Rodney. Her grave -- a flamboyant monument -- was seen occasionally. Their absent father, Reg, appeared once in "Thicker Than Water '' (played by Peter Woodthorpe), before leaving under a cloud, never to be seen again. Other members of the Trotter family were rarely sighted, the exceptions being the woman they believe to be Auntie Rose (Beryl Cooke) in "The Second Time Around '', and cousins Stan and Jean (Mike Kemp and Maureen Sweeney), who attended Grandad 's funeral. When Rodney met Cassandra, her father Alan Parry (Denis Lill) became a recurring character. Raquel 's parents, James and Audrey (Michael Jayston and Ann Lynn), appeared in "Time On Our Hands '', and it was James who discovered the antique watch which made the Trotters millionaires. In some episodes, a guest character was essential to the plot. Del 's ex-fiancee Pauline (Jill Baker) dominated Del 's libido in "The Second Time Around '', prompting Rodney and Grandad to leave. In "Who Wants to Be a Millionaire '', Del 's old business partner Jumbo Mills (Nick Stringer) wanted Del to return with him to Australia and restore their partnership, forcing Del to make a decision. An attempt by Lennox (Vas Blackwood) to rob a local supermarket set up the "hostage '' situation in "The Longest Night ''. Del and Rodney spent the whole of "Tea for Three '' battling each other for the affections of Trigger 's niece Lisa (Gerry Cowper). Abdul (Tony Anholt) in "To Hull and Back '' and Arnie (Philip McGough) in "Chain Gang '' were responsible for setting up dubious enterprises involving the Trotters in their respective episodes. Tony Angelino (Philip Pope), the singing dustman with a speech impediment, was the key to the humour and the storyline of "Stage Fright '', and EastEnders actor Derek Martin guest - starred in "Fatal Extraction ''. Del 's nemesis from his school days, corrupt policeman DCI Roy Slater (played by Jim Broadbent), made three appearances, in "May The Force Be With You '', "To Hull and Back '' and "Class of ' 62 ''. Feared local villains, the Driscoll Brothers (Roy Marsden and Christopher Ryan), featured once, in "Little Problems '', but were mentioned in two previous episodes ("Video Nasty '' and "The Frog 's Legacy ''), and are important in the story of The Green Green Grass. A grown - up Damien (Douglas Hodge) appeared in "Heroes and Villains ''. Rodney and Mickey 's friends, the smooth - talking Jevon (Steven Woodcock) and then, briefly, Chris (Tony Marshall), a ladies ' hairdresser, featured sporadically during the sixth and seventh series and the intervening Christmas specials. The two - part 1991 Christmas special, "Miami Twice '', saw Richard Branson and Barry Gibb make cameo appearances. Mike Read appeared as himself, hosting an episode of "Top Of The Pops '', in "It 's Only Rock and Roll '', and Jonathan Ross appeared as himself in "If They Could See Us Now ''.
While their characters were less significant, well - known actors who played cameos in the programme included Joan Sims, best known for her numerous roles in the Carry On films, who guest - starred in the feature - length episode "The Frog 's Legacy '' as an aunt of Trigger and an old friend of Del 's late mother; Hollywood star David Thewlis, who played a young wannabe musician in "It 's Only Rock and Roll ''; and John Bardon, who played the role of Jim Branning in the soap opera "EastEnders '', as the supermarket security officer in "The Longest Night ''. Walter Sparrow, who appeared as Dirty Barry in "Danger UXD '', went on to appear in several Hollywood films. Sixty - three episodes of Only Fools and Horses, all written by John Sullivan, were broadcast on BBC1 between 8 September 1981 and 25 December 2003. The show was aired in seven series (1981 -- 83, 1985 -- 86, 1989 and 1990 -- 91), and thereafter in sporadic Christmas special editions (1991 -- 93, 1996, 2001 -- 03). Earlier episodes had a running time of 30 minutes, but this was extended from Series Six (1989) onwards, with subsequent episodes running from 50 to 95 minutes. Several mini-episodes were produced. An eight - minute episode aired on 27 December 1982 as part of a show hosted by Frank Muir, The Funny Side of Christmas, and attracted 7.2 million viewers. A 5 - minute spoof BBC documentary was shown on Breakfast Time on 24 December 1985, with Del being investigated by a BBC consumer expert. An educational episode named "Licensed to Drill '', in which Del, Rodney and Grandad discuss oil drilling, was recorded in 1984 but only shown in schools. A 5 - minute 1990 -- 91 Persian Gulf War special (dated 1 December 1990) has Del, Rodney, and Albert convey a message to British troops serving in the conflict. It has never been broadcast commercially, but a copy exists at the Imperial War Museum, London. A Comic Relief special showing Del, Rodney and Albert making an appeal for donations was shown on 14 March 1997, with 10.6 million viewers. A Sport Relief special was aired on 21 March 2014. Only Fools and Horses had two producers: Ray Butt from 1981 to 1987, and Gareth Gwenlan thereafter. Seven directors were used: Martin Shardlow directed all episodes in series one, Bernard Thompson directed the 1981 Christmas special, Susan Belbin series four, and Mandie Fletcher series five. Butt directed series two and three, as well as the 1985, 1986 and 1987 Christmas specials. Tony Dow became the established director after 1988, directing all subsequent episodes bar the first part of Miami Twice, which was directed by Gareth Gwenlan. John Sullivan was executive producer on seven of the final eight episodes. Only Fools and Horses is one of the UK 's most popular sitcoms. It was among the ten most - watched television shows of the year in the UK in 1986, 1989, 1990, 1991, 1992, 1993, 1996, 2001, 2002 and 2003. The 1996 Christmas trilogy of "Heroes and Villains '', "Modern Men '' and "Time On Our Hands '' saw the show 's peak. The first two attracted 21.3 million viewers, while the third episode -- at the time believed to be the final one -- got 24.3 million, a record audience for a British sitcom. Repeat episodes also attract millions of viewers, and the BBC has received criticism for repeating the show too often. Only Fools and Horses won the BAFTA award for best comedy series in 1985, 1988 and 1996, was nominated in 1983, 1986, 1989, 1990 and 1991, and won the audience award in 2004.
David Jason received individual BAFTAs for his portrayal of Del Boy in 1990 and 1996. The series won a National Television Award in 1997 for most popular comedy series; Jason won two individual awards, in 1997 and 2002. At the British Comedy Awards, the show was named best BBC sitcom for 1990, and received the People 's Choice award in 1997. It also won the Royal Television Society best comedy award in 1997 and two Television and Radio Industries Club Awards for comedy programme of the year, in 1984 and 1997. John Sullivan received the Writers ' Guild of Great Britain comedy award in 1997. The show regularly features in polls to find the most popular comedy series, moments and characters. It was voted Britain 's Best Sitcom in a 2004 BBC poll, and came 45th in the British Film Institute 's list of the 100 Greatest British Television Programmes. It was 3rd in a subsequent viewers ' poll on the BFI website. Empire magazine ranked Only Fools and Horses # 42 on their list of the 50 greatest television shows of all time. It was also named the funniest British sitcom of all time through a scientific formula, in a study by Gold. Scenes such as Del Boy 's fall through a bar flap in "Yuppy Love '' and the Trotters accidentally smashing a priceless chandelier in "A Touch of Glass '' have become recognisable British comedy moments, invariably topping polls of comedy viewers. Del Boy was voted the most popular British television character of all time in a survey by Open... and came fourth in a Channel 4 list of Britain 's best - loved television characters. A Onepoll survey found that Only Fools and Horses was the television series Britons would most like to see return. Only Fools and Horses has separate theme songs for the opening and closing credits, "Only Fools and Horses '' and "Hooky Street '', respectively. The original theme tune was produced by Ronnie Hazlehurst and recorded on 6 August 1981 at Lime Grove Studios. Alf Bigden, Paul Westwood, Don Hunt, John Dean, Judd Proctor, Eddie Mordue, and Rex Morris were hired to play the music. The tune was changed after the first series; the new one was written by John Sullivan, who had disliked the original and whose new lyrics explained the show 's title, and was conducted by Hazlehurst. It was recorded at Lime Grove on 11 May 1982, with musicians John Horler, Dave Richmond, Bigden, and Proctor. Sullivan wanted the duo Chas & Dave to sing it, as they had enjoyed success with the "Rockney '' style, a mixture of rock n ' roll and traditional Cockney music, but this was not possible due to the chart success of their single Ai n't No Pleasing You. Sullivan was persuaded by Ray Butt to sing it himself. Despite the creation of a new theme tune, the original one remained in occasional use. Chas & Dave did later contribute to the show, performing the closing credits song for the 1989 episode "The Jolly Boys ' Outing ''. Both songs are performed by Sullivan himself, and not -- as is sometimes thought -- by Nicholas Lyndhurst. The opening credits see images of the three principal actors peel on and off the screen sequentially. These appear over a background of still photographs of everyday life in South London. The sequence was conceived by graphic designer Peter Clayton as a "metaphor for the vagaries of the Trotters ' lifestyle '', whereby money was earned and quickly lost again. Clayton had also considered using five - pound notes bearing Del 's face.
The action was shot manually frame by frame, and took around six weeks to complete. Clayton knew that it was important to have the characters established in the titles, and prepared a storyboard depicting his ideas using drawings. He photographed various locations with a photographer, and the titles were shot using a rostrum camera and not edited. Brian Stephens, a professional animator, was hired to create the labels ' movement. Clayton returned to the show when the titles were changed: for "Christmas Crackers, '' he re-cut the entire sequence and added Christmas items. Another change was made necessary by Lennard Pearce 's death and Buster Merryfield joining the cast, so the pictures of David Jason and Nicholas Lyndhurst were updated too. The sequence was shot on motor drive. The closing credits for the programme varied series by series. The first series used peeling labels featuring the names of the cast and crew, mirroring the opening sequence, but these had to be updated with every new episode, making the process very time - consuming; from the second series the credits switched to a standard rolling format. The third series featured additional symbols. For the fourth series, these designs were replaced with white lettering on a black background. The fifth series had a black and white background, but the sixth series reverted to the black one. For the seventh series, the credits scrolled against a freeze frame of the final scene. The original "Nelson Mandela House '' in the titles was Harlech Tower, Park Road East, Acton, London; since 1988, it was represented by Whitemead House, Duckmoor Road, Ashton, Bristol. In addition to its mainstream popularity, Only Fools and Horses has also developed a cult following. The Only Fools and Horses Appreciation Society, established in 1993, has a membership of around 7,000, publishes a quarterly newsletter, Hookie Street, and organises annual conventions of fans, usually attended by cast members. The Society has also organised an Only Fools and Horses museum, containing props from the series, including Del 's camel - hair coat and the Trotters ' Ford Capri. The show was named one of the top 20 cult television programmes of all time by TV critic Jeff Evans. Only Fools and Horses -- and consequently John Sullivan -- is credited with the popularisation in Britain of several words and phrases used by Del Boy regularly, particularly "Plonker '', meaning a fool or an idiot, and two expressions of delight or approval: "Cushty '' and "Lovely jubbly ''. The latter was borrowed from an advertising slogan for a popular 1960s orange juice drink, called Jubbly, which was packaged in a pyramid - shaped, waxed paper carton. Sullivan remembered it and thought it was an expression Del Boy would use; in 2003, the phrase was incorporated into the new Oxford English Dictionary. Owing to its exposure on Only Fools and Horses, the Reliant Regal van is now often linked with the show in the British media. The one used by the Trotters has attained cult status and is currently on display at the Cars of the Stars exhibition at the National Motor Museum, alongside many other vehicles from British and American television and movies, such as the Batmobile and the DeLorean from Back to the Future. Boxer Ricky Hatton, a fan of the show, purchased one of the original vans in 2004. Another of the vans used in the series was sold at auction in the UK for £ 44,000 in February 2007.
During the media frenzy surrounding The Independent 's revelations that the new bottled water Dasani, marketed by Coca - Cola, was in fact just purified tap water from Sidcup, mocking parallels were made with the Only Fools and Horses episode "Mother Nature 's Son '', in which Del sells tap water as "Peckham Spring ''. In the closing ceremony of the 2012 London Olympics, the Trotters ' yellow Reliant van appeared on stage, along with two characters dressed as Batman and Robin, a reference to the Only Fools and Horses episode "Heroes and Villains ''. A character is referred to as "a bit of a Del Boy '' in the Doctor Who episode "Father 's Day ''. Four episodes ("The Long Legs of the Law '', "A Losing Streak '', "No Greater Love '' and "The Yellow Peril '') were re-edited for audio purposes and released on audio cassette on 12 October 1998. The cassette was re-released in October 2000. A 4 - minute show named "Royal Variety Performance '' was shown on 27 November 1988 (viewed by 18.14 million people) and had Del, Rodney, and Albert appear on the Royal Variety Show. It was staged on 24 November 1988, and the plot saw David Jason, Nicholas Lyndhurst and Buster Merryfield appear on stage in character, thinking that they were delivering boxes of alcohol to an associate of Del 's, only later realising where they actually were. They also mistook the Duchess of York for Del 's associate. The idea of an Only Fools and Horses stage show was mooted by Ray Butt, following the success of stage versions of other sitcoms such as Dad 's Army and Are You Being Served?. Sullivan was n't keen, owing to his work on Just Good Friends as well as Only Fools and Horses, and his inexperience with the theatre, so nothing came of it. A six - part documentary series titled "The Story of Only Fools and Horses '' began on Gold on 29 August 2017, shown over six weeks. The series features rare and unseen footage from the Trotter archives and specially re-created moments from Del Boy 's family and friends. Only Fools and Horses was sold to countries throughout the world. Australia, Belgium, Cyprus, Greece, Ireland, Israel, Malta, New Zealand, Pakistan, Portugal, South Africa, Spain and Yugoslavia are among the countries that purchased it. In all former Yugoslav countries in which Serbo - Croatian is spoken, the title was Mućke (or Мућке in Cyrillic script), which can roughly be translated as "shady deals. '' In Macedonia, the title is Spletki (Сплетки in Cyrillic). In Slovenia, however, the show was titled Samo bedaki in konji, a literal Slovenian translation of the original English title. The show has enjoyed particular popularity in the former Yugoslavia, and is regarded as a cult series in Croatia, Montenegro, Serbia and Slovenia. A number of overseas re-makes have also been produced. A Dutch version aired for one series in 1995, entitled Wat schuift 't? (What 's it worth?). The Trotters were renamed the Aarsmans, and it starred Johnny Kraaykamp jnr. as Stef (Del), Sacco Van der Made as Grandad and Kasper van Kooten as Robbie (Rodney), and was shown on RTL 4. A Portuguese re-make, O Fura - Vidas, a local expression for someone who lives outside the law, ran for three series from 1999 to 2001. It was a literal translation of the British version, with all episodes based on the originals. It centred on the Fintas family, who live in Sapadores, a neighbourhood in Lisbon, and starred Miguel Guilherme as Quim (Del), Canto e Castro as Grandad, and Ivo Canelas as Joca (Rodney).
In this Portuguese version the Reliant 's equivalent was a 1988 Suzuki Super Carry. A Slovenian re-make, called Brat bratu (Brother to Brother), was broadcast from 2008 to 2009. All episodes were based on the original British storylines, and it was made in co-operation with John Sullivan. It featured brothers Brane (Brane Šturbej) and Bine (Jure Drevenšek), who moved from Maribor to Ljubljana. The series also stars Peter Ternovšek as Grandad. It was directed by Branko Đurić. The series was cancelled after thirteen episodes due to poor ratings. There have been several plans to produce an American version. One was to be a star vehicle for former M * A * S * H actor Harry Morgan, with Grandad rather than Del becoming the lead character. The other, entitled This Time Next Year..., would have seen the Trotters renamed the Flannagans. A draft script was written for the latter, but neither show materialised. In 2010 Steve Carell, star of the US version of The Office, expressed an interest in making an American version of the series, with him to star as Del Boy. In January 2012 US network ABC commissioned a pilot of an Only Fools and Horses remake titled "King of Van Nuys '', written by Scrubs writers Steven Cragg and Brian Bradley. It was developed, rejected and then redeveloped -- only to be rejected again later in the year. The pilot starred John Leguizamo as Del, Dustin Ybarra as his brother Rodney and Christopher Lloyd as Grandad. A parody called Only Jerks and Horses was written by David Walliams and Matt Lucas and directed by Edgar Wright in 1997. A spin - off of Only Fools and Horses entitled The Green Green Grass, also written by John Sullivan and directed by Tony Dow, was first aired in the UK in September 2005. Sullivan had considered writing a sitcom around the popular characters of Boycie and Marlene (John Challis and Sue Holderness) since the mid-1980s, but it was not until the series finally ended that the idea came to fruition. The Green Green Grass sees Boycie and Marlene forced to leave Peckham by one - time Only Fools and Horses villains, the Driscoll Brothers, and has included guest appearances by Denzil (Paul Barber) and Sid (Roy Heather). A second series of The Green Green Grass was broadcast in the UK in October 2006, a third in November 2007 and a fourth in January 2009. In 2003, it was reported that Sullivan was developing a prequel to the original series, Once Upon a Time in Peckham, which would feature Del as a youngster in the 1960s, and have a prominent role for his parents. In 2009, it was again reported that the BBC were considering commissioning the show, although nothing was confirmed. On 5 April 2009, Sullivan told The Mail on Sunday that he was planning a prequel to Only Fools and Horses which would star Nicholas Lyndhurst as Freddie "The Frog '' Robdal, a local criminal and Rodney 's biological father; Robdal was the focus of the episode "The Frog 's Legacy ''. On 3 July 2009, the BBC revealed that the title of the spin - off would be Sex, Drugs & Rock ' n ' Chips, and would be a 90 - minute comedy drama. The title was subsequently changed to Rock & Chips. Filming began in August 2009, and it was shown on BBC One at 9pm on 24 January 2010. In October 2009 it was confirmed that Lyndhurst would star as Robdal. The Inbetweeners and Off The Hook actor James Buckley played the role of the young Del Boy. Only Fools and Horses spawned many merchandising spin - offs. 
Several books have been published, such as "The Only Fools and Horses Story '' by Steve Clark and "The Complete A-Z of Only Fools and Horses '' by Richard Webber, both of which detail the history of the series. The scripts have been published in a three - volume compendium, "The Bible of Peckham ''. It has been released on VHS, DVD and audio CD in several guises. A DVD collection containing every episode was issued, along with various other special edition box - sets, such as a tin based on their Reliant Regal. Videos and DVDs of Only Fools and Horses continue to be among the BBC 's biggest - selling items, having sold over 6 million VHS copies and 1 million DVDs in the UK. Two board games based on the show were released: a Monopoly - style game, the "Trotters Trading Game '', in which participants attempt to emulate the Trotters and become millionaires, and another game set in their local pub, entitled the "Nag 's Head Board Game ''. In October 2015, He Who Dares..., a fictional autobiography, was released by Ebury Press. The book was written with the backing of John Sullivan 's estate.
who is nick with on young and the restless
Nicholas Newman - wikipedia Nicholas Newman is a fictional character from the American CBS soap opera The Young and the Restless. Created and introduced by William J. Bell, he was born onscreen in 1988 as the second child of supercouple characters Victor and Nikki Newman. The character was portrayed by a set of twins and later by two child actors for his first six years, before the writers of the series decided to rapidly age him to a teenager in the summer of 1994. That June, Joshua Morrow began portraying Nick, and has remained in the role to present time. The character was reintroduced with the purpose of developing a relationship with another character, Sharon Collins, who was introduced in the same story arc. The pairing, which yielded three children, Cassie, Noah and Faith Newman, proved popular with viewers. They are regarded as a prominent supercouple by the soap opera media. In 2005, the character underwent a dramatic change in storyline when Nick and Sharon 's fourteen - year - old daughter Cassie was killed off following a car accident. This led Nick to seek comfort in the arms of another woman, Phyllis Summers, which resulted in an affair. Phyllis became pregnant with Nick 's child, Summer Newman. Nick divorced Sharon and married Phyllis the following year. For years, however, Nick was torn between his ongoing feelings for Sharon and Phyllis. His other storylines have included seeking independence from his powerful father, as well as multiple other relationships. In 2013, a storyline regarding Summer 's paternity was introduced, centered on speculation that she is actually the daughter of Jack Abbott, Phyllis ' ex-husband. Morrow regards Nick as a "good dude '' who has a "fiery '' protective personality, which has developed into a "more mature '' persona as the character grew. Nick and Morrow gained a significant amount of fan attention and popularity during the late 1990s, and the character is considered a long - time fan favorite by TV Guide. The pairing of Nick and Sharon has been met with notable success, listed as one of the genre 's best supercouples by The Huffington Post and SoapNet, among other sources. Morrow 's performance has been met with critical acclaim, garnering him consecutive Daytime Emmy Award nominations for Outstanding Younger Actor in a Drama Series from 1996 to 2000. Created and introduced by William J. Bell, the character of Nicholas Newman was born onscreen during the episode dated December 31, 1988. The character was portrayed by infant twins Marco and Stefan Flores in 1989, while child actor Griffin Ledner took over the following year, departing in January 1991. Child actor John Alden played the role of Nick from 1991 to 1994. On June 21, 1994, the producers of the series decided to rapidly age Nick to a teenager, ending speculation that another teenage child of Victor Newman (Eric Braeden) would be introduced to the soap opera. The character 's return as a teenager was written as him returning from boarding school after a long absence. Since then, the role has been portrayed by actor Joshua Morrow. Morrow had previously auditioned for the role of "Dylan '' on another CBS soap opera, The Bold and the Beautiful, making it to the final two casting options but losing the role to Dylan Neal. The network later asked him to read for the part of Nick, which he won. Morrow, who worked in a restaurant prior to debuting on The Young and the Restless, considered his change in profession a "quantum leap in careers ''.
He said that accepting the role was "a very easy decision '' considering how popular the show was. The Record newspaper described the role as being "tailor - made '' for Morrow. In 2002, Morrow signed a new contract with the soap opera that would ensure his portrayal of the role for an additional five years. His contract was considered "record - breaking '' at the time, as no soap opera contract had exceeded four years prior to this. Upon his contract 's expiration in 2007, Morrow agreed to an additional five years with The Young and the Restless. He stated he was "happy to know '' where he was going to be for the next five years, as well as expressing gratitude towards the soap opera for accommodating his living conditions and schedule. Nick is described by the soap opera 's official website as often thinking "with his heart instead of his head ''. When the teenaged version of Nick debuted on the series in 1994, he was characterized as a "frisky '' 16 - year - old "rich - kid '', with Morrow being able to "kick back and relax in typical teenager attire -- jeans, tees, etc ''. Morrow said "Nick 's eager and fun - loving... and that 's fun to play ''. Nick also has an "eye for the ladies ''. In 1999, Morrow told soap magazine Soap Opera Digest that Nick has a "fiery personality '', and "if you attack him, he 's going to fight back ''. Morrow has insisted that he is not concerned about viewer backlash towards Nick, having stated: "My job is not to be palatable to the audience. I want to tell a convincing story. I have never been a man in love with two women. I can only imagine what that must be like. '' The actor felt that if he needed to "look like an ass and a bad guy '' to keep people interested in his portrayal, "so be it ''. He told Tommy Garrett of Canyon News: "I 'd rather be playing an imperfect good guy than a boring one. There is a lot of me in Nick, there is not that big of a stretch for me on a daily basis ''. A controversial storyline occurred in 2011 when, alongside his sisters Abby Newman (Marcy Rylan) and Victoria Abbott (Amelia Heinle), Nick sued his father. Morrow felt that Nick wanted to teach Victor a lesson, hoping he would "appreciate his children more ''. He also stated Nick wants to get respect, as he is n't "this young and impressionable boy who looks at his father in this hero light anymore. '' By the following year, Morrow opined that Nick is now "more mature '' with his decision - making, and that "He 's got a very practical view of the world, and I like that he 's kind of become a steady character on the show ''. He stated that although the part is n't as "splashy or flashy '' as that of Adam Newman (Michael Muhney), Nick 's onscreen brother, he "really '' likes Nick because "he 's a good dude '' and a "very supportive '' person. Since 1994, Nick, whose love life according to Morrow is "messy and convoluted '', has been romantically linked to his high - school sweetheart Sharon Collins (Sharon Case). As teenagers, they dream of eloping, and face major hurdles when Sharon 's ex-boyfriend Matt Clark (Eddie Cibrian) exposes her teenage motherhood (she had given her baby, fathered by a former boyfriend in Madison, up for adoption). Nick, who had believed Sharon to be a virgin, briefly breaks up with her, but they reunite. With Matt behind them, they marry in February 1996 and have their first child -- Noah (Robert Adamson) -- in 1997, the year Sharon is reunited with her daughter Cassie (Camryn Grimes).
Sharon 's best friend Grace Turner (Jennifer Gareis) seduces Nick, triggering another breakup; Sharon and Nick reunite during a custody battle for Cassie. Of their highly anticipated reunion, former head writer Kay Alden noted that during the previous several months "there 's been a lot of stop - and - go '' and "mixed messages '' between the couple, but their feelings were "definitely clear '' again; Morrow felt that Nick loved Cassie as much as he loved Noah. However, in 2002, Sharon has an affair with Diego Guittierez (Greg Vaughan), causing additional marital struggles for the pair. In May 2005, 14 - year - old Cassie is killed in a car accident. Nick becomes distant from Sharon, and cheats on her with Phyllis Summers (Michelle Stafford). The marriage ends the following year; Nick marries a pregnant Phyllis, who gives birth to their daughter, Summer Newman (Hunter King). Despite his marriage to Phyllis, Nick remains in love with Sharon, causing the love triangle between Nick, Sharon and Phyllis to reemerge in 2009. Nick was unable to forget Sharon, who began unraveling in her own life after a recent divorce. Morrow said Sharon "seems kind of vulnerable and susceptible to people saying things about her. So, Nick feels he has to protect her. '' However, this came at the expense of his marriage to Phyllis, which began falling apart. In January 2009, Nick and Sharon reunite at the Abbott cabin, an affair "much anticipated '' by viewers. According to Case, Sharon wanted Nick "more than anyone '' but only if he left Phyllis; Morrow told TV Guide that although the reunion was "messy '', fans "wanted this for a long time '', sure that they would become a solid couple again. After another one - night stand, Sharon becomes pregnant; when Summer becomes ill and Phyllis needs Nick, Sharon briefly lies that her ex-husband Jack Abbott (Peter Bergman) is the baby 's father. Nick soon finds out, but their daughter Faith is kidnapped at birth by Nick 's brother, Adam Newman (Michael Muhney), and given to Ashley Abbott (Eileen Davidson). Sharon, believing Faith has died, seduces a guilty Adam. Faith is eventually reunited with her parents, and Adam 's crimes are revealed. Nick and Sharon are briefly engaged in November 2010, but they break up when she sleeps with Adam and their relationship becomes bitter. Although Case and Morrow believe the couple wo n't "ever be over '' and "belong together '', they are not currently together and may not reunite "anytime soon ''. In 2005, former "good - girl '' Cassie becomes a rebellious teenager; Nick and Sharon have a difficult time dealing with her. Cassie has a crush on bad - boy Daniel Romalotti (Michael Graziadei), who is dating Lily Winters (Christel Khalil). One night, against her parents ' wishes, she sneaks out to a party. In a ploy to impress a drunken Daniel, she attempts to drive him home, despite being underage. The car crashes, leaving them with no memory of the accident. Daniel is thought to have been driving, and is blamed for the accident. A weakened Cassie escapes from the hospital to find Daniel and tell him she was driving; she is returned to the hospital. During the episode dated May 24, Cassie dies with Nick and Sharon at her side. Grimes said she did not think Cassie would die, but if it was "meant to happen, it 's meant to happen ''. The actress found filming her last scenes with Case and Morrow "ridiculously hard ''.
Daniel is cleared of all charges and Nick and Sharon begin Cassie 's Foundation, a movement to prevent teenage drinking and driving. Following her death, the strong relationship between Nick and Sharon changes forever. Weighing in on the "groundbreaking '' and "shocking '' plot, Morrow stated, "As a person who watched Camryn grow up, I was devastated and it was not that difficult to use those feelings in the scenes we had to play. But as an actor, I have to commend our producer and writers at that time because it gave the show a huge new storyline that years later we are still playing out ''. He felt that losing a child gave Nick "an edge and a change '' that affected him deeply. In 2012, during a court battle over Newman Enterprises (the family company), Nick tells a judge that Sharon 's mental problems date to Cassie 's death. Luke Kerr of Zap2it referred to this as playing the "Cassie card ''. Nick uses an affair with workmate Phyllis Summers (Michelle Stafford) to numb his emotions after Cassie 's tragic death. Phyllis and Nick go their separate ways as he reunites with Sharon, although he marries Phyllis when she becomes pregnant with his child. Their daughter, Summer Newman, is born in December 2006. Their three - year marriage eventually succumbs to Nick 's ongoing feelings for Sharon, which result in an affair and the birth of their daughter, Faith. Nick divorces Phyllis in 2010. They briefly reunite and remarry in 2012, although the marriage ends because of Phyllis ' adultery and lies. Morrow stated that "not many husbands would put up with what he has '' in his marriages to Phyllis. For years, it had been speculated that Summer was fathered by Jack, not Nick. When asked about this in 2010, Morrow stated: "The way Nick and Summer bond... Well, it 'll be tough if he reveals he 's not her dad now. But I 'm a betting man and I 'll bet anything that she 's Jack 's baby ''. In 2013, eighteen - year - old Summer (Hunter King) is intent on losing her virginity, and develops an interest in Jack Abbott 's son Kyle Abbott (Blake Hood). Responding to this, head writer Josh Griffith stated that they were "looking at the possibility of anything ''. A poll run by Daytime Confidential revealed that most viewers believed Summer to be Jack 's daughter, with 79 % of the vote. In May 2013, Morrow spoke with Michael Logan of TV Guide, revealing that the question of Summer 's paternity would be revisited, as Nick 's "game - changing deception '' would be uncovered. It was later revealed onscreen that Summer may in fact not be his daughter; the paternity results read only by Nick years ago were inconclusive. Logan called it a "messy development '', stating that his character may be "trashed forever ''. According to Morrow, after Cassie 's death, Nick withheld the fact that the results were corrupted because "he just could n't bear the thought of losing Summer, too '', though Morrow called the deception "out of character ''. "But now he 's freaking out because there 's a good likelihood that Summer and Kyle are brother and sister, '' he stated. Nick later secretly does another test using hair from Summer 's hairbrush, which comes back conclusive. Later that May, Morrow stated during an interview with TV Buzz that Nick would be devastated by the paternity test outcome, which confirms that Summer is in fact Jack 's daughter. He stated that Nick is "terrified '' of the potential repercussions and the reactions from Jack, Sharon, Noah, and most importantly Summer.
However, months later it was revealed that the paternity test results were actually switched by Nick 's ex-wife Sharon Newman (Sharon Case), while she was off her bipolar medication, in a ploy to win Nick back, meaning that he is in fact Summer 's biological father after all. Sharon 's secret has yet to be revealed. Speaking of the forthcoming reveal, King stated, "I 'm very curious as to how they 're going to play it out with Summer. I 'm sure she 's still going to have the close bond that she now has with Jack. '' The marriage of business tycoon Victor Newman (Eric Braeden) and his socialite wife Nikki Newman (Melody Thomas Scott) deteriorates and they decide to divorce. However, they end up conceiving a child. A son, Nicholas, is born on New Year 's Eve of 1988. Nick grows up with two prominent men in his life: his father and his stepfather, Jack Abbott (Peter Bergman). Much like his older sister Victoria Newman (Amelia Heinle), Nick is sent to boarding school at a young age. In 1994, Nick returns to Genoa City as a typical sixteen - year - old. He briefly dates Amy Wilson (Julianne Morris), but soon falls for her best friend, new resident Sharon Collins (Sharon Case). Nick and Sharon fall deeply in love, as she earns Victor 's love and trust. However, Nikki is outraged, as she thinks Sharon, who is struggling financially, is using Nick to get to the Newman fortune. Nonetheless, the relationship continues to blossom. Nick believes that Sharon is a virgin and agrees to wait for her; however, her ex-boyfriend Matt Clark shows up in town to expose her secret. It is revealed that Sharon gave birth to a daughter at age sixteen, whom she gave up for adoption. Nick and Sharon briefly split because of this, but soon reunite. They continue to face problems, such as Sharon being raped by Matt. When Matt is shot and nearly killed, Nick becomes the main suspect and is sent to jail. He eventually returns home and marries Sharon. She quickly becomes insecure in her marriage and stops using birth control, which results in a pregnancy. Months later, their premature son, Noah Newman, is born. When it is feared that Noah will die at birth, Sharon 's best friend, Grace Turner, tracks down her long - lost daughter, Cassie. However, when Noah survives, Grace attempts to keep Cassie as her own, but soon relinquishes her back to Sharon. Nick develops a bond with Cassie and becomes her father figure. Sharon and Nick 's marriage weakens for months after he cheats on her with a manipulative Grace. However, they soon reunite during a custody battle over Cassie, in which they are declared her legal parents. Matt returns to Genoa City with a reconstructed face, rapes Sharon once more, and is eventually captured. When Sharon ends up pregnant, it is unknown who the father is: Nick or Matt. An argument with Nick causes Sharon to give birth to a stillborn daughter, who is later revealed to be Nick 's. Nick and Sharon 's marriage weakens further when they both cheat once more, Nick with Grace and Sharon with the Newman stableman Diego Guittierez (Greg Vaughan). Nick tries to forgive her, but when he witnesses Sharon and Victor kissing, things become harder to mend than ever. Sharon later runs away from town, depressed. When she returns, Nick accepts her and they reunite. However, Cameron Kirsten (Linden Ashby), a man who had an affair with Sharon and physically abused her, follows her to Genoa City, wanting to bed her again.
Eventually, Cameron 's schemes are foiled and Sharon and Nick return to their normal life at last. In 2005, a fourteen - year - old rebellious Cassie is killed in a car accident. Nick closes off his emotions and cheats on Sharon with his Newman Enterprises work friend Phyllis Summers. Phyllis ends up pregnant, with Jack or Nick as possible fathers. The original DNA test to determine the child 's paternity returns inconclusive, but the thought of losing another child leads Nick to lie and claim that he is Summer 's father. Nick and Sharon divorce after an eleven - year marriage and he marries Phyllis instead. Their daughter, Summer Newman, is born. Sharon also moves on with her life, marrying Jack, which makes Nick feel uncomfortable. In 2007, Nick 's flight on the Newman jet crashes and a body is never recovered; he is presumed dead, devastating his family and Sharon. Six months after being presumed dead, Nick is revealed to be alive, living with amnesia. Upon his return to Genoa City, he believes that Cassie is alive and that he is still married to Sharon; he kisses her, intent on winning her back. However, his memory soon returns, and he reunites with Phyllis and Summer. Jack, Phyllis, Nick and Sharon soon start a business venture, creating a new magazine entitled Restless Style. However, the four constantly clash, and Jack and Sharon are driven out of the company. Meanwhile, Nick and Sharon grow closer. While in Paris searching for a teenage Noah, Nick and Sharon share a kiss on a bridge, which is witnessed by Phyllis. Back in Genoa City, Nick and Sharon are snowed in at the Abbott cabin and end up making love. Their emotional affair continues, and they sleep together a second time three months later. This results in Sharon 's pregnancy. She gives birth to Faith Newman in September 2009. Faith is stolen at birth by Nick 's brother Adam Newman and presumed dead for months. She is eventually returned to her parents, after Sharon, unaware of Adam 's actions, develops a relationship with and marries the manipulative Adam. Nick 's marriage to Phyllis is over by this point. Nick and Sharon grow closer co-parenting Faith and eventually reunite. However, the reunion is short - lived, as she sleeps with Adam, unable to shake her deep - rooted feelings for him. Nick, disgusted with her actions, takes full custody of Faith when Sharon is unjustly arrested for murder, but continues to support her. Sharon is sentenced to life in prison and skips town, leaving everyone to believe she is dead. Nick begins a relationship with Diane Jenkins (Maura West), who is also developing a courtship with his father, Victor, at the same time. Diane is killed by Nikki in self - defense. Sharon returns and is freed with the help of lawyer Avery Bailey Clark (Jessica Collins). Nick has a short affair with Avery until he returns to Phyllis. Phyllis becomes pregnant once more, but soon miscarries the child. Nonetheless, Nick still marries her. However, the marriage is short - lived, ending just months later, with Nick moving on to a relationship with Avery. Nick begins a new phase in his life. He decides to end his career as a businessman at Newman Enterprises and open up a club, which he names The Underground. When Summer finds herself attracted to Jack 's son Kyle Abbott (Blake Hood), Nick panics, knowing that Summer may become involved with her biological brother. He has another DNA test performed, which appears to confirm that Nick is not Summer 's biological father.
However, in reality, Sharon (who has stopped taking medication to treat her bipolar disorder) altered the results in an attempt to get Nick to return to her. Nick confesses to Summer that Jack is her father. Sharon 's attempts fail, and Nick and Avery become engaged; Avery misses the wedding after tending to her mentally distraught ex-boyfriend Dylan McAvoy (Steve Burton), and the relationship ends. Nick grows closer with Sharon, who continues to withhold the truth. Sharon 's memories of changing the DNA results are wiped out after she undergoes electroconvulsive therapy to get rid of her "visions '' of Cassie 's ghost, which in actuality is Mariah Copeland (Camryn Grimes), a Cassie lookalike hired by Victor to haunt Sharon. TV Guide noted Morrow to be one of the soap opera 's longtime fan favorites. In Morrow 's early years on the show, The Chicago Tribune described the character as a "heartthrob ''. Also during his early tenure on the series, Liz Wilson of The Record wrote that Morrow is a "soap opera hunk '', noting his large group of "mostly young female fans ''. In July 1995, more than 12,000 fans of the show attended a "Men of Y&R '' event, which also featured fellow actors Eddie Cibrian, Shemar Moore and Don Diamont. The event organizer stated that it was a "mob scene '', as over 6,000 fans turned up to meet each actor. According to The Augusta Chronicle, by 1999, "Wedding proposals, prom invitations and underwear - tossing fans are a part of everyday life for soap opera star Joshua Morrow '', noting his fans to be "trembling '' and "starry - eyed '' in his presence. Nancy Dehart of The Spectator noted that his "mostly female '' fans "even seemed to know more about his soap opera life than he did '', quoting a fan exclaiming, "He 's so sexy, I 've got 100 pictures of him at home on my walls, '' at another event. Morrow revealed that his fans commonly refer to him as Nick, not Joshua, stating: "Fans see me everyday as Nick Newman, so I do n't fault them for calling me that (...) It 's important that they do realize it 's pretend, though ''. Michael Fairman of On - Air On - Soaps describes the character of Nick as "not - so - upstanding '', but has lauded Morrow as "one of the most underrated actors in daytime, who delivers real and honest performances day after day, no matter what the material ''. Luke Kerr of Zap2it wrote that Nick is "smooth, suave and did we mention rich? '', naming him the "Newman prince ''. Tommy Garrett of the website Highlight Hollywood has often lauded Morrow 's acting. Garrett stated that he "makes perfect choice daily as an actor '' and is "solid as a rock from outer space '' who "continues to shine '' on screen. In a separate article, Garrett referred to Nick as someone who is occasionally "out of control '' and "fights for what he wants '', stating that "Morrow is the man in control '' of his storyline, naming him "even more talented than any of daytime TV 's actors ''. The supercouple pairing of Nick and Sharon has been met with notable popularity among viewers as well as critics. SoapNet placed them on a list of "Best of 2000s Decade: Couples '', stating: "... They 've been apart since the death of their daughter, Cassie, but we all know that they belong together ''. On a list of "Greatest Soap Opera Supercouples '' by Kim Potts of The Huffington Post, Nick and Sharon were placed 16th. They are referred to by fans as "Shick '', a combination of Sharon and Nick 's names.
The couple 's onscreen reunion in 2009 allowed the show 's Nielsen ratings to rise by an average of 447,000 viewers, capitalizing on the pair 's popularity. Nick 's relationship with Phyllis has also gained a fan following, with fans referring to the pairing as "Phick ''. Morrow 's performance earned him five consecutive Daytime Emmy Award nominations for Outstanding Younger Actor from 1996 to 2000. However, the actor has won two Soap Opera Digest Awards, one for "Outstanding Younger Newcomer '' in 1996 and another for "Outstanding Hero '' in 2001. Amid critical praise for Cassie 's storyline in 2005, Dan J. Kroll of the website Soap Central noted that it had "widely been expected '' that Morrow and Case would receive nominations for Outstanding Lead Actress and Actor at the 33rd Daytime Emmy Awards; however, they were not nominated. Despite this, Grimes was nominated for the Outstanding Younger Actress award. The paternity storyline in which Summer was revealed to be Jack 's daughter was met with a negative response. Sara Bibel of the website Xfinity praised the acting performances but panned the writing, saying it "fell flat ''. Bibel wrote: "When a secret like this is released into the soap wild, its consequences should ricochet through the town. But this time the impact is minimal ''. Jamey Giddens of Zap2it stated that it was "sad to see a storyline that started with such huge, game - changing potential devolve into a head - scratching stink bomb ''.
is personal loan protection the same as ppi
Payment protection insurance - wikipedia Payment protection insurance (PPI), also known as credit insurance, credit protection insurance, or loan repayment insurance, is an insurance product that enables consumers to ensure repayment of credit if the borrower dies, becomes ill or disabled, loses a job, or faces other circumstances that may prevent them from earning income to service the debt. It is not to be confused with income protection insurance, which is not specific to a debt but covers any income. PPI was widely sold by banks and other credit providers as an add - on to the loan or overdraft product. Credit insurance can be purchased to insure all kinds of consumer loans, including car loans, loans from finance companies, and home mortgage borrowing. Credit card agreements may include a form of PPI cover as standard. Policies are also available to cover specific categories of risk, e.g. credit life insurance, credit disability insurance, and credit accident insurance. PPI usually covers payments for a finite period (typically 12 months). For loans or mortgages this may be the entire monthly payment; for credit cards it is typically the minimum monthly payment. After this point the borrower must find other means to repay the debt, although some policies repay the debt in full if the borrower is unable to return to work or is diagnosed with a critical illness. The period covered by insurance is typically long enough for most people to start working again and earn enough to service their debt. PPI is different from other types of insurance, such as home insurance, in that it can be quite difficult to determine whether it is right for a person. Careful assessment of what would happen in the event of unemployment is needed, as payments in lieu of notice (for example) may render a claim ineligible even though the insured person is genuinely unemployed. In this case, the approach taken by PPI insurers is consistent with that taken by the Benefits Agency in respect of unemployment benefits. Most PPI policies are not sought out by consumers. In some cases, consumers claim to be unaware that they even have the insurance. In sales connected to loans, products were often promoted by commission - based telesales departments. Fear of losing the loan was exploited, as the product was effectively cited as an element of underwriting. Any attention to suitability was likely to be minimal, if it existed at all. In all types of insurance some claims are accepted and some are rejected; notably, in the case of PPI, the number of rejected claims is high compared to other types of insurance. In the rare cases where customers are not prompted or pushed towards a policy but seek it out themselves, they may have little recourse if and when the policy does not benefit them. As PPI is designed to cover repayments on loans and credit cards, most loan and credit card companies sell the product at the same time as they sell the credit product. By May 2008, 20 million PPI policies existed in the UK, with a further 7 million policies a year being purchased thereafter. Surveys show that 40 % of policyholders claim to be unaware that they had a policy. "PPI was mis - sold and complaints about it mishandled on an industrial scale for well over a decade '', with this mis - selling carried out not only by the banks or providers, but also by third - party brokers.
The sale of such policies was typically encouraged by large commissions, as the insurance would commonly make the bank / provider more money than the interest on the original loan, such that many mainstream personal loan providers made little or no profit on the loans themselves; all or almost all profit was derived from PPI commission and profit share. Certain companies developed sales scripts which guided salespeople to say only that the loan was "protected '' without mentioning the nature or cost of the insurance. When challenged by the customer, they sometimes incorrectly stated that this insurance improved the borrower 's chances of getting the loan or that it was mandatory. A consumer in financial difficulty is unlikely to further question the policy and risk the loan being refused. Several high - profile companies have now been fined by the Financial Conduct Authority for the widespread mis - selling of payment protection insurance. Alliance and Leicester were fined £ 7m for their part in the mis - selling controversy; several others, including Capital One, HFC and Egg, were fined up to £ 1.1 m. Claims against mis - sold PPI have been slowly increasing, and may approach the levels seen during the 2006 - 07 period, when thousands of bank customers made claims relating to allegedly unfair bank charges. In their 2009 / 2010 annual report, the Financial Ombudsman Service stated that 30 % of new cases referred to payment protection insurance. A customer who purchases a PPI policy may initiate a claim for mis - sold PPI by complaining to the bank, lender, or broker who sold the policy. Earlier, on 6 April 2011, the Competition Commission released an investigation order designed to prevent mis - selling in the future. Key rules in the order, designed to enable the customer to shop around and make an informed decision, include: provision of adequate information when selling payment protection and providing a personal quote; an obligation to provide an annual review; and a prohibition on selling payment protection at the same time the credit agreement is entered into. Most rules came into force in October 2011, with some following in April 2012. The Central Bank of Ireland in April 2014 was described as having "arbitrarily excluded the majority of consumers '' from getting compensation for mis - sold payment protection insurance, by setting a cutoff date of 2007, the year it introduced its Consumer Protection Code. UK banks provided over £ 22bn for PPI mis - selling costs -- which, if scaled on a pro-rata basis, is many multiples of the compensation the Irish banks were asked to repay. The offending banks were also not fined, in sharp contrast to the regime imposed on UK banks. Lawyers were appalled at the "reckless '' advice the Irish Central Bank gave consumers who were mis - sold PPI policies, which "will play into the hands of the financial institution. '' The price paid for payment protection insurance can vary significantly depending on the lender. A survey of forty - eight major lenders by Which? Ltd found the price of PPI was 16 - 25 % of the amount of the debt.
With this latter payment approach, known as a "Single Premium Policy '', the money borrowed from the provider to pay for the insurance policy incurs additional interest, typically at the same APR as is being charged for the original sum borrowed, further increasing the effective total cost of the policy to the customer. Payment protection insurance on credit cards is calculated differently from lump - sum loans, as initially there is no sum outstanding and it is unknown if the customer will ever use their card facility. However, if the credit facility is used and the balance is not paid in full each month, a customer will typically be charged between 0.78 % and 1 % of the outstanding card balance (£ 0.78 to £ 1.00 for every £ 100) each month as the premium for the insurance. When interest on the credit card is added to the premium, it can become very expensive. For example, PPI on an average UK credit card charging 19.32 % interest, with an average balance of £ 5,000 a month, adds an extra £ 3,219.88 in premiums and interest. With lump - sum loans, PPI premiums are paid upfront, with costs ranging from 13 % to 56 % of the loan amount, as reported by the Citizens Advice Bureau (CAB), which launched a super complaint into what it called the "Protection Racket ''. When interest is charged on the premiums, the cost of a single premium policy grows geometrically. For example, a secured loan of £ 25,000 over a 25 - year term at 4.5 % interest costs the customer an additional £ 20,221.74 for PPI. Moneymadeclear calculates the repayment for that loan to be £ 138.96 a month, whereas a stand - alone payment protection policy for, say, a 30 - year - old borrowing the same amount over the same term would cost the customer £ 1,992 in total, almost one - tenth of the cost of the single premium policy. Payment protection insurance can be extremely useful; however, many PPI policies have been mis - sold alongside loans, credit cards and mortgages, which can leave borrowers with cover that is of no use to them if they come to make a claim. Reclaiming PPI payments and statutory interest charges on these payments is possible in such cases, either by the affected borrower or through a solicitor or claims management company. If the borrower at the time of the PPI claim owes money to the lender, the lender will usually have a contractual right to offset any PPI refund against the debt. If there is any PPI value left over, the balance will be repaid to the PPI solicitor and / or the client. The first ever PPI case was in 1992 - 93 (Bristol Crown Court 93 / 10771). It was judged that the total payments of the insurance premium were almost as high as the total benefit that could be claimed. A 10 - year non - disclosure clause was put in place as part of the settlement. After 10 years, a copy of the judgement was sent to the Office of Fair Trading and the Citizens Advice Bureau. Soon after, a super complaint was raised. The judicial review that followed hit the headlines as it eventually ruled in favour of the borrowers, enabling a large number of consumers to reclaim PPI payments. As of January 2018, £ 28.5 billion has been repaid to consumers. In 2014, a PPI claim from Susan Plevin against Paragon Personal Finance revealed that over 71 % of the PPI premium was commission. This was deemed a form of mis - selling.
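To make the single - premium arithmetic above concrete, the following is a minimal sketch in Python using the standard loan amortization formula. The 12,125 upfront premium is a hypothetical figure chosen only to illustrate the compounding effect; the sources quoted above report the resulting extra cost, not the premium itself.

def monthly_payment(principal, annual_rate, years):
    """Level monthly repayment for a fully amortizing loan."""
    r = annual_rate / 12  # monthly interest rate
    n = years * 12        # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

# Base loan from the example above: 25,000 over 25 years at 4.5%.
base = monthly_payment(25_000, 0.045, 25)
print(f"Repayment without PPI: {base:.2f}/month")  # ~138.96, matching the figure cited

# Single Premium Policy: a hypothetical premium is borrowed up front
# and accrues interest at the same APR as the loan itself.
with_ppi = monthly_payment(25_000 + 12_125, 0.045, 25)
extra = (with_ppi - base) * 25 * 12
print(f"Extra cost of single-premium PPI over the term: {extra:.2f}")  # roughly 20,200

Because the premium is financed at the loan 's own APR over the full 25 - year term, its effective cost ends up far above its nominal price, which is why the stand - alone policy cited above works out almost ten times cheaper.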
The Plevin case has caused the banks and the Financial Ombudsman to review even more PPI claims. PPI claim companies are currently one of the most common sources of internet click bait, often using misleading information to attract interest from casual browsers. UK banks have set up multibillion - pound provisions to compensate customers who were mis - sold PPI; Lloyds Banking Group have set aside £ 3.6 bn, HSBC have provisions of £ 745m, and RBS have estimated they will compensate £ 950m. Payment protection insurance has become the most complained - about financial product ever.
where was the movie gods of egypt filmed
Gods of Egypt (film) - Wikipedia Gods of Egypt is a 2016 fantasy action film directed by Alex Proyas and based on ancient Egyptian deities. It stars Nikolaj Coster - Waldau, Gerard Butler, Brenton Thwaites, Chadwick Boseman, Élodie Yung, Courtney Eaton, Rufus Sewell and Geoffrey Rush. The film portrays a mortal Egyptian hero who partners with the Egyptian god Horus to save the world from Set and rescue his love. Filming took place in Australia under the American film production and distribution company Summit Entertainment. While the film 's production budget was $140 million, the parent company Lionsgate 's financial exposure was less than $10 million due to tax incentives and pre-sales. The Australian government provided a tax credit for 46 % of the film 's budget. When Lionsgate began promoting the film in November 2015, it received backlash for its predominantly white cast playing Egyptian deities. In response, Lionsgate and director Proyas apologized for ethnically - inaccurate casting. Lionsgate released Gods of Egypt in theaters globally, starting on February 25, 2016, in 2D, RealD 3D, and IMAX 3D, and in the United States, Canada, and 68 other markets on February 26. The film was poorly reviewed by critics, who criticized its choice of cast, script, acting and special effects; these reviews prompted an angry response from Proyas. It grossed a total of $150 million against a $140 million budget, becoming a box office bomb and losing the studio up to $90 million. It received five nominations at the 37th Golden Raspberry Awards. The film is set in a mythological ancient Egypt in which the world is flat and the Egyptian gods live among humans; the gods differ from humans by their greater height, golden blood, and ability to transform into their divine forms. A young thief named Bek and his love Zaya are attending the coronation of Horus. During the ceremony, Osiris is killed by his extremely jealous brother Set, who seizes the throne and declares a new regime where the dead will have to pay with riches to pass into the afterlife. Horus is stripped of his eyes, the source of his power, and almost killed. His love, Hathor, pleads with Set to spare him, and he is instead exiled in exchange for Egypt 's surrender. A year later, Bek has been working as a slave building monuments while Zaya is now under the ownership of Set 's chief architect Urshu. Believing that Horus is the only one who can defeat Set, she gives Bek the floor plans to Set 's treasure vault and also manages to get a glimpse of the plans for his pyramid. Bek is able to steal back one of Horus ' eyes from the treasure vault. However, Urshu finds out and kills Zaya as the couple flee. Bek takes her body to the blind Horus and makes a bargain: Horus agrees to bring Zaya back from the dead in exchange for his eye and Bek 's knowledge of Set 's pyramid. The two fly to the divine vessel of Horus ' grandfather Ra. Horus is unable to convince Ra to restore his power so he can defeat Set himself, as Ra is both neutral about their conflict and daily at war with the enormous shadow beast Apophis that threatens to devour the world. He does, however, allow Horus to obtain a vial of the divine waters that surround his vessel, which can quench the desert 's thirst and gravely weaken Set. Ra tells Horus that his weakness is the result of him not fulfilling his destiny, which Horus believes means avenging his parents ' deaths. Set asks Hathor to take him to the underworld, but she refuses and escapes.
After Bek and Horus survive an attack by Set 's Egyptian Minotaurs, led by Mnevis, and an attack by some of Set 's minions and their giant cobras, Hathor finds and saves them. Horus at first does n't trust her as she is a mistress of Set, while she tries to convince him that Set is her enemy as well. When they tell her of their plan regarding Set 's pyramid, she warns them of a guardian sphinx who will kill anyone not wise enough to solve a riddle. The group then heads to the library of Thoth, so they can recruit him to solve the riddle. Arriving at Set 's pyramid, they overcome its traps, including the sphinx, to reach the source of Set 's power. But before they can pour the divine water, Set traps them and reveals to Bek that Horus can not bring Zaya back from the dead. Set destroys their flask and takes Thoth 's brain. Horus is able to save Hathor and Bek. Horus admits to caring more about his revenge than the mortals. Hathor sacrifices her own safety for Zaya 's payment into the afterlife by giving Bek her bracelet and calling Anubis to take him to Zaya, letting herself be dragged to the underworld. Having combined Thoth 's brain, Osiris 's heart, Horus 's eye, and wings from Nephthys with himself, Set travels to Ra, questioning why Ra favored Osiris and denied Set the throne and children. Ra claims that all of Set 's prior mistreatments were tests preparing Set for his true role: taking Ra 's place as the defender of the world aboard his solar barge, fighting against Apophis. Set is dismayed and wants to destroy the afterlife so that he can be immortal. Ra tries to fight Set by blasting him with his spear, though Set survives with his combined power. He stabs Ra, takes his spear, and casts him off the boat, freeing Apophis to consume both the mortal and underworld realms. Bek finds Zaya, who refuses Hathor 's gift as she does n't want an afterlife without Bek, but then Apophis attacks and the gate to the afterlife is closed. Bek returns to the mortal world, where Horus is amazed that Bek still wants to help take down Set. Bek tells him it was Zaya who told him to, as she still has faith in Horus. Horus climbs up the outer wall of an obelisk Set is standing on and attempts to battle him, but is heavily outmatched. Meanwhile, Bek fights Urshu, who is thrown off to his death; Bek then ascends the inside of the obelisk and joins the battle, removing Horus 's stolen eye from Set 's armor and being wounded in the process. As Bek slides toward the edge of the obelisk, he throws the eye toward Horus, who must choose to catch it or save Bek instead. Horus reaches for Bek and apologizes for all he has put him through. As they plummet toward the ground, Horus regains the power to transform and he catches Bek and flies him to safety. Horus realizes that his destiny was neither the recovery of his eye nor revenge, but the protection of his people. Now Horus has the strength to battle Set, and he outmaneuvers and kills him. He then finds Ra wounded and floating in space, and returns his spear to him, allowing Ra to once again repel Apophis and Anubis to reopen the gates. As Horus returns to Bek, a child holds out his other eye, while people cheer him. Horus finds Bek dying from the wounds he sustained whilst fighting Set, carries him to Osiris 's tomb and lays him beside Zaya. Ra arrives and offers to bestow any power on Horus to repay him for his life and Egypt 's survival. All Horus wants is for Bek and Zaya to be brought back to life.
The other gods who had not passed into the afterlife are also revived, except Horus 's parents, who had already passed into the afterlife. Horus is crowned king by Thoth and declares that access to the afterlife will be paid for with good deeds. Bek is made chief advisor, and he gives Horus back Hathor 's bracelet, letting him leave to rescue her from the underworld. Gods of Egypt is directed by Alex Proyas based on a screenplay by Matt Sazama and Burk Sharpless. The film was produced under Summit Entertainment. Proyas was contracted by Summit in May 2012 to write the screenplay with Sazama and Sharpless, and to direct the film. Proyas said he sought to make a big - budget film with an original premise, in contrast to franchise films. The director cited the following films as influences on Gods of Egypt: The Guns of Navarone (1961), Lawrence of Arabia (1962), The Man Who Would Be King (1975), Raiders of the Lost Ark (1981), and Sergio Leone 's Western films. Lionsgate anticipated that Gods of Egypt would be the first film in a new franchise after it finished releasing The Hunger Games films. Actor Nikolaj Coster - Waldau was cast in June 2013. Gerard Butler, Geoffrey Rush, and Brenton Thwaites joined the cast toward the end of 2013. Chadwick Boseman and Élodie Yung joined the cast at the start of 2014. The film was shot in Australia. A crew of 200 began pre-production in Sydney in New South Wales, and producers considered filming in Melbourne in Victoria to take advantage of the state 's tax incentive. Docklands Studios Melbourne was too heavily booked to accommodate Gods of Egypt, and producers were instead offered an airport facility for production. The Australian states New South Wales and Victoria competed to be the location of the film 's production, and Summit selected NSW in February 2014. The state 's deputy premier, Andrew Stoner, estimated that the production would add 400 jobs to the state and contribute $75 million to its economy. Principal photography began on March 19, 2014 at Fox Studios Australia in Sydney. The setting of Anubis ' temple was filmed at Centennial Park in Sydney, and visual effects were laid over the scene. The production budget was $140 million. Jon Feltheimer, the CEO of Summit 's parent company Lionsgate, said Lionsgate 's financial exposure was under $10 million due to the tax incentives of filming in Australia, as well as foreign pre-sales. The Australian government 's tax credit to have the film produced in the country covered 46 % of the $140 million production budget, and most of the remaining budget was covered by the foreign pre-sales. In the film, the gods in humanoid form are 9 feet (2.7 m) tall and in "battle beast '' form are over 12 feet (3.7 m) tall. Proyas used forced perspective and motion control photography to portray the difference in height between the actors portraying the gods and the humans. Proyas called the logistical challenge a "reverse Hobbit '', referring to The Lord of the Rings films, in which Hobbits are depicted as shorter than humans. For the Sphinx, actor Kenneth Ransom portrayed the giant creature via motion capture. For the god Thoth, who can appear as many copies, actor Chadwick Boseman was filmed hundreds of times from different angles. For a scene with many copies of Thoth, the other actors took a day to film the scene, while Boseman filmed it for three days. Composer Marco Beltrami, who scored Proyas 's previous films Knowing (2009) and I, Robot (2004), returned to score Gods of Egypt.
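As a rough check on the financing figures above, the implied pre - sales coverage can be worked out from the numbers quoted. This is back - of - the - envelope arithmetic implied by the reported figures, not the studio 's actual accounting; the pre - sales total is inferred, not reported.

budget = 140_000_000        # reported production budget
tax_credit = 0.46 * budget  # Australian tax credit: 46% of the budget
remainder = budget - tax_credit  # portion not covered by the credit
max_exposure = 10_000_000   # Lionsgate's stated exposure ceiling
min_presales = remainder - max_exposure
print(f"Tax credit: {tax_credit:,.0f}")                  # 64,400,000
print(f"Uncovered remainder: {remainder:,.0f}")          # 75,600,000
print(f"Implied pre-sales: at least {min_presales:,.0f}")  # 65,600,000

In other words, for the stated exposure to hold, foreign pre - sales would have had to cover at least roughly $65 million of the budget, which is consistent with the statement that they covered most of the remaining budget.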
White actors make up most of the principal cast of Gods of Egypt. When Lionsgate began marketing the film, the Associated Press said the distributor received backlash for ethnically inaccurate casting. Lionsgate and director Alex Proyas both issued apologies. The AP said, "While some praised the preemptive mea culpa... others were more skeptical, concluding that it 's simply meant to shut down any further backlash. '' The practice of casting white actors as Egyptian characters was first reported after filming started in March 2014, when Daily Life 's Ruby Hamad highlighted it as "Hollywood whitewashing ''. Lionsgate released a set of character posters in November 2015, and The Guardian reported that the casting received a backlash on Twitter over the predominantly white cast. Some suggested that the casting of black actor Chadwick Boseman, who plays the god Thoth, played into the Magical Negro stereotype. The previous year, the biblical epic Exodus: Gods and Kings by director Ridley Scott received similar backlash for having a white cast. The Washington Post 's Soraya Nadia McDonald also disparaged the casting practice for Gods of Egypt and said Lionsgate released the posters at an unfortunate time. She said with the release of Aziz Ansari 's TV series Master of None in the previous week, "Whitewashed casting and the offensiveness of brownface has pretty much dominated the pop culture conversation this week. Promotion for the movie is beginning just as we 're wrapping a banner year for discussions of diversity and gender pay equity in the film industry. '' When Lionsgate followed its release of posters with a release of a theatrical trailer, Scott Mendelson at Forbes said, "The implication remains that white actors, even generic white actors with zero box office draw, are preferable in terms of domestic and overseas box office than culturally - specific (minority) actors who actually look like the people they are supposed to be playing. '' He said almost none of the actors, aside from potentially Butler, qualified as box office draws. BET 's Evelyn Diaz said while Ridley Scott had defended his casting practice for Exodus by claiming the need to cast box office draws, "Gods of Egypt is headlined by character actors and Gerard Butler, none of whom will have people running to the theater on opening day. '' Deadline 's Ross A. Lincoln said of the released trailer, "Casting here stands out like a sore thumb leftover from 1950s Hollywood. I suspect this film generates a lot of conversation before it hits theaters February 26, 2016. '' In response to criticisms of its casting practice, director Alex Proyas and Lionsgate issued apologies in late November 2015 for not considering diversity; Lionsgate said it would strive to do better. Mendelson of Forbes said the apologies were "a somewhat different response '' from the defenses made by Ridley Scott for Exodus and Joe Wright for Pan (2015). Ava DuVernay, who directed Selma (2014), said, "This kind of apology never happens -- for something that happens all the time. An unusual occurrence worth noting. '' The Guardian 's Ben Child said, "The apologies are remarkable, especially given that Gods of Egypt does not debut in cinemas until 26 February and could now suffer at the box office. '' Michael Ordoña of the San Francisco Chronicle said of the apologies, "That 's little comfort to the nonwhite actors denied opportunities or the Egyptians who will see a pale shadow of their ancestral traditions. ''
The Casting Society of America applauded the statements from Lionsgate and Proyas. Professor Todd Boyd, chair for the Study of Race and Popular Culture at the University of Southern California, said, "The apology is an attempt to have it both ways. They want the cast that they selected and they do n't want people to hold it against them that it 's a white cast. '' Boseman, commenting on the whitewashing, said he expected the backlash to happen when he saw the script. He said, "I 'm thankful that it did, because actually, I agree with it. That 's why I wanted to do it, so you would see someone of African descent playing Thoth, the father of mathematics, astronomy, the god of wisdom. '' Actor Nikolaj Coster - Waldau said, "A lot of people are getting really worked up online about the fact that I 'm a white actor. I 'm not even playing an Egyptian; I 'm an 8 - foot - tall god who turns into a falcon. A part of me just wants to freak out, but then I think, ' There 's nothing you can do about it. ' You ca n't win in that sort of discussion. '' In the month leading up to release, director Proyas said his film was fantasy and not intended to be history. He cited "creative license and artistic freedom of expression '' in casting the actors he found to fit the roles. He said "white - washing '' was a justified concern, but for his fantasy film, "To exclude any one race in service of a hypothetical theory of historical accuracy... would have been biased. '' Proyas said that films "need more people of color and a greater cultural diversity '' but that Gods of Egypt "is not the best one to soap - box issues of diversity with ''. He argued that the lack of English - speaking Egyptian actors, production practicalities, the studio 's requirement for box office draws, and Australia having guidelines limiting "imported '' actors were all factors in casting for the film. He concluded, "I attempted to show racial diversity, black, white, Asian, as far as I was allowed, as far as I could, given the limitations I was given. It is obviously clear that for things to change, for casting in movies to become more diverse many forces must align. Not just the creative. To those who are offended by the decisions which were made I have already apologised. I respect their opinion, but I hope the context of the decisions is a little clearer based on my statements here. '' After the film was critically panned, Proyas said, "I guess I have the knack of rubbing reviewers the wrong way. This time of course they have bigger axes to grind -- they can rip into my movie while trying to make their mainly pale asses look so politically correct by screaming ' white - wash!!! ' '' Lionsgate spent an estimated $30 million on marketing the film. It released a set of character posters in November 2015, for which it received backlash due to white actors playing Egyptian characters, as noted above. Later in the month, it released a theatrical trailer. Lionsgate aired a 60 - second spot for Gods of Egypt during the pre-game show of Super Bowl 50 on February 7, 2016, though it had released the trailer online a day earlier. In the week before the film 's release, Lionsgate released the tie - in mobile game Gods of Egypt: Secrets of the Lost Kingdom on iOS and Android. BoxOffice forecast two months in advance that Gods of Egypt would gross $14.5 million on its opening weekend in the United States and Canada, and that it would gross $34 million in total over its theatrical run.
The magazine said the film had "a strong ensemble cast '' and that its director "had a noteworthy following ''. BoxOffice also said the premise could attract movie - goers who saw Clash of the Titans, Wrath of the Titans, and the Percy Jackson films, and that admissions to 3D screenings would help boost Gods of Egypt 's gross. The magazine said factors negatively affecting the film 's gross were a "lackluster reaction '' to its marketing and the backlash to its predominantly white cast causing negative buzz. It anticipated that the film 's release would be front - loaded (focused on profiting mainly from opening weekend) due to the poor buzz, its categorization as a fantasy film, and London Has Fallen opening the following weekend. A week before the film 's release, TheWrap 's Todd Cunningham reported an updated forecast of $15 million for its opening weekend in the United States and Canada. The Hollywood Reporter 's Pamela McClintock said it was tracking to gross between $12 million and $15 million. Cunningham said the expected gross was low relative to the film 's triple - digit budget as well as additional marketing costs. He said the film could attract Alex Proyas 's fan base but that it had suffered "some negativity out there '' due to the predominantly white casting as well as the film being perceived to have an "old - fashioned '' feel. Exhibitor Relations senior media analyst Jeff Bock said the film "feels late '' years after the release of 300 (2007) and Immortals (2011), and that an earlier production and release would have been more advantageous. Cunningham said that with Lionsgate 's financial exposure being only $10 million and given the expected opening gross, the film could gross between $40 million and $50 million for its theatrical run in the United States and Canada and ultimately avoid a loss. The Hollywood Reporter 's McClintock said "ancient epics '' in recent years had not performed well at the box office, citing the 2014 films Hercules, The Legend of Hercules, and Pompeii. Ryan Faughnder of the Los Angeles Times said in the week before the film 's release that the expected opening weekend gross meant that Lionsgate 's plans to make Gods of Egypt the first film in a new franchise were unlikely to be realized. Faughnder said the film would need to perform strongly in territories outside the United States and Canada for a sequel to be developed. Variety 's Brent Lang reported that analysts said the film would need to open to $30 million or more in the United States to justify a sequel. Gods of Egypt grossed $31.2 million in the United States and Canada, and $119.6 million in other countries, for a worldwide total of $150.7 million against a production budget of $140 million. The Hollywood Reporter estimated the film lost the studio up to $90 million when factoring together all expenses and revenues. Lionsgate released Gods of Egypt in theaters globally starting on February 25, 2016. It was released in 2D, RealD 3D, and IMAX 3D. Lionsgate released the film in the United States and Canada on the weekend of February 26 -- 28, as well as in 68 other markets, including Russia, South Korea, and Brazil. Jaguar Films released the film in the United Arab Emirates (February 25) and other countries in the Middle East under the title Kings of Egypt. Viacom 18 released the film in India on February 26, 2016 in four languages: English, Hindi, Tamil, and Telugu. In the United States and Canada, the film was released in 3,117 theaters.
It grossed an estimated $800,000 in Thursday - night previews and $4.8 million on Friday, lowering the projected weekend gross to $11 -- 13 million. It went on to gross $14.1 million in its opening weekend, finishing second at the box office behind Deadpool ($31.5 million). It competed with fellow newcomers Eddie the Eagle and Triple 9, as well as with Deadpool, which had opened two weekends earlier. Opening - day audiences polled by CinemaScore gave the film an average grade of "B -- '' on an A+ to F scale. The Christian Science Monitor 's Molly Driscoll said Gods of Egypt 's US release came during "a traditionally quiet time at the box office ''. Scott Mendelson of Forbes commented on the conflict between supporting and opposing a successful debut for the film, which was at the same time a great rarity (an original fantasy from a gifted and visionary director). Outside North America, the film received a staggered release. In its opening weekend, it was number one across Central America, Eastern Europe and South East Asia. Top openings were in Russia ($4.3 million), Brazil ($1.9 million) and the Philippines ($1.7 million). Le Vision Pictures acquired rights from Lionsgate in November 2015 to distribute Gods of Egypt in China, and released the film there on March 11, 2016. China was the film 's largest territory, with US $35.6 million. Lionsgate also released the film in the United Kingdom on June 17, 2016. Gods of Egypt was panned by critics. On Rotten Tomatoes, the film has a rating of 15 %, based on 172 reviews, with an average rating of 3.4 / 10. Metacritic gives the film a score of 25 out of 100, based on 25 critics, indicating "generally unfavorable reviews ''. Alonso Duralde of TheWrap wrote, "A mishmash of unconvincing visual effects and clumsy writing -- not to mention another depiction of ancient Egypt in which the lead roles are almost all played by white folks -- Gods of Egypt might have merited a so - bad - it 's - good schadenfreude fanbase had it maintained the unintentional laughs of its first 10 minutes. Instead, it skids into dullness, thus negating the camp classic that it so often verges on becoming. '' Joycelyn Noveck of the Chicago Sun Times gave the film a half star out of four, writing, "It 's obvious the filmmakers were gunning for a sequel here. But this bloated enterprise is so tiresome by the end, it seems more likely headed for a long rest somewhere in the cinematic afterlife. '' Ignatiy Vishnevetsky of The A.V. Club called Gods of Egypt "overlong and very silly, '' and said: "A treasure trove of gilded fantasy bric - a-brac and clashing accents, Proyas ' sword - and - sandals space opera is a head above the likes of Wrath of the Titans, but it rapidly devolves into a tedious and repetitive succession of monster chases, booby traps, and temples that start to crumble at the last minute. '' Jordan Hoffman of The Guardian said, "This is ridiculous. This is offensive. This should n't be, and I 'm not going to say otherwise if you ca n't bring yourself to buy a ticket for this movie. But if you are on the fence you can always offset your karmic footprint with a donation to a charity, because this movie is a tremendous amount of fun. '' On the other hand, Olly Richards of Empire was heavily critical, calling the film a "catastrophe '' and "the Dolly Parton of movies, without any of the knowing wit ''.
In response to the reviews, director Proyas posted to Facebook calling critics "diseased vultures pecking at the bones of a dying carcass '', who were "trying to peck to the rhythm of the consensus. I applaud any film - goer who values their own opinion enough to not base it on what the pack - mentality says is good or bad. '' The film was released on May 31, 2016, on DVD and Blu - ray.
how many episodes of lucifer season 3 are there
List of Lucifer episodes - wikipedia Lucifer is an American fantasy police procedural comedy - drama television series developed by Tom Kapinos that premiered on Fox on January 25, 2016. It features a character created by Neil Gaiman, Sam Kieth, and Mike Dringenberg, taken from the comic book series The Sandman, who later became the protagonist of the spin - off comic book series Lucifer written by Mike Carey, both published by DC Comics ' Vertigo imprint. On May 11, 2018, Fox canceled the series after three seasons. On June 15, 2018, it was announced that Netflix had picked the series up for a fourth season of ten episodes, which is set to be released in 2019. As of May 28, 2018, 57 episodes of Lucifer have aired, concluding the third season.
the crescent moon and star symbol of islam
Star and crescent - wikipedia The star and crescent is an iconographic symbol used in various historical contexts but most well known today as a symbol of the former Ottoman Empire and, by popular extension, the Islamic world. It developed in the iconography of the Hellenistic period (4th -- 1st centuries BCE) in the Kingdom of Pontus, the Bosporan Kingdom and notably the city of Byzantium by the 2nd century BCE. It is the conjoined representation of the crescent and a star, both of which constituent elements have a long prior history in the iconography of the Ancient Near East as representing either Sun and Moon or Moon and Morning Star (or their divine personifications). Coins with crescent and star symbols represented separately have a longer history, with possible ties to older Mesopotamian iconography. The star, or Sun, is often shown within the arc of the crescent (also called star in crescent, or star within crescent, for disambiguation of depictions of a star and a crescent side by side). In numismatics in particular, the term crescent and pellet is used in cases where the star is simplified to a single dot. In Byzantium, the symbol became associated with its patron goddess Artemis / Hecate, and it is used as a representation of Moon goddesses (Selene / Luna or Artemis / Diana) in the Roman era. Ancient depictions of the symbol always show the crescent with horns pointing upward and with the star (often with eight rays) placed inside the crescent. This arrangement is also found on Sassanid coins beginning in the 5th or 6th century CE. The combination is found comparatively rarely in late medieval and early modern heraldry. It rose to prominence with its adoption as the flag and emblem of the Ottoman Empire and some of its administrative divisions (eyalets and vilayets) and later in the 19th - century Westernizing tanzimat (reforms). The Ottoman flag of 1844, with a white ay - yıldız (Turkish for "crescent - star '') on a red background, continues to be in use as the flag of the Republic of Turkey, with minor modifications. Other states formerly part of the Ottoman Empire also used the symbol, including Libya (1951 -- 1969 and after 2011), Tunisia (1956) and Algeria (1958). The same symbol was used in other national flags introduced during the 20th century, including the flags of Azerbaijan (1918), Pakistan (1947), Malaysia (1948), Singapore (1959), Mauritania (1959), Uzbekistan (1991), Turkmenistan (1991), Comoros (2001), and Maldives (1965). In the later 20th century, the star and crescent acquired a popular interpretation as a "symbol of Islam '', occasionally embraced by Arab nationalism or Islamism in the 1970s to 1980s, but often rejected as erroneous or unfounded by Muslim commentators in more recent times. Unicode introduced a "star and crescent '' character in its Miscellaneous Symbols block, at U + 262A (☪). Crescents appearing together with a star or stars are a common feature of Sumerian iconography, the crescent usually being associated with the moon god Sin (Nanna) and the star with Ishtar (Inanna, i.e. Venus), often placed alongside the sun disk of Shamash. In Late Bronze Age Canaan, star and crescent moon motifs are also found on Moabite name seals. The depiction of the crescent - and - star or "star inside crescent '' as it would later develop in the Bosporan Kingdom is difficult to trace to Mesopotamian art.
Exceptionally, a combination of the crescent of Sin with the five - pointed star of Ishtar, with the star placed inside the crescent as in the later Hellenistic - era symbol, placed among numerous other symbols, is found in a boundary stone of Nebuchadnezzar I (12th century BCE; found in Nippur by John Henry Haynes in 1896). An example of such an arrangement is also found in the (highly speculative) reconstruction of a fragmentary stele of Ur - Nammu (Third Dynasty of Ur) discovered in the 1920s. Mithradates VI Eupator of Pontus (r. 120 -- 63 BCE) used an eight - rayed star with a crescent moon as his emblem. McGing (1986) notes the association of the star and crescent with Mithradates VI, discussing its appearance on his coins and its survival in the coins of the Bosporan Kingdom, where "(t)he star and crescent appear on Pontic royal coins from the time of Mithradates III and seem to have had oriental significance as a dynastic badge of the Mithridatic family, or the arms of the country of Pontus. '' Several possible interpretations of the emblem have been proposed. In most of these, the "star '' is taken to represent the Sun. The combination of the two symbols has been taken as representing Sun and Moon (and by extension Day and Night), the Zoroastrian Mah and Mithra, or deities arising from Greek - Anatolian - Iranian syncretism, the crescent representing Mēn Pharnakou (Μήν Φαρνακου, the local moon god) and the "star '' (Sun) representing Ahuramazda (in interpretatio graeca called Zeus Stratios). By the late Hellenistic or early Roman period, the star and crescent motif had been associated to some degree with Byzantium. If any goddess had a connection with the walls of Constantinople, it was Hecate. Hecate had a cult in Byzantium from the time of its founding. Like Byzas in one legend, she had her origins in Thrace. Hecate was considered the patron goddess of Byzantium because she was said to have saved the city from an attack by Philip of Macedon in 340 BCE by the appearance of a bright light in the sky. To commemorate the event, the Byzantines erected a statue of the goddess known as the Lampadephoros ("light - bearer '' or "light - bringer ''). Some Byzantine coins of the 1st century BCE and later show the head of Artemis with bow and quiver, and feature a crescent with what appears to be a six - rayed star on the reverse. Star and crescent on a coin of Uranopolis, Macedon, ca. 300 BCE (see also Argead star). A star and crescent symbol with the star shown in a sixteen - rayed "sunburst '' design (3rd century BCE). Coin of Mithradates VI Eupator: the obverse side has the inscription ΒΑΣΙΛΕΩΣ ΜΙΘΡΑΔΑΤΟΥ ΕΥΠΑΤΟΡΟΣ with a stag feeding, with the star and crescent and monogram of Pergamum placed near the stag 's head, all in an ivy - wreath. Byzantine coin (1st century) with a bust of Artemis on the obverse and an eight - rayed star within a crescent on the reverse side. The star and crescent symbol appears on some coins of the Parthian vassal kingdom of Elymais in the late 1st century CE. The same symbol is present in coins that are possibly associated with Orodes I of Parthia (1st century BCE). In the 2nd century CE, some Parthian coins show a simplified "pellet within crescent '' symbol. A star and a crescent appearing (separately) on the obverse side of a coin of Orodes II of Parthia (r. 57 -- 37 BCE). Coin of Phraates V of Parthia (r.c. 2 BCE to CE 4). Coin of Vardanes I of Parthia (r.c. CE 40 -- 45). The star and crescent motif appears on the margin of Sassanid coins in the 5th century.
Sassanid rulers also appear to have used crowns featuring a crescent, sphere and crescent, or star and crescent. Use of the star - and - crescent combination apparently goes back to the earlier appearance of a star and a crescent on Parthian coins, first under King Orodes II (1st century BCE). In these coins, the two symbols occur separately, on either side of the king 's head, and not yet in their combined star - and - crescent form. Such coins are also found further afield in Greater Persia, by the end of the 1st century CE in coins issued by the Western Satrap ruler Chashtana. This arrangement is likely inherited from its Ancient Near Eastern predecessors; the star and crescent symbols are not frequently found in Achaemenid iconography, but they are present in some cylinder seals of the Achaemenid era. Ayatollahi (2003) attempts to connect the modern adoption as an "Islamic symbol '' to Sassanid coins remaining in circulation after the Islamic conquest, although there is no evidence for any connection of the symbol with Islam or the Ottomans prior to its adoption in Ottoman flags in the late 18th century. In the 2nd century, the star - within - crescent is found on the obverse side of Roman coins minted during the rule of Hadrian, Geta, Caracalla and Septimius Severus, in some cases as part of an arrangement of a crescent and seven stars, one or several of which were placed inside the crescent. Examples include a coin of Roman Emperor Hadrian (r. 117 -- 138), whose reverse shows an eight - rayed star within a crescent, and a Roman - period limestone pediment from Perge, Turkey (Antalya Museum), showing Diana - Artemis with a crescent and a radiant crown. The crescent on its own is used in western heraldry from at least the 13th century, while the star and crescent (or "Sun and Moon '') emblem is in use in medieval seals at least from the late 12th century. The crescent and pellet symbol is used in Crusader coins of the 12th century, in some cases duplicated in the four corners of a cross, as a variant of the cross - and - crosslets ("Jerusalem cross ''). Many Crusader seals and coins show the crescent and the star (or blazing Sun) on either side of the ruler 's head (as in the Sassanid tradition), e.g. Bohemond III of Antioch, Richard I of England, Raymond VI, Count of Toulouse. At the same time, the star in crescent is found on the obverse of Crusader coins, e.g. coins of the County of Tripoli minted under Raymond II or III (c. 1140s -- 1160s) show an "eight - rayed star with pellets above crescent ''. The star and crescent combination appears in attributed arms from the early 14th century, possibly in a coat of arms of c. 1330 attributed to John Chrysostom, and in the Wernigeroder Wappenbuch (late 15th century), where it is attributed to one of the three Magi, named "Balthasar of Tarsus ''. Crescents (without the star) increase in popularity in early modern heraldry in Europe. Siebmachers Wappenbuch (1605) records 48 coats of arms of German families which include one or several crescents. The star and crescent combination remains rare prior to its adoption by the Ottoman Empire in the second half of the 18th century. In the late 16th century, the Korenić - Neorić Armorial shows a white star and crescent on a red field as the coat of arms of Illyria. Surviving examples include the Great Seal of Richard I of England (1198), the equestrian seal of Raymond VI, Count of Toulouse, with a star and a crescent (13th century), and a Templar seal of the 13th century, probably of the preceptor of the commanderies at Coudrie and Biais (Brittany).
Further examples include the Polish Leliwa coat of arms (14th - century seal); the coats of arms of the Three Magi in the Wernigerode Armorial (c. 1490), with "Baltasar of Tarsus '' being attributed a star and crescent increscent in a blue field; the coat of arms of John Freigraf of "Lesser Egypt '' (i.e. Romani / gypsy), known from an 18th - century drawing of a 1498 coat of arms in Pforzheim church; depictions of stars with crescents, a common motif on the stećci, the 12th - to 16th - century tombstones of medieval Bosnia; a 1668 representation by Joan Blaeu of the coat of arms of the Kingdom of Bosnia from the 1595 Korenić - Neorić Armorial; the coat of arms of "Illyria '' from the Korenić - Neorić Armorial (1590s); and the star and crescent on the obverse of the Jelacic - Gulden of the Kingdom of Croatia (1848). While the crescent on its own is depicted as an emblem used on Islamic war flags from the medieval period, at least from the 13th century (although it does not seem to have been in frequent use until the 14th or 15th century), the star and crescent in an Islamic context is rarer in the medieval period, but may occasionally be found in depictions of flags from the 14th century onward. Some Mughal era (17th century) round shields were decorated with a crescent or star and crescent. Examples include a depiction of a star and crescent flag on the Saracen side in the Battle of Yarmouk (manuscript illustration of the History of the Tatars, Catalan workshop, early 14th century); a miniature painting from a Padshahnama manuscript (c. 1640) depicting Mughal Emperor Shah Jahan bearing a shield with a star and crescent decoration; and a painting from a Padshahnama manuscript (1633) of Aurangzeb facing the maddened war elephant Sudhakar, in which a sowar 's shield is decorated with a star and crescent. The adoption of the star and crescent as the Ottoman state symbol started during the reign of Mustafa III (1757 -- 1774) and its use became well - established during the Abdul Hamid I (1774 -- 1789) and Selim III (1789 -- 1807) periods. A buyruldu from 1793 states that the ships of the Ottoman navy fly that flag, and various other documents from earlier and later years mention its use. The ultimate source of the emblem is unclear. It is most likely derived from the star - and - crescent symbol used by the city of Constantinople in antiquity, possibly by association with the crescent design (without star) used in Turkish flags since before 1453. With the Tanzimat reforms in the 19th century, flags were redesigned in the style of the European armies of the day. The flag of the Ottoman Navy was made red, as red was to be the flag of secular institutions and green of religious ones. As the reforms abolished all the various flags (standards) of the Ottoman pashaliks, beyliks and emirates, a single new Ottoman national flag was designed to replace them. The result was the red flag with the white crescent moon and star, which is the precursor to the modern flag of Turkey. A plain red flag was introduced as the civil ensign for all Ottoman subjects. The white crescent with an eight - pointed star on a red field is depicted as the flag of a "Turkish Man of War '' in Colton 's Delineation of Flags of All Nations (1862). Steenbergen 's Vlaggen van alle Natiën of the same year shows a six - pointed star. A plate in Webster 's Unabridged of 1882 shows the flag with an eight - pointed star labelled "Turkey, Man of war ''. The five - pointed star seems to have been present alongside these variants from at least 1857.
In addition to Ottoman imperial insignia, the symbol appears on the flag of the Bosnia Eyalet (1580 -- 1867) and the Bosnia Vilayet (1867 -- 1908), as well as the flag of the 1831 Bosnian revolt, and it also appeared in some representations of the medieval Bosnian coat of arms. In the late 19th century, "Star and Crescent '' came to be used as a metaphor for Ottoman rule in British literature. The increasingly ubiquitous fashion of using the star and crescent symbol in the ornamentation of Ottoman mosques and minarets led to a gradual association of the symbol with Islam in general in western Orientalism. The "Red Crescent '' emblem was used by volunteers of the International Committee of the Red Cross (ICRC) as early as 1877 during the Russo - Turkish War; it was officially adopted in 1929. After the foundation of the Republic of Turkey in 1923, the new Turkish state maintained the last flag of the Ottoman Empire. Proportional standardisations were introduced in the Turkish Flag Law (Turkish: Türk Bayrağı Kanunu) of May 29, 1936. Besides the most prominent example of Turkey (see Flag of Turkey), a number of other Ottoman successor states adopted the design during the 20th century, including the Emirate of Cyrenaica and the Kingdom of Libya, Algeria, Tunisia, and the proposed Arab Islamic Republic. The Ottoman flag of 1844 with a white "ay - yıldız '' (Turkish for "crescent - star '') on a red background continues to be in use as the flag of the Republic of Turkey with minor modifications; other Ottoman successor states using the star and crescent design in their flags are Tunisia (1956), Libya (1951, re-introduced 2011) and Algeria (1958). The modern emblem of Turkey shows the star outside the arc of the crescent, as if it were a "realistic '' depiction of a conjunction of Moon and Venus, while in the 19th century, the Ottoman star and crescent was occasionally still drawn as the (classical but "astronomically incorrect '') star - within - crescent. By contrast, the designs of both the flags of Algeria and Tunisia (as well as Mauritania and Pakistan) place the star within the crescent. The same symbol was used in other national flags introduced during the 20th century, including the flags of Azerbaijan (1918, re-introduced 1991), Pakistan (1947), Malaysia (1948), Mauritania (1959), and the partially recognized states of the Sahrawi Arab Democratic Republic (1976) and Northern Cyprus (1983). The symbol may also appear in the flags of cities or emirates, such as the emirate of Umm al - Quwain. National flags with a crescent alongside several stars include the flag of Singapore (1959), with a crescent and five stars (based on the five stars of the flag of the People 's Republic of China); the flag of Uzbekistan (1991), with a crescent and twelve stars; the flag of Turkmenistan (2001), with a crescent and five stars (representing five provinces); the flag of the Comoros (2002), with a crescent and four stars (representing four islands); and the flag of the Cocos (Keeling) Islands (2003), with a crescent and the southern cross. By the mid 20th century, the symbol came to be re-interpreted as the symbol of Islam or the Muslim community. This symbolism was embraced by movements of Arab nationalism or Islamism in the 1970s, such as the proposed Arab Islamic Republic (1974) and the American Nation of Islam (1973).
Cyril Glassé in his The New Encyclopedia of Islam (2001 edition, s.v. "Moon '') states that "in the language of conventional symbols, the crescent and star have become the symbols of Islam as much as the cross is the symbol of Christianity. '' By contrast, some religious Islamic publications emphasize that the crescent is rejected "by many Muslim scholars ''. The star and crescent as a traditional heraldic charge is in continued use in numerous municipal coats of arms (notably those based on the Leliwa (Tarnowski) coat of arms in the case of Polish municipalities). Examples include the coats of arms of Halle an der Saale, Germany (1327); Mińsk Mazowiecki, Przeworsk, Tarnobrzeg and Tarnów, Poland; Zagreb, Croatia; the flag of Portsmouth, England (18th century), with a crescent and an estoile (with eight wavy rays); Mattighofen, Austria (1781); Oelde, Germany (1910); and Niederglatt, Oberglatt and Niederweningen, Switzerland (all 1928).
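As a brief aside on the Unicode code point mentioned in the introduction of this article, here is a minimal sketch (not from the original text) showing how the character can be produced programmatically:

```python
import unicodedata

# U+262A in the Miscellaneous Symbols block is the "star and crescent" character.
symbol = "\u262A"                 # equivalently: chr(0x262A)
print(symbol)                     # ☪
print(hex(ord(symbol)))           # 0x262a
print(unicodedata.name(symbol))   # STAR AND CRESCENT
```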
what is name of the longest region of the large intestine
Large intestine - Wikipedia The large intestine, also known as the large bowel or colon, is the last part of the gastrointestinal tract and of the digestive system in vertebrates. Water is absorbed here and the remaining waste material is stored as feces before being removed by defecation. Most sources define the large intestine as the combination of the cecum, colon, rectum, and anal canal. Some other sources exclude the anal canal. In humans, the large intestine begins in the right iliac region of the pelvis, just at or below the waist, where it is joined to the end of the small intestine at the cecum, via the ileocecal valve. It then continues as the colon ascending the abdomen, across the width of the abdominal cavity as the transverse colon, and then descending to the rectum and its endpoint at the anal canal. Overall, in humans, the large intestine is about 1.5 metres (5 ft) long, which is about one - fifth of the whole length of the gastrointestinal tract. The colon is the last part of the digestive system. It extracts water and salt from solid wastes before they are eliminated from the body and is the site in which flora - aided (largely bacterial) fermentation of unabsorbed material occurs. Unlike the small intestine, the colon does not play a major role in absorption of foods and nutrients. About 1.5 litres or 45 ounces of water arrives in the colon each day. The length of the adult human male colon is 166 cm (range of 80 to 313 cm) on average; for females it is 155 cm (range of 80 to 214 cm). In mammals, the colon consists of five sections: the cecum plus the ascending colon, the transverse colon, the descending colon, the sigmoid colon and the rectum. The parts of the colon are either intraperitoneal or behind it in the retroperitoneum. Retroperitoneal organs in general do not have a complete covering of peritoneum, so they are fixed in location. Intraperitoneal organs are completely surrounded by peritoneum and are therefore mobile. Of the colon, the ascending colon, descending colon and rectum are retroperitoneal, while the cecum, appendix, transverse colon and sigmoid colon are intraperitoneal. This is important as it affects which organs can be easily accessed during surgery, such as a laparotomy. The average inner diameters of sections of the colon in centimeters (with ranges in parentheses) are: cecum 8.7 (8.0 -- 10.5), ascending colon 6.6 (6.0 -- 7.0), transverse colon 5.8 (5.0 -- 6.5), descending / sigmoid colon 6.3 (6.0 -- 6.8) and rectum near the rectal / sigmoid junction 5.7 (4.5 -- 7.5). The cecum is the first section of the colon and is involved in digestion, while the appendix, which develops embryologically from it, is a structure of the colon not involved in digestion and considered to be part of the gut - associated lymphoid tissue. The function of the appendix is uncertain, but some sources believe that the appendix has a role in housing a sample of the colon 's microflora, and is able to help repopulate the colon with bacteria if the microflora has been damaged during the course of an immune reaction. The appendix has also been shown to have a high concentration of lymphatic cells. The ascending colon is the first of four sections of the large intestine. It is connected to the small intestine by a section of bowel called the cecum. The ascending colon runs upwards through the abdominal cavity toward the transverse colon for approximately eight inches (20 cm).
One of the main functions of the colon is to remove the water and other key nutrients from waste material and recycle them. As the waste material exits the small intestine through the ileocecal valve, it moves into the cecum and then into the ascending colon, where this process of extraction starts. The unwanted waste material is moved upwards toward the transverse colon by the action of peristalsis. The ascending colon is sometimes attached to the appendix via Gerlach 's valve. In ruminants, the ascending colon is known as the spiral colon. Taking into account all ages and sexes, colon cancer occurs here most often (41 %). The transverse colon is the part of the colon from the hepatic flexure, also known as the right colic flexure (the turn of the colon by the liver), to the splenic flexure, also known as the left colic flexure (the turn of the colon by the spleen). The transverse colon hangs off the stomach, attached to it by a large fold of peritoneum called the greater omentum. On the posterior side, the transverse colon is connected to the posterior abdominal wall by a mesentery known as the transverse mesocolon. The transverse colon is encased in peritoneum, and is therefore mobile (unlike the parts of the colon immediately before and after it). The proximal two - thirds of the transverse colon is perfused by the middle colic artery, a branch of the superior mesenteric artery (SMA), while the latter third is supplied by branches of the inferior mesenteric artery (IMA). The "watershed '' area between these two blood supplies, which represents the embryologic division between the midgut and hindgut, is an area sensitive to ischemia. The descending colon is the part of the colon from the splenic flexure to the beginning of the sigmoid colon. One function of the descending colon in the digestive system is to store feces that will be emptied into the rectum. It is retroperitoneal in two - thirds of humans. In the other third, it has a (usually short) mesentery. The arterial supply comes via the left colic artery. The descending colon is also called the distal gut, as it is further along the gastrointestinal tract than the proximal gut. Gut flora are very dense in this region. The sigmoid colon is the part of the large intestine after the descending colon and before the rectum. The name sigmoid means S - shaped (see sigmoid; cf. sigmoid sinus). The walls of the sigmoid colon are muscular, and contract to increase the pressure inside the colon, causing the stool to move into the rectum. The sigmoid colon is supplied with blood from several branches (usually between 2 and 6) of the sigmoid arteries, which arise from the IMA. The IMA terminates as the superior rectal artery. Sigmoidoscopy is a common diagnostic technique used to examine the sigmoid colon. The rectum is the last section of the large intestine. It holds the formed feces awaiting elimination via defecation. The taenia coli run the length of the large intestine. Because the taenia coli are shorter than the large bowel itself, the colon becomes sacculated, forming the haustra of the colon, which are shelf - like intraluminal projections. Arterial supply to the colon comes from branches of the superior mesenteric artery (SMA) and inferior mesenteric artery (IMA). Flow between these two systems communicates via a "marginal artery '' that runs parallel to the colon for its entire length.
Historically, it has been believed that the arc of Riolan, or the meandering mesenteric artery (of Moskowitz), is a variable vessel connecting the proximal SMA to the proximal IMA that can be extremely important if either vessel is occluded. However, recent studies conducted with improved imaging technology have questioned the actual existence of this vessel, with some experts calling for the abolition of the terms from future medical literature. Venous drainage usually mirrors colonic arterial supply, with the inferior mesenteric vein draining into the splenic vein, and the superior mesenteric vein joining the splenic vein to form the hepatic portal vein that then enters the liver. Lymphatic drainage from the ascending colon and proximal two - thirds of the transverse colon is to the colic lymph nodes and the superior mesenteric lymph nodes, which drain into the cisterna chyli. The lymph from the distal one - third of the transverse colon, the descending colon, the sigmoid colon, and the upper rectum drains into the inferior mesenteric and colic lymph nodes. The lower rectum to the anal canal above the pectinate line drains to the internal iliac nodes. The anal canal below the pectinate line drains into the superficial inguinal nodes. The pectinate line only roughly marks this transition. One variation on the normal anatomy of the colon occurs when extra loops form, resulting in a colon that is up to five metres longer than normal. This condition, referred to as redundant colon, typically has no direct major health consequences, though rarely a volvulus occurs, resulting in obstruction and requiring immediate medical attention. A significant indirect health consequence is that use of a standard adult colonoscope is difficult and in some cases impossible when a redundant colon is present, though specialized variants on the instrument (including the pediatric variant) are useful in overcoming this problem. The wall of the large intestine is lined with simple columnar epithelium with invaginations. The invaginations are called the intestinal glands or colonic crypts. The colon crypts are shaped like microscopic thick - walled test tubes with a central hole down the length of the tube (the crypt lumen). Four tissue sections are described here, two cut across the long axes of the crypts and two cut parallel to the long axes. In these images the cells have been stained by immunohistochemistry to show a brown - orange color if the cells produce a mitochondrial protein called cytochrome c oxidase subunit I (CCOI). The nuclei of the cells (located at the outer edges of the cells lining the walls of the crypts) are stained blue - gray with haematoxylin. As seen in panels C and D, crypts are about 75 to about 110 cells long. Baker et al. found that the average crypt circumference is 23 cells. Thus, by these images, there are on average about 1,725 to 2,530 cells per colonic crypt. Nooteboom et al., measuring the number of cells in a small number of crypts, reported a range of 1,500 to 4,900 cells per colonic crypt. Cells are produced at the crypt base and migrate upward along the crypt axis before being shed into the colonic lumen days later. There are 5 to 6 stem cells at the bases of the crypts. As estimated from the image in panel A, there are about 100 colonic crypts per square millimeter of the colonic epithelium.
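The per - crypt cell counts just quoted follow from simple arithmetic, and the next paragraph extends the same reasoning to the whole colon. A quick sketch, using only the figures given in the text:

```python
# A crypt is modeled as a tube whose wall is a grid of cells, so
# total cells ≈ cells around the circumference × cells along the length.
cells_around = 23              # average crypt circumference in cells (Baker et al.)
for cells_long in (75, 110):   # crypt length range, in cells
    print(cells_around * cells_long)   # -> 1725 and 2530, matching the text
```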
Since the average length of the human colon is 160.5 cm and the average inner circumference of the colon is 6.2 cm, the inner epithelial surface of the human colon has an average area of about 995 sq cm, which includes 9,950,000 (close to 10 million) crypts. In the four tissue sections described here, many of the intestinal glands have cells with a mitochondrial DNA mutation in the CCOI gene and appear mostly white, with their main color being the blue - gray staining of the nuclei. As seen in panel B, a portion of the stem cells of three crypts appear to have a mutation in CCOI, so that 40 % to 50 % of the cells arising from those stem cells form a white segment in the cross - cut area. Overall, the percent of crypts deficient for CCOI is less than 1 % before age 40, but then increases linearly with age. The proportion of colonic crypts deficient for CCOI reaches, on average, 18 % in women and 23 % in men by 80 -- 84 years of age. Crypts of the colon can reproduce by fission, as seen in panel C, where a crypt is fissioning to form two crypts, and in panel B, where at least one crypt appears to be fissioning. Most crypts deficient in CCOI are in clusters of crypts (clones of crypts) with two or more CCOI - deficient crypts adjacent to each other (see panel D). Of the many thousands of protein - coding genes expressed in the large intestine, about 150 are specific to the mucous membrane in different regions; these include CEACAM7. The large intestine absorbs water and any remaining absorbable nutrients from the food before sending the indigestible matter to the rectum. The colon absorbs vitamins that are created by the colonic bacteria, such as vitamin K (especially important as the daily ingestion of vitamin K is not normally enough to maintain adequate blood coagulation), thiamine and riboflavin. It also compacts feces, and stores fecal matter in the rectum until it can be discharged via the anus in defecation. The large intestine also secretes K + and Cl −. Chloride secretion increases in cystic fibrosis. Recycling of various nutrients takes place in the colon. Examples include fermentation of carbohydrates, short chain fatty acids, and urea cycling. The appendix is attached to the inferior surface of the cecum, and contains a small amount of mucosa - associated lymphoid tissue which gives the appendix an undetermined role in immunity. However, the appendix is known to be important in fetal life as it contains endocrine cells that release biogenic amines and peptide hormones important for homeostasis during early growth and development. The appendix can be removed with no apparent damage or consequence to the patient. By the time the chyme has reached this tube, most nutrients and 90 % of the water have been absorbed by the body. At this point some electrolytes like sodium, magnesium, and chloride are left, as well as indigestible parts of ingested food (e.g., a large part of ingested amylose, starch which has been shielded from digestion heretofore, and dietary fiber, which is largely indigestible carbohydrate in either soluble or insoluble form). As the chyme moves through the large intestine, most of the remaining water is removed, while the chyme is mixed with mucus and bacteria (known as gut flora), and becomes feces. The ascending colon receives fecal material as a liquid. The muscles of the colon then move the watery waste material forward and slowly absorb all the excess water, causing the stools to gradually solidify as they move along into the descending colon.
The bacteria break down some of the fiber for their own nourishment and create acetate, propionate, and butyrate as waste products, which in turn are used by the cell lining of the colon for nourishment. No protein is made available. In humans, perhaps 10 % of the undigested carbohydrate thus becomes available, though this may vary with diet; in other animals, including other apes and primates, who have proportionally larger colons, more is made available, thus permitting a higher portion of plant material in the diet. The large intestine produces no digestive enzymes -- chemical digestion is completed in the small intestine before the chyme reaches the large intestine. The pH in the colon varies between 5.5 and 7 (slightly acidic to neutral). Water absorption at the colon typically proceeds against a transmucosal osmotic pressure gradient. Standing gradient osmosis is the reabsorption of water against the osmotic gradient in the intestines. Cells occupying the intestinal lining pump sodium ions into the intercellular space, raising the osmolarity of the intercellular fluid. This hypertonic fluid creates an osmotic pressure that drives water into the lateral intercellular spaces by osmosis via tight junctions and adjacent cells; the water then moves across the basement membrane and into the capillaries, while more sodium ions are pumped again into the intercellular fluid. Although water travels down an osmotic gradient in each individual step, overall, water usually travels against the osmotic gradient due to the pumping of sodium ions into the intercellular fluid. This allows the large intestine to absorb water despite the blood in capillaries being hypotonic compared to the fluid within the intestinal lumen. The large intestine houses over 700 species of bacteria that perform a variety of functions, as well as fungi, protozoa, and archaea. Species diversity varies by geography and diet. The microbes in a human distal gut often number in the vicinity of 100 trillion, and can weigh around 200 grams (0.44 pounds). This mass of mostly symbiotic microbes has recently been called the latest human organ to be "discovered '', or, in other words, the "forgotten organ ''. The large intestine absorbs some of the products formed by the bacteria inhabiting this region. Undigested polysaccharides (fiber) are metabolized to short - chain fatty acids by bacteria in the large intestine and absorbed by passive diffusion. The bicarbonate that the large intestine secretes helps to neutralize the increased acidity resulting from the formation of these fatty acids. These bacteria also produce large amounts of vitamins, especially vitamin K and biotin (a B vitamin), for absorption into the blood. Although this source of vitamins, in general, provides only a small part of the daily requirement, it makes a significant contribution when dietary vitamin intake is low. An individual who depends on absorption of vitamins formed by bacteria in the large intestine may become vitamin - deficient if treated with antibiotics that inhibit the vitamin - producing species of bacteria as well as the intended disease - causing bacteria. Other bacterial products include gas (flatus), which is a mixture of nitrogen and carbon dioxide, with small amounts of the gases hydrogen, methane, and hydrogen sulfide. Bacterial fermentation of undigested polysaccharides produces these. Some of the fecal odor is due to indoles, metabolized from the amino acid tryptophan.
The normal flora is also essential in the development of certain tissues, including the cecum and lymphatics. They are also involved in the production of cross-reactive antibodies. These are antibodies produced by the immune system against the normal flora that are also effective against related pathogens, thereby preventing infection or invasion. The two most prevalent phyla in the colon are Firmicutes and Bacteroidetes. The ratio between the two seems to vary widely, as reported by the Human Microbiome Project. Bacteroides are implicated in the initiation of colitis and colon cancer. Bifidobacteria are also abundant, and are often described as "friendly bacteria ''. A mucus layer protects the large intestine from attacks from colonic commensal bacteria. Colonoscopy is the endoscopic examination of the large intestine and the distal part of the small bowel with a CCD camera or a fiber optic camera on a flexible tube passed through the anus. It can provide a visual diagnosis (e.g. ulceration, polyps) and grants the opportunity for biopsy or removal of suspected colorectal cancer lesions. Colonoscopy can remove polyps as small as one millimetre or less. Once polyps are removed, they can be studied with the aid of a microscope to determine if they are precancerous or not. It takes 15 years or less for a polyp to turn cancerous. Colonoscopy is similar to sigmoidoscopy -- the difference being related to which parts of the colon each can examine. A colonoscopy allows an examination of the entire colon (1200 -- 1500 mm in length). A sigmoidoscopy allows an examination of the distal portion (about 600 mm) of the colon, which may be sufficient because benefits to cancer survival of colonoscopy have been limited to the detection of lesions in the distal portion of the colon. A sigmoidoscopy is often used as a screening procedure for a full colonoscopy, often done in conjunction with a fecal occult blood test (FOBT). About 5 % of these screened patients are referred to colonoscopy. Virtual colonoscopy, which uses 2D and 3D imagery reconstructed from computed tomography (CT) scans or from nuclear magnetic resonance (MR) scans, is also possible, as a totally non-invasive medical test, although it is not standard and is still under investigation regarding its diagnostic abilities. Furthermore, virtual colonoscopy does not allow for therapeutic maneuvers such as polyp / tumour removal or biopsy, nor visualization of lesions smaller than 5 millimeters. If a growth or polyp is detected using CT colonography, a standard colonoscopy would still need to be performed. Additionally, surgeons have lately been using the term pouchoscopy to refer to a colonoscopy of the ileo - anal pouch. The large intestine is truly distinct only in tetrapods, in which it is almost always separated from the small intestine by an ileocaecal valve. In most vertebrates, however, it is a relatively short structure running directly to the anus, although noticeably wider than the small intestine. Although the caecum is present in most amniotes, only in mammals does the remainder of the large intestine develop into a true colon. In some small mammals, the colon is straight, as it is in other tetrapods, but, in the majority of mammalian species, it is divided into ascending and descending portions; a distinct transverse colon is typically present only in primates. However, the taeniae coli and accompanying haustra are not found in either carnivorans or ruminants.
The rectum of mammals (other than monotremes) is derived from the cloaca of other vertebrates, and is, therefore, not truly homologous with the "rectum '' found in these species. In fish, there is no true large intestine, but simply a short rectum connecting the end of the digestive part of the gut to the cloaca. In sharks, this includes a rectal gland that secretes salt to help the animal maintain osmotic balance with the seawater. The gland somewhat resembles a caecum in structure, but is not a homologous structure. This article incorporates text in the public domain from page 1177 of the 20th edition of Gray 's Anatomy (1918).
what does the word product mean in math
Product (mathematics) - wikipedia In mathematics, a product is the result of multiplying, or an expression that identifies factors to be multiplied. Thus, for instance, 6 is the product of 2 and 3 (the result of multiplication), and x ⋅ (2 + x) is the product of x and (2 + x) (indicating that the two factors should be multiplied together). The order in which real or complex numbers are multiplied has no bearing on the product; this is known as the commutative law of multiplication. When matrices or members of various other associative algebras are multiplied, the product usually depends on the order of the factors. Matrix multiplication, for example, and multiplication in other algebras is in general non-commutative. There are many different kinds of products in mathematics: besides being able to multiply just numbers, polynomials or matrices, one can also define products on many different algebraic structures. An overview of these different kinds of products is given here. Placing several stones into a rectangular pattern with r rows and s columns gives r ⋅ s stones. Integers allow positive and negative numbers. The two numbers are multiplied just like natural numbers, except that we need an additional rule for the signs: the product of two numbers with the same sign is positive, and the product of two numbers with opposite signs is negative; for example, (− 2) ⋅ (− 3) = 6 while (− 2) ⋅ 3 = − 6. Two fractions can be multiplied by multiplying their numerators and denominators: (a / b) ⋅ (c / d) = (a ⋅ c) / (b ⋅ d). For a rigorous definition of the product of two real numbers see Construction of the real numbers. Two complex numbers can be multiplied by the distributive law and the fact that i ^ 2 = − 1, as follows: (a + b i) ⋅ (c + d i) = (a c − b d) + (a d + b c) i. Complex numbers can be written in polar coordinates: z = r (cos φ + i sin φ). Furthermore, the product of z 1 = r 1 (cos φ 1 + i sin φ 1) and z 2 = r 2 (cos φ 2 + i sin φ 2) is z 1 z 2 = r 1 r 2 (cos (φ 1 + φ 2) + i sin (φ 1 + φ 2)). The geometric meaning is that we multiply the magnitudes and add the angles. The product of two quaternions can be found in the article on quaternions. However, it is interesting to note that in this case, a ⋅ b and b ⋅ a are in general different. The product operator for the product of a sequence is denoted by the capital Greek letter pi ∏ (in analogy to the use of the capital Sigma ∑ as summation symbol). The product of a sequence consisting of only one number is just that number itself. The product of no factors at all is known as the empty product, and is equal to 1. Commutative rings have a product operation. Residue classes in the rings Z / N Z can be added and multiplied: (a + N Z) + (b + N Z) = (a + b) + N Z, and (a + N Z) ⋅ (b + N Z) = (a ⋅ b) + N Z. Functions to the real numbers can be added or multiplied by adding or multiplying their outputs: (f + g) (x) = f (x) + g (x) and (f ⋅ g) (x) = f (x) ⋅ g (x). Two functions from the reals to itself can be multiplied in another way, called the convolution. If both f and g are integrable, then their convolution (f ∗ g) (t) = ∫ f (τ) g (t − τ) d τ is well defined. Under the Fourier transform, convolution becomes point-wise function multiplication. The product of two polynomials ∑ a i x ^ i and ∑ b j x ^ j is given by ∑ c k x ^ k, with c k = ∑ (over i + j = k) a i b j. There are many different kinds of products in linear algebra; some of these have confusingly similar names (outer product, exterior product) but have very different meanings. Others have very different names (outer product, tensor product, Kronecker product) but convey essentially the same idea. A brief overview of these is given here. By the very definition of a vector space, one can form the product of any scalar with any vector, giving a map R × V → V.
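To make these rules concrete, here is a small Python sketch (not part of the original article) checking the fraction and complex product formulas, the polar - form rule of multiplying magnitudes and adding angles, the empty - product convention, and the polynomial product as a sum over i + j = k:

```python
import cmath
import math
from fractions import Fraction

# Fractions: multiply numerators and denominators.
assert Fraction(2, 3) * Fraction(5, 7) == Fraction(10, 21)

# Complex numbers: (a + bi)(c + di) = (ac - bd) + (ad + bc)i.
z1, z2 = complex(1, 2), complex(3, -1)
assert z1 * z2 == complex(1 * 3 - 2 * (-1), 1 * (-1) + 2 * 3)

# Polar form: magnitudes multiply, angles add (true here; in general the
# angle sum may need to be wrapped back into (-pi, pi]).
assert math.isclose(abs(z1 * z2), abs(z1) * abs(z2))
assert math.isclose(cmath.phase(z1 * z2), cmath.phase(z1) + cmath.phase(z2))

# Product of a sequence (capital-pi notation); the empty product is 1.
assert math.prod([2, 3, 4]) == 24
assert math.prod([]) == 1

# Polynomial product: coefficients combine as c_k = sum over i+j=k of a_i*b_j.
a = [1, 2]      # 1 + 2x
b = [3, 0, 1]   # 3 + x^2
c = [0] * (len(a) + len(b) - 1)
for i, ai in enumerate(a):
    for j, bj in enumerate(b):
        c[i + j] += ai * bj
assert c == [3, 6, 1, 2]   # 3 + 6x + x^2 + 2x^3
```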
A scalar product is a bilinear map V × V → R satisfying the condition that v ⋅ v > 0 for all 0 ≠ v ∈ V. From the scalar product, one can define a norm by letting ∥ v ∥ := √ (v ⋅ v). The scalar product also allows one to define an angle between two vectors, via cos ∠ (v, w) = (v ⋅ w) / (∥ v ∥ ∥ w ∥). In n - dimensional Euclidean space, the standard scalar product (called the dot product) is given by x ⋅ y = x 1 y 1 + x 2 y 2 + ⋯ + x n y n. The cross product of two vectors in 3 - dimensions is a vector perpendicular to the two factors, with length equal to the area of the parallelogram spanned by the two factors. In components, u × v = (u 2 v 3 − u 3 v 2, u 3 v 1 − u 1 v 3, u 1 v 2 − u 2 v 1); the cross product can also be expressed as a formal determinant whose first row holds the standard unit vectors and whose remaining rows hold the components of the two factors. A linear mapping can be defined as a function f between two vector spaces V and W with underlying field F, satisfying f (λ v + μ w) = λ f (v) + μ f (w) for all scalars λ, μ and all vectors v, w. If one only considers finite dimensional vector spaces, then f is determined by its values on a basis: writing v = v ^ i b i in a basis (b i) of V (with v ^ i the component of v on b i, and the Einstein summation convention applied), linearity gives f (v) = v ^ i f (b i), and expanding each f (b i) in a basis of W yields the matrix of f. Now we consider the composition of two linear mappings between finite dimensional vector spaces. Let the linear mapping f map V to W, and let the linear mapping g map W to U. Then the composition acts as (g ∘ f) (v) = g (f (v)), or in matrix form (g ∘ f) (v) = G F v, in which F and G are the matrices representing f and g in the chosen bases. The composition of more than two linear mappings can be similarly represented by a chain of matrix multiplication. Given two matrices A = (a i j) of size s × r and B = (b j k) of size r × t, their product is the s × t matrix A B with entries (A B) i k = ∑ j a i j b j k. There is a relationship between the composition of linear functions and the product of two matrices. To see this, let r = dim (U), s = dim (V) and t = dim (W) be the (finite) dimensions of vector spaces U, V and W. Let U = (u 1, … u r) be a basis of U, V = (v 1, … v s) be a basis of V and W = (w 1, … w t) be a basis of W. In terms of these bases, let A = M V U (f) ∈ R ^ (s × r) be the matrix representing f: U → V and B = M W V (g) ∈ R ^ (t × s) be the matrix representing g: V → W. Then B ⋅ A = M W U (g ∘ f) ∈ R ^ (t × r) is the matrix representing g ∘ f: U → W. In other words: the matrix product is the description in coordinates of the composition of linear functions. Given two finite dimensional vector spaces V and W, the tensor product of them can be defined as a (2, 0) - tensor satisfying (v ⊗ w) (m, n) = m (v) n (w) for all m ∈ V ∗ and n ∈ W ∗, where V ∗ and W ∗ denote the dual spaces of V and W. For infinite - dimensional vector spaces, one also has the tensor product of Hilbert spaces and the topological tensor product. The tensor product, outer product and Kronecker product all convey the same general idea. The differences between these are that the Kronecker product is just a tensor product of matrices, with respect to a previously - fixed basis, whereas the tensor product is usually given in its intrinsic definition. The outer product is simply the Kronecker product, limited to vectors (instead of matrices). In general, whenever one has two mathematical objects that can be combined in a way that behaves like a linear algebra tensor product, then this can be most generally understood as the internal product of a monoidal category.
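The correspondence between matrix multiplication and composition of linear maps can be checked numerically. A minimal sketch in plain Python, where matmul and apply are ad hoc helper names rather than functions from any particular library:

```python
# Minimal sketch: the matrix product B·A represents the composition g∘f.

def matmul(B, A):
    """(B·A)[i][k] = sum over j of B[i][j] * A[j][k]."""
    return [[sum(B[i][j] * A[j][k] for j in range(len(A)))
             for k in range(len(A[0]))] for i in range(len(B))]

def apply(M, v):
    """Apply the matrix M to the column vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[1, 2],        # represents f: U -> V  (U 2-dimensional, V 3-dimensional)
     [0, 1],
     [4, 0]]
B = [[1, 0, 2],     # represents g: V -> W  (W 2-dimensional)
     [3, 1, 0]]

v = [5, 7]
# Composing the maps and multiplying the matrices give the same result:
assert apply(B, apply(A, v)) == apply(matmul(B, A), v)
```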
That is, the monoidal category captures precisely the meaning of a tensor product; it captures exactly the notion of why tensor products behave the way they do. More precisely, a monoidal category is the class of all things (of a given type) that have a tensor product. There are several other kinds of products in linear algebra beyond these. In set theory, a Cartesian product is a mathematical operation which returns a set (or product set) from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b) where a ∈ A and b ∈ B. The class of all things (of a given type) that have Cartesian products is called a Cartesian category. Many of these are Cartesian closed categories. Sets are an example of such objects. The empty product on numbers and most algebraic structures has the value of 1 (the identity element of multiplication), just like the empty sum has the value of 0 (the identity element of addition). However, the concept of the empty product is more general, and requires special treatment in logic, set theory, computer programming and category theory. Products are also defined over other kinds of algebraic structures, for example the direct product of groups. A few of the above products are examples of the general notion of an internal product in a monoidal category; the rest are describable by the general notion of a product in category theory. All of the previous examples are special cases or examples of the general notion of a product. For the general treatment of the concept of a product, see product (category theory), which describes how to combine two objects of some kind to create an object, possibly of a different kind. Beyond these, category theory has further product - like notions of its own.
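A brief illustration of the Cartesian product and the empty - product convention, as a sketch using Python's standard library:

```python
from itertools import product

# Cartesian product of sets: all ordered pairs (a, b) with a in A and b in B.
A, B = {1, 2}, {"x", "y"}
assert set(product(A, B)) == {(1, "x"), (1, "y"), (2, "x"), (2, "y")}
assert len(list(product(A, B))) == len(A) * len(B)   # |A × B| = |A| · |B|

# The empty Cartesian product contains exactly one element (the empty tuple),
# the set-theoretic counterpart of the empty product of numbers being 1.
assert list(product()) == [()]

# Outer product of two vectors: the matrix of all pairwise products.
u, v = [1, 2, 3], [10, 20]
outer = [[a * b for b in v] for a in u]
assert outer == [[10, 20], [20, 40], [30, 60]]
```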
race the power of an illusion movie summary
Race: The Power of an Illusion - wikipedia Race: The Power of an Illusion is a three - part documentary series produced by California Newsreel that investigates the idea of race in society, science and history. The educational documentary originally screened on American public television and was primarily funded by the Corporation for Public Broadcasting, the Ford Foundation and PBS. The division of people into distinct categories -- "white '', "black '', "yellow '', "red '' -- has become so widely accepted and so deeply rooted that most people do not think to question its veracity. This three - hour documentary challenges the idea of race as biology and traces our current notions to the 19th century. It also demonstrates how race nevertheless has a continuing impact through institutions and social policies. The first episode examines the contemporary science -- including genetics -- that challenges our common - sense assumptions that human beings can be bundled into three or four fundamentally different groups according to their physical traits. The second episode uncovers the roots of the race concept in North America, the 19th - century science that legitimated it, and how it came to be held so fiercely in the Western imagination. The episode is an eye - opening tale of how race served to rationalize, even justify, American social inequalities as "natural ''. The third episode asks: if race is not biology, what is it? This episode uncovers how race resides not in nature but in politics, economics and culture. It reveals how our social institutions "make '' race by disproportionately channeling resources, power, status and wealth to white people.
when a person is at rest approximately how much blood is being held within the veins
Blood - wikipedia Blood is a body fluid in humans and other animals that delivers necessary substances such as nutrients and oxygen to the cells and transports metabolic waste products away from those same cells. In vertebrates, it is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55 % of blood fluid, is mostly water (92 % by volume) and contains proteins, glucose, mineral ions, hormones and carbon dioxide (plasma being the main medium for excretory product transportation); the blood cells are suspended in this plasma. Albumin is the main protein in plasma, and it functions to regulate the colloidal osmotic pressure of blood. The blood cells are mainly red blood cells (also called RBCs or erythrocytes), white blood cells (also called WBCs or leukocytes) and platelets (also called thrombocytes). The most abundant cells in vertebrate blood are red blood cells. These contain hemoglobin, an iron - containing protein, which facilitates oxygen transport by reversibly binding to this respiratory gas and greatly increasing its solubility in blood. In contrast, carbon dioxide is mostly transported extracellularly as bicarbonate ion transported in plasma. Vertebrate blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated. Some animals, such as crustaceans and mollusks, use hemocyanin to carry oxygen, instead of hemoglobin. Insects and some mollusks use a fluid called hemolymph instead of blood, the difference being that hemolymph is not contained in a closed circulatory system. In most insects, this "blood '' does not contain oxygen - carrying molecules such as hemoglobin because their bodies are small enough for their tracheal system to suffice for supplying oxygen. Jawed vertebrates have an adaptive immune system, based largely on white blood cells. White blood cells help to resist infections and parasites. Platelets are important in the clotting of blood. Arthropods, using hemolymph, have hemocytes as part of their immune system. Blood is circulated around the body through blood vessels by the pumping action of the heart. In animals with lungs, arterial blood carries oxygen from inhaled air to the tissues of the body, and venous blood carries carbon dioxide, a waste product of metabolism produced by cells, from the tissues to the lungs to be exhaled. Medical terms related to blood often begin with hemo - or hemato - (also spelled haemo - and haemato -) from the Greek word αἷμα (haima) for "blood ''. In terms of anatomy and histology, blood is considered a specialized form of connective tissue, given its origin in the bones and the presence of potential molecular fibers in the form of fibrinogen. Blood performs many important functions within the body, including the supply of oxygen and nutrients to tissues, the removal of metabolic waste, immune defence by white blood cells, clotting by platelets, the transport of hormones, and the regulation of body pH and core temperature. Blood accounts for 7 % of the human body weight, with an average density around 1060 kg / m3, very close to pure water 's density of 1000 kg / m3. The average adult has a blood volume of roughly 5 litres (11 US pt), which is composed of plasma and several kinds of cells. These blood cells (which are also called corpuscles or "formed elements '') consist of erythrocytes (red blood cells, RBCs), leukocytes (white blood cells), and thrombocytes (platelets). By volume, the red blood cells constitute about 45 % of whole blood, the plasma about 54.3 %, and white cells about 0.7 %. Whole blood (plasma and cells) exhibits non-Newtonian fluid dynamics.
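As a quick plausibility check of these figures, here is a sketch in Python; the 70 kg body mass is an assumed typical value that does not appear in the text:

```python
# Blood is ~7% of body weight with a density of ~1060 kg/m^3 (1.060 kg/L).
body_mass_kg = 70                       # assumed typical adult
blood_mass_kg = 0.07 * body_mass_kg     # ~4.9 kg of blood
blood_volume_l = blood_mass_kg / 1.060  # volume = mass / density

print(round(blood_volume_l, 1))         # ~4.6 L, consistent with "roughly 5 litres"
print(round(blood_volume_l * 0.45, 1))  # red-cell volume at ~45% of whole blood, ~2.1 L
```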
If all human hemoglobin were free in the plasma rather than being contained in RBCs, the circulatory fluid would be too viscous for the cardiovascular system to function effectively. (Figure captions from the original article: human blood fractionated by centrifugation, with plasma (upper, yellow layer), buffy coat (middle, thin white layer) and erythrocyte layer (bottom, red layer); a diagram of the blood circulation, red = oxygenated, blue = deoxygenated; an illustration of the formed elements of blood; and two tubes of EDTA - anticoagulated blood, the left tube with the RBCs settled at the bottom after standing, the right tube freshly drawn. Infobox values: red cell fraction 45 ± 7 % (38 -- 52 %) for males and 42 ± 5 % (37 -- 47 %) for females; hemoglobin oxygen saturation 98 -- 99 % oxygenated, 75 % deoxygenated.) About 55 % of blood is blood plasma, a fluid that is the blood 's liquid medium, which by itself is straw - yellow in color. The blood plasma volume totals 2.7 -- 3.0 liters (2.8 -- 3.2 quarts) in an average human. It is essentially an aqueous solution containing 92 % water, 8 % blood plasma proteins, and trace amounts of other materials. Plasma circulates dissolved nutrients, such as glucose, amino acids, and fatty acids (dissolved in the blood or bound to plasma proteins), and removes waste products, such as carbon dioxide, urea, and lactic acid. Other important components include clotting proteins (such as fibrinogen), albumin and immunoglobulins. The term serum refers to plasma from which the clotting proteins have been removed. Most of the proteins remaining are albumin and immunoglobulins. Blood pH is regulated to stay within the narrow range of 7.35 to 7.45, making it slightly basic. Blood that has a pH below 7.35 is too acidic, whereas blood pH above 7.45 is too basic. Blood pH, partial pressure of oxygen (pO 2), partial pressure of carbon dioxide (pCO 2), and bicarbonate (HCO 3 −) are carefully regulated by a number of homeostatic mechanisms, which exert their influence principally through the respiratory system and the urinary system to control the acid - base balance and respiration. An arterial blood gas test measures these. Plasma also circulates hormones, transmitting their messages to various tissues. The list of normal reference ranges for various blood electrolytes is extensive. Human blood is typical of that of mammals, although the precise details concerning cell numbers, size, protein structure, and so on, vary somewhat between species. In non-mammalian vertebrates, however, there are some key differences; most notably, the red blood cells of non-mammalian vertebrates retain their cell nucleus, unlike the enucleated red cells of mammals. Blood is circulated around the body through blood vessels by the pumping action of the heart. In humans, blood is pumped from the strong left ventricle of the heart through arteries to peripheral tissues and returns to the right atrium of the heart through veins. It then enters the right ventricle and is pumped through the pulmonary artery to the lungs and returns to the left atrium through the pulmonary veins. Blood then enters the left ventricle to be circulated again. Arterial blood carries oxygen from inhaled air to all of the cells of the body, and venous blood carries carbon dioxide, a waste product of metabolism by cells, to the lungs to be exhaled. One exception is the pulmonary arteries, which contain the most deoxygenated blood in the body, while the pulmonary veins contain oxygenated blood. Additional return flow may be generated by the movement of skeletal muscles, which can compress veins and push blood through the valves in veins toward the right atrium. The blood circulation was famously described by William Harvey in 1628.
In vertebrates, the various cells of blood are made in the bone marrow in a process called hematopoiesis, which includes erythropoiesis, the production of red blood cells, and myelopoiesis, the production of white blood cells and platelets. During childhood, almost every human bone produces red blood cells; as adults, red blood cell production is limited to the larger bones: the bodies of the vertebrae, the breastbone (sternum), the ribcage, the pelvic bones, and the bones of the upper arms and legs. In addition, during childhood, the thymus gland, found in the mediastinum, is an important source of T lymphocytes. The proteinaceous component of blood (including clotting proteins) is produced predominantly by the liver, while hormones are produced by the endocrine glands and the watery fraction is regulated by the hypothalamus and maintained by the kidney. Healthy erythrocytes have a plasma life of about 120 days before they are degraded by the spleen, and the Kupffer cells in the liver. The liver also clears some proteins, lipids, and amino acids. The kidney actively secretes waste products into the urine. About 98.5 % of the oxygen in a sample of arterial blood in a healthy human breathing air at sea - level pressure is chemically combined with the hemoglobin. About 1.5 % is physically dissolved in the other blood liquids and not connected to hemoglobin. The hemoglobin molecule is the primary transporter of oxygen in mammals and many other species (for exceptions, see below). Hemoglobin has an oxygen binding capacity between 1.36 and 1.40 ml O 2 per gram hemoglobin, which increases the total blood oxygen capacity seventyfold compared to what could be carried in solution alone, given oxygen 's solubility of 0.03 ml O 2 per liter of blood per mm Hg partial pressure of oxygen (about 100 mm Hg in arteries). With the exception of pulmonary and umbilical arteries and their corresponding veins, arteries carry oxygenated blood away from the heart and deliver it to the body via arterioles and capillaries, where the oxygen is consumed; afterwards, venules and veins carry deoxygenated blood back to the heart. Under normal conditions in adult humans at rest, hemoglobin in blood leaving the lungs is about 98 -- 99 % saturated with oxygen, achieving an oxygen delivery of between 950 and 1150 ml / min to the body. In a healthy adult at rest, oxygen consumption is approximately 200 -- 250 ml / min, and deoxygenated blood returning to the lungs is still roughly 75 % (70 to 78 %) saturated. Increased oxygen consumption during sustained exercise reduces the oxygen saturation of venous blood, which can reach less than 15 % in a trained athlete; although breathing rate and blood flow increase to compensate, oxygen saturation in arterial blood can drop to 95 % or less under these conditions. Oxygen saturation this low is considered dangerous in an individual at rest (for instance, during surgery under anesthesia). Sustained hypoxia (oxygenation less than 90 %) is dangerous to health, and severe hypoxia (saturations less than 30 %) may be rapidly fatal. A fetus, receiving oxygen via the placenta, is exposed to much lower oxygen pressures (about 21 % of the level found in an adult 's lungs), so fetuses produce another form of hemoglobin with a much higher affinity for oxygen (hemoglobin F) to function under these conditions.
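The oxygen - carrying figures above can be combined into the standard back - of - the - envelope oxygen - content calculation. A sketch, where the hemoglobin concentration (150 g / L) and cardiac output (5 L / min) are assumed typical values not given in the text:

```python
# Oxygen content of arterial blood = hemoglobin-bound O2 + dissolved O2.
hb_g_per_l = 150        # assumed typical hemoglobin concentration (g/L)
binding = 1.36          # ml O2 per gram of hemoglobin (1.36-1.40 per the text)
saturation = 0.985      # arterial hemoglobin ~98-99% saturated (from the text)
solubility = 0.03       # ml O2 per liter of blood per mm Hg (from the text)
pa_o2 = 100             # arterial partial pressure of O2, mm Hg (from the text)

bound = binding * hb_g_per_l * saturation   # ~201 ml O2 per liter of blood
dissolved = solubility * pa_o2              # ~3 ml O2 per liter (~1.5% of total)

cardiac_output_l_min = 5                    # assumed typical value
delivery = (bound + dissolved) * cardiac_output_l_min
print(round(delivery))   # ~1020 ml O2/min, inside the quoted 950-1150 ml/min range
```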
Carbon dioxide (CO 2) is carried in blood in three different ways; the exact percentages vary depending on whether it is arterial or venous blood. Most of it (about 70 %) is converted to bicarbonate ions HCO 3 − by the enzyme carbonic anhydrase in the red blood cells, by the reaction CO 2 + H 2 O → H 2 CO 3 → H + + HCO 3 −; about 7 % is dissolved in the plasma; and about 23 % is bound to hemoglobin as carbamino compounds. Hemoglobin, the main oxygen - carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the CO 2 bound to hemoglobin does not bind to the same site as oxygen. Instead, it combines with the N - terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of CO 2 decreases the amount of oxygen that is bound for a given partial pressure of oxygen. The decreased binding of carbon dioxide in the blood due to increased oxygen levels is known as the Haldane effect, and is important in the transport of carbon dioxide from the tissues to the lungs. A rise in the partial pressure of CO 2 or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect. Some oxyhemoglobin loses oxygen and becomes deoxyhemoglobin. Deoxyhemoglobin binds most of the hydrogen ions, as it has a much greater affinity for hydrogen than does oxyhemoglobin. In mammals, blood is in equilibrium with lymph, which is continuously formed in tissues from blood by capillary ultrafiltration. Lymph is collected by a system of small lymphatic vessels and directed to the thoracic duct, which drains into the left subclavian vein, where lymph rejoins the systemic blood circulation. Blood circulation transports heat throughout the body, and adjustments to this flow are an important part of thermoregulation. Increasing blood flow to the surface (e.g., during warm weather or strenuous exercise) causes warmer skin, resulting in faster heat loss. In contrast, when the external temperature is low, blood flow to the extremities and surface of the skin is reduced to prevent heat loss, and blood is preferentially circulated to the important organs of the body. The rate of blood flow varies greatly between different organs. The liver has the most abundant blood supply, with an approximate flow of 1350 ml / min. The kidney and brain are the second and the third most supplied organs, with 1100 ml / min and ~ 700 ml / min, respectively. Relative rates of blood flow per 100 g of tissue are different, with kidney, adrenal gland and thyroid being the first, second and third most supplied tissues, respectively. The restriction of blood flow can also be used in specialized tissues to cause engorgement, resulting in an erection of that tissue; examples are the erectile tissue in the penis and clitoris. Another example of a hydraulic function is the jumping spider, in which blood forced into the legs under pressure causes them to straighten for a powerful jump, without the need for bulky muscular legs. In insects, the blood (more properly called hemolymph) is not involved in the transport of oxygen. (Openings called tracheae allow oxygen from the air to diffuse directly to the tissues.) Insect blood moves nutrients to the tissues and removes waste products in an open system. Other invertebrates use respiratory proteins to increase the oxygen - carrying capacity. Hemoglobin is the most common respiratory protein found in nature. Hemocyanin (blue) contains copper and is found in crustaceans and mollusks. It is thought that tunicates (sea squirts) might use vanabins (proteins containing vanadium) for respiratory pigment (bright - green, blue, or orange).
In mammals, blood is in equilibrium with lymph, which is continuously formed in tissues from blood by capillary ultrafiltration. Lymph is collected by a system of small lymphatic vessels and directed to the thoracic duct, which drains into the left subclavian vein, where lymph rejoins the systemic blood circulation. Blood circulation transports heat throughout the body, and adjustments to this flow are an important part of thermoregulation. Increasing blood flow to the surface (e.g., during warm weather or strenuous exercise) causes warmer skin, resulting in faster heat loss. In contrast, when the external temperature is low, blood flow to the extremities and surface of the skin is reduced to prevent heat loss, and blood is preferentially circulated to the important organs of the body. The rate of blood flow varies greatly between different organs. The liver has the most abundant blood supply, with an approximate flow of 1350 ml / min. The kidney and brain are the second and third most supplied organs, with 1100 ml / min and ~ 700 ml / min, respectively. Relative rates of blood flow per 100 g of tissue are different, with the kidney, adrenal gland and thyroid being the first, second and third most supplied tissues, respectively. The restriction of blood flow can also be used in specialized tissues to cause engorgement, resulting in an erection of that tissue; examples are the erectile tissue in the penis and clitoris. Another example of a hydraulic function is the jumping spider, in which blood forced into the legs under pressure causes them to straighten for a powerful jump, without the need for bulky muscular legs. In insects, the blood (more properly called hemolymph) is not involved in the transport of oxygen. (Openings called tracheae allow oxygen from the air to diffuse directly to the tissues.) Insect blood moves nutrients to the tissues and removes waste products in an open system. Other invertebrates use respiratory proteins to increase the oxygen - carrying capacity. Hemoglobin is the most common respiratory protein found in nature. Hemocyanin (blue) contains copper and is found in crustaceans and mollusks. It is thought that tunicates (sea squirts) might use vanabins (proteins containing vanadium) for respiratory pigment (bright - green, blue, or orange). In many invertebrates, these oxygen - carrying proteins are freely soluble in the blood; in vertebrates they are contained in specialized red blood cells, allowing for a higher concentration of respiratory pigments without increasing viscosity or damaging blood - filtering organs like the kidneys. Giant tube worms have unusual hemoglobins that allow them to live in extraordinary environments; these hemoglobins also carry sulfides that are normally fatal in other animals. The coloring matter of blood (hemochrome) is largely due to the protein in the blood responsible for oxygen transport. Different groups of organisms use different proteins. Hemoglobin is the principal determinant of the color of blood in vertebrates. Each molecule has four heme groups, and their interaction with various molecules alters the exact color. In vertebrates and other hemoglobin - using creatures, arterial blood and capillary blood are bright red, as oxygen imparts a strong red color to the heme group. Deoxygenated blood is a darker shade of red; this is present in veins, and can be seen during blood donation and when venous blood samples are taken. This is because the spectrum of light absorbed by hemoglobin differs between the oxygenated and deoxygenated states. Blood in carbon monoxide poisoning is bright red, because carbon monoxide causes the formation of carboxyhemoglobin. In cyanide poisoning, the body cannot utilize oxygen, so the venous blood remains oxygenated, increasing the redness. There are some conditions affecting the heme groups present in hemoglobin that can make the skin appear blue -- a symptom called cyanosis. If the heme is oxidized, methemoglobin, which is more brownish and cannot transport oxygen, is formed. In the rare condition sulfhemoglobinemia, arterial hemoglobin is partially oxygenated, and appears dark red with a bluish hue. Veins close to the surface of the skin appear blue for a variety of reasons; however, the factors that contribute to this alteration of color perception are related to the light - scattering properties of the skin and the processing of visual input by the visual cortex, rather than the actual color of the venous blood. Skinks in the genus Prasinohaema have green blood due to a buildup of the waste product biliverdin. The blood of most mollusks -- including cephalopods and gastropods -- as well as some arthropods, such as horseshoe crabs, is blue, as it contains the copper - containing protein hemocyanin at concentrations of about 50 grams per liter. Hemocyanin is colorless when deoxygenated and dark blue when oxygenated. The blood in the circulation of these creatures, which generally live in cold environments with low oxygen tensions, is grey - white to pale yellow, and it turns dark blue when exposed to the oxygen in the air, as seen when they bleed. This is due to the change in color of hemocyanin when it is oxidized. Hemocyanin carries oxygen in extracellular fluid, in contrast to the intracellular oxygen transport in mammals by hemoglobin in red blood cells. The blood of most annelid worms and some marine polychaetes uses chlorocruorin to transport oxygen; it is green in color in dilute solutions. Hemerythrin is used for oxygen transport in the marine invertebrates sipunculids, priapulids, brachiopods, and the annelid worm Magelona. Hemerythrin is violet - pink when oxygenated. The blood of some species of ascidians and tunicates, also known as sea squirts, contains proteins called vanabins.
These proteins are based on vanadium and give the creatures a concentration of vanadium in their bodies 100 times higher than that of the surrounding sea water. Unlike hemocyanin and hemoglobin, hemovanadin is not an oxygen carrier. When exposed to oxygen, however, vanabins turn a mustard yellow. Substances other than oxygen can bind to hemoglobin; in some cases, this can cause irreversible damage to the body. Carbon monoxide, for example, is extremely dangerous when carried into the blood via the lungs by inhalation, because it binds to hemoglobin far more tightly than oxygen does, forming carboxyhemoglobin, so that less hemoglobin is free to bind oxygen and fewer oxygen molecules can be transported by the blood. This can cause suffocation insidiously. A fire burning in an enclosed room with poor ventilation presents a serious hazard, since it can create a build - up of carbon monoxide in the air. Some carbon monoxide also binds to hemoglobin when smoking tobacco. Blood for transfusion is obtained from human donors by blood donation and stored in a blood bank. There are many different blood types in humans, the ABO blood group system and the Rhesus blood group system being the most important. Transfusion of blood of an incompatible blood group may cause severe, often fatal, complications, so crossmatching is done to ensure that a compatible blood product is transfused. Other blood products administered intravenously are platelets, blood plasma, cryoprecipitate, and specific coagulation factor concentrates. Many forms of medication (from antibiotics to chemotherapy) are administered intravenously, as they are not readily or adequately absorbed by the digestive tract. After severe acute blood loss, liquid preparations, generically known as plasma expanders, can be given intravenously: either solutions of salts (NaCl, KCl, CaCl2, etc.) at physiological concentrations, or colloidal solutions, such as dextrans, human serum albumin, or fresh frozen plasma. In these emergency situations, a plasma expander is a more effective life - saving procedure than a blood transfusion, because the metabolism of transfused red blood cells does not restart immediately after a transfusion. In modern evidence - based medicine, bloodletting is used in the management of a few rare diseases, including hemochromatosis and polycythemia. However, bloodletting and leeching were common unvalidated interventions used until the 19th century, as many diseases were incorrectly thought to be due to an excess of blood, according to Hippocratic medicine. According to the Oxford English Dictionary, the word "blood '' dates to the oldest English, circa 1000 CE. The word is derived from Middle English, which is derived from the Old English word blôd, which is akin to the Old High German word bluot, meaning blood. The modern German word is (das) Blut. Fåhræus (a Swedish physician who devised the erythrocyte sedimentation rate) suggested that the Ancient Greek system of humorism, wherein the body was thought to contain four distinct bodily fluids (associated with different temperaments), was based upon the observation of blood clotting in a transparent container. When blood is drawn in a glass container and left undisturbed for about an hour, four different layers can be seen. A dark clot forms at the bottom (the "black bile ''). Above the clot is a layer of red blood cells (the "blood ''). Above this is a whitish layer of white blood cells (the "phlegm ''). The top layer is clear yellow serum (the "yellow bile '').
The ABO blood group system was discovered in the year 1900 by Karl Landsteiner. Jan Janský is credited with the first classification of blood into the four types (A, B, AB, and O) in 1907, which remains in use today. In 1907 the first blood transfusion was performed that used the ABO system to predict compatibility. The first non-direct transfusion was performed on March 27, 1914. The Rhesus factor was discovered in 1937. Due to its importance to life, blood is associated with a large number of beliefs. One of the most basic is the use of blood as a symbol for family relationships through birth / parentage; to be "related by blood '' is to be related by ancestry or descendence, rather than marriage. This bears closely to bloodlines, and sayings such as "blood is thicker than water '' and "bad blood '', as well as "Blood brother ''. Blood is given particular emphasis in the Jewish and Christian religions, because Leviticus 17: 11 says "the life of a creature is in the blood. '' This phrase is part of the Levitical law forbidding the drinking of blood or eating meat with the blood still intact instead of being poured off. Mythic references to blood can sometimes be connected to the life - giving nature of blood, seen in such events as childbirth, as contrasted with the blood of injury or death. In many indigenous Australian Aboriginal peoples ' traditions, ochre (particularly red) and blood, both high in iron content and considered Maban, are applied to the bodies of dancers for ritual. As Lawlor states: In many Aboriginal rituals and ceremonies, red ochre is rubbed all over the naked bodies of the dancers. In secret, sacred male ceremonies, blood extracted from the veins of the participant 's arms is exchanged and rubbed on their bodies. Red ochre is used in similar ways in less - secret ceremonies. Blood is also used to fasten the feathers of birds onto people 's bodies. Bird feathers contain a protein that is highly magnetically sensitive. Lawlor comments that blood employed in this fashion is held by these peoples to attune the dancers to the invisible energetic realm of the Dreamtime. Lawlor then connects these invisible energetic realms and magnetic fields, because iron is magnetic. Among the Germanic tribes, blood was used during their sacrifices; the Blóts. The blood was considered to have the power of its originator, and, after the butchering, the blood was sprinkled on the walls, on the statues of the gods, and on the participants themselves. This act of sprinkling blood was called blóedsian in Old English, and the terminology was borrowed by the Roman Catholic Church becoming to bless and blessing. The Hittite word for blood, ishar was a cognate to words for "oath '' and "bond '', see Ishara. The Ancient Greeks believed that the blood of the gods, ichor, was a substance that was poisonous to mortals. As a relic of Germanic Law, the cruentation, an ordeal where the corpse of the victim was supposed to start bleeding in the presence of the murderer, was used until the early 17th century. In Genesis 9: 4, God prohibited Noah and his sons from eating blood (see Noahide Law). This command continued to be observed by the Eastern Orthodox. It is also found in the Bible that when the Angel of Death came around to the Hebrew house that the first - born child would not die if the angel saw lamb 's blood wiped across the doorway. At the Council of Jerusalem, the apostles prohibited certain Christians from consuming blood -- this is documented in Acts 15: 20 and 29. 
This chapter specifies a reason (especially in verses 19 -- 21): It was to avoid offending Jews who had become Christians, because the Mosaic Law Code prohibited the practice. Christ 's blood is the means for the atonement of sins. Also, ''... the blood of Jesus Christ his (God) Son cleanseth us from all sin. '' (1 John 1: 7), "... Unto him (God) that loved us, and washed us from our sins in his own blood. '' (Revelation 1: 5), and "And they overcame him (Satan) by the blood of the Lamb (Jesus the Christ), and by the word of their testimony... '' (Revelation 12: 11). Some Christian churches, including Roman Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and the Assyrian Church of the East teach that, when consecrated, the Eucharistic wine actually becomes the blood of Jesus for worshippers to drink. Thus in the consecrated wine, Jesus becomes spiritually and physically present. This teaching is rooted in the Last Supper, as written in the four gospels of the Bible, in which Jesus stated to his disciples that the bread that they ate was his body, and the wine was his blood. "This cup is the new testament in my blood, which is shed for you. '' (Luke 22: 20). Most forms of Protestantism, especially those of a Wesleyan or Presbyterian lineage, teach that the wine is no more than a symbol of the blood of Christ, who is spiritually but not physically present. Lutheran theology teaches that the body and blood is present together "in, with, and under '' the bread and wine of the Eucharistic feast. In Judaism, animal blood may not be consumed even in the smallest quantity (Leviticus 3: 17 and elsewhere); this is reflected in Jewish dietary laws (Kashrut). Blood is purged from meat by rinsing and soaking in water (to loosen clots), salting and then rinsing with water again several times. Eggs must also be checked and any blood spots removed before consumption. Although blood from fish is biblically kosher, it is rabbinically forbidden to consume fish blood to avoid the appearance of breaking the Biblical prohibition. Another ritual involving blood involves the covering of the blood of fowl and game after slaughtering (Leviticus 17: 13); the reason given by the Torah is: "Because the life of the animal is (in) its blood '' (ibid 17: 14). In relation to human beings, Kabbalah expounds on this verse that the animal soul of a person is in the blood, and that physical desires stem from it. Likewise, the mystical reason for salting temple sacrifices and slaughtered meat is to remove the blood of animal - like passions from the person. By removing the animal 's blood, the animal energies and life - force contained in the blood are removed, making the meat fit for human consumption. Consumption of food containing blood is forbidden by Islamic dietary laws. This is derived from the statement in the Qur'an, sura Al - Ma'ida (5: 3): "Forbidden to you (for food) are: dead meat, blood, the flesh of swine, and that on which has been invoked the name of other than Allah. '' Blood is considered unclean, hence there are specific methods to obtain physical and ritual status of cleanliness once bleeding has occurred. Specific rules and prohibitions apply to menstruation, postnatal bleeding and irregular vaginal bleeding. When an animal has been slaughtered, the animal 's neck is cut in a way to ensure that the spine is not severed, hence the brain may send commands to the heart to pump blood to it for oxygen. In this way, blood is removed from the body, and the meat is generally now safe to cook and eat. 
In modern times, blood transfusions are generally not considered against the rules. Based on their interpretation of scriptures such as Acts 15: 28, 29 ("Keep abstaining... from blood. ''), many Jehovah 's Witnesses neither consume blood nor accept transfusions of whole blood or its major components: red blood cells, white blood cells, platelets (thrombocytes), and plasma. Members may personally decide whether they will accept medical procedures that involve their own blood or substances that are further fractionated from the four major components. In Southeast Asian popular culture, it is often said that if a man 's nose produces a small flow of blood, he is experiencing sexual desire. This often appears in Chinese - language and Hong Kong films, as well as in Japanese and Korean culture, parodied in anime, manga, and drama. Characters, mostly males, will often be shown with a nosebleed if they have just seen someone nude or in little clothing, or if they have had an erotic thought or fantasy; this is based on the idea that a male 's blood pressure will spike dramatically when aroused. Vampires are mythical creatures that drink blood directly for sustenance, usually with a preference for human blood. Cultures all over the world have myths of this kind; for example, the ' Nosferatu ' legend, a human who achieves damnation and immortality by drinking the blood of others, originates from Eastern European folklore. Ticks, leeches, female mosquitoes, vampire bats, and an assortment of other natural creatures do consume the blood of other animals, but of these only the bat is popularly associated with vampires. The association is a coincidence: vampire bats are New World creatures, discovered well after the origins of the European myths. Blood residue can help forensic investigators identify weapons, reconstruct a criminal action, and link suspects to the crime. Through bloodstain pattern analysis, forensic information can also be gained from the spatial distribution of bloodstains. Blood residue analysis is also a technique used in archeology. Blood is one of the body fluids that has been used in art. In particular, the performances of Viennese Actionist Hermann Nitsch, Istvan Kantor, Franko B, Lennie Lee, Ron Athey, Yang Zhichao, Lucas Abela and Kira O ' Reilly, along with the photography of Andres Serrano, have incorporated blood as a prominent visual element. Marc Quinn has made sculptures using frozen blood, including a cast of his own head made using his own blood. The term blood is used in genealogical circles to refer to one 's ancestry, origins, and ethnic background, as in the word bloodline. Other terms where blood is used in a family history sense are blue - blood, royal blood, mixed - blood and blood relative.
when do you start to feel the effects of altitude
Altitude sickness - wikipedia Altitude sickness, also known as acute mountain sickness (AMS), is a pathological effect of high altitude on humans, caused by acute exposure to low partial pressure of oxygen at high altitude. Although minor symptoms such as breathlessness may occur at altitudes of 1,500 metres (5,000 ft), AMS commonly occurs above 2,400 metres (8,000 ft). It presents as a collection of nonspecific symptoms, acquired at high altitude or in low air pressure, resembling a case of "flu, carbon monoxide poisoning, or a hangover ''. It is hard to determine who will be affected by altitude sickness, as there are no specific factors that correlate with a susceptibility to it. However, most people can ascend to 2,400 metres (8,000 ft) without difficulty. Acute mountain sickness can progress to high altitude pulmonary edema (HAPE) or high altitude cerebral edema (HACE), both of which are potentially fatal and can only be cured by immediate descent to lower altitude or oxygen administration. Chronic mountain sickness is a different condition that only occurs after long - term exposure to high altitude. People have different susceptibilities to altitude sickness; for some otherwise healthy people, acute altitude sickness can begin to appear at around 2,000 metres (6,600 ft) above sea level, such as at many mountain ski resorts, equivalent to a pressure of 80 kilopascals (0.79 atm). This is the most frequent type of altitude sickness encountered. Symptoms often manifest six to ten hours after ascent and generally subside in one to two days, but they occasionally develop into the more serious conditions. Symptoms include headache, fatigue, stomach illness, dizziness, and sleep disturbance. Exertion aggravates the symptoms. Those individuals with the lowest initial partial pressure of end - tidal pCO2 (the lowest concentration of carbon dioxide at the end of the respiratory cycle, a measure of higher alveolar ventilation) and correspondingly high oxygen saturation levels tend to have a lower incidence of acute mountain sickness than those with high end - tidal pCO2 and low oxygen saturation levels. Headaches are the primary symptom used to diagnose altitude sickness, although a headache is also a symptom of dehydration. A headache occurring at an altitude above 2,400 metres (7,900 ft) -- a pressure of 76 kilopascals (0.75 atm) -- combined with any one or more of the following symptoms, may indicate altitude sickness: Symptoms that may indicate life - threatening altitude sickness include: The most serious symptoms of altitude sickness arise from edema (fluid accumulation in the tissues of the body). At very high altitude, humans can get either high altitude pulmonary edema (HAPE) or high altitude cerebral edema (HACE). The physiological cause of altitude - induced edema is not conclusively established. It is currently believed, however, that HACE is caused by local vasodilation of cerebral blood vessels in response to hypoxia, resulting in greater blood flow and, consequently, greater capillary pressures. On the other hand, HAPE may be due to general vasoconstriction in the pulmonary circulation (normally a response to regional ventilation - perfusion mismatches) which, with constant or increased cardiac output, also leads to increases in capillary pressures. For those suffering HACE, dexamethasone may provide temporary relief from symptoms in order to keep descending under their own power. HAPE can progress rapidly and is often fatal.
Symptoms include fatigue, severe dyspnea at rest, and cough that is initially dry but may progress to produce pink, frothy sputum. Descent to lower altitudes alleviates the symptoms of HAPE. HACE is a life - threatening condition that can lead to coma or death. Symptoms include headache, fatigue, visual impairment, bladder dysfunction, bowel dysfunction, loss of coordination, paralysis on one side of the body, and confusion. Descent to lower altitudes may save those afflicted with HACE. Altitude sickness can first occur at 1,500 metres, with the effects becoming severe at extreme altitudes (greater than 5,500 metres). Only brief trips above 6,000 metres are possible, and supplemental oxygen is needed to avert sickness. As altitude increases, the available amount of oxygen to sustain mental and physical alertness decreases with the overall air pressure, though the relative percentage of oxygen in air, at about 21 %, remains practically unchanged up to 21,000 metres (70,000 ft). The RMS velocities of diatomic nitrogen and oxygen are very similar, and thus no change occurs in the ratio of oxygen to nitrogen until stratospheric heights. Dehydration due to the higher rate of water vapor lost from the lungs at higher altitudes may contribute to the symptoms of altitude sickness. The rate of ascent, altitude attained, amount of physical activity at high altitude, as well as individual susceptibility, are contributing factors to the onset and severity of high - altitude illness. Altitude sickness usually occurs following a rapid ascent and can usually be prevented by ascending slowly. In most of these cases, the symptoms are temporary and usually abate as altitude acclimatization occurs. However, in extreme cases, altitude sickness can be fatal. At high altitude, 1,500 to 3,500 metres (4,900 to 11,500 ft), the onset of physiological effects of diminished inspiratory oxygen pressure (PiO2) includes decreased exercise performance and increased ventilation (lower arterial partial pressure of carbon dioxide, PCO2). While arterial oxygen transport may be only slightly impaired, the arterial oxygen saturation (SaO2) generally stays above 90 %. Altitude sickness is common between 2,400 and 4,000 m because of the large number of people who ascend rapidly to these altitudes. At very high altitude, 3,500 to 5,500 metres (11,500 to 18,000 ft), maximum SaO2 falls below 90 % as the arterial PO2 falls below 60 mm Hg. Extreme hypoxemia may occur during exercise, during sleep, and in the presence of high altitude pulmonary edema or other acute lung conditions. Severe altitude illness occurs most commonly in this range. Above 5,500 metres (18,000 ft), marked hypoxemia, hypocapnia, and alkalosis are characteristic of extreme altitudes. Progressive deterioration of physiologic function eventually outstrips acclimatization. As a result, no permanent human habitation occurs above 6,000 metres (20,000 ft). A period of acclimatization is necessary when ascending to extreme altitude; abrupt ascent without supplemental oxygen for other than brief exposures invites severe altitude sickness. The physiology of altitude sickness centres on the alveolar gas equation: the atmospheric pressure is low, but air still contains 20.9 % oxygen, and water vapour still occupies the same partial pressure; this means that less oxygen pressure is available in the lungs and blood.
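The equation referred to is conventionally written as follows (a standard physiology formula; the worked figures below are typical textbook values supplied for illustration, not taken from the text):

P_A\mathrm{O}_2 = F_i\mathrm{O}_2 \, (P_b - P_{\mathrm{H_2O}}) - \frac{P_a\mathrm{CO}_2}{R}

At sea level this gives 0.209 × (760 − 47) − 40 / 0.8 ≈ 99 mm Hg of alveolar oxygen; at about 5,500 m, where barometric pressure is roughly halved and hyperventilation has lowered arterial CO2 to around 30 mm Hg, it gives 0.209 × (380 − 47) − 30 / 0.8 ≈ 32 mm Hg.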
Comparing these two results shows how sharply the oxygen available to the blood falls at altitude. The hypoxia leads to an increase in minute ventilation (hence both low CO2 and, subsequently, low bicarbonate), and haemoglobin (Hb) increases through haemoconcentration and erythropoiesis. Alkalosis shifts the haemoglobin - oxygen dissociation curve to the left; 2,3 - DPG increases to counter this. Cardiac output increases through an increase in heart rate. The body 's response to high altitude includes the following: Sufferers from high - altitude sickness generally have a reduced hypoxic ventilatory response, impaired gas exchange, fluid retention or increased sympathetic drive. There is thought to be an increase in cerebral venous volume, due to an increase in cerebral blood flow and hypocapnic cerebral vasoconstriction, causing oedema. Ascending slowly is the best way to avoid altitude sickness. Avoiding strenuous activity such as skiing, hiking, etc. in the first 24 hours at high altitude reduces the symptoms of AMS. Alcohol and sleeping pills are respiratory depressants, and thus slow down the acclimatization process and should be avoided. Alcohol also tends to cause dehydration and exacerbates AMS. Thus, avoiding alcohol consumption in the first 24 -- 48 hours at a higher altitude is optimal. Pre-acclimatization is when the body develops tolerance to low oxygen concentrations before ascending to an altitude. It significantly reduces risk because less time has to be spent at altitude to acclimatize in the traditional way. Additionally, because less time has to be spent on the mountain, less food and supplies have to be taken up. Several commercial systems exist that use altitude tents, so called because they mimic altitude by reducing the percentage of oxygen in the air while keeping air pressure constant to the surroundings. Altitude acclimatization is the process of adjusting to decreasing oxygen levels at higher elevations, in order to avoid altitude sickness. Once above approximately 3,000 metres (10,000 ft) -- a pressure of 70 kilopascals (0.69 atm) -- most climbers and high - altitude trekkers take the "climb - high, sleep - low '' approach. For high - altitude climbers, a typical acclimatization regimen might be to stay a few days at a base camp, climb up to a higher camp (slowly), and then return to base camp. A subsequent climb to the higher camp then includes an overnight stay. This process is then repeated a few times, each time extending the time spent at higher altitudes to let the body adjust to the oxygen level there, a process that involves the production of additional red blood cells. Once the climber has acclimatized to a given altitude, the process is repeated with camps placed at progressively higher elevations. The rule of thumb is to ascend no more than 300 m (1,000 ft) per day to sleep. That is, one can climb from 3,000 m (9,800 ft) (70 kPa or 0.69 atm) to 4,500 m (15,000 ft) (58 kPa or 0.57 atm) in one day, but one should then descend back to 3,300 m (10,800 ft) (67.5 kPa or 0.666 atm) to sleep; a small sketch of this schedule follows below. This process cannot safely be rushed, and this is why climbers need to spend days (or even weeks at times) acclimatizing before attempting to climb a high peak. Simulated altitude equipment that produces hypoxic (reduced oxygen) air can be used to acclimate to high altitude, reducing the total time required on the mountain itself.
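The 300 m - per - day rule of thumb above lends itself to a simple schedule calculation. Below is a minimal sketch assuming only the rule as quoted; real itineraries also add rest days and respond to symptoms.

# Sketch of the rule of thumb above: raise the sleeping altitude by no more
# than 300 m per day, regardless of how high the daytime climb goes.
def sleeping_altitudes(start_m, target_m, max_gain_m=300):
    """Yield a conservative sequence of nightly sleeping altitudes."""
    altitude = start_m
    while altitude < target_m:
        altitude = min(altitude + max_gain_m, target_m)
        yield altitude

# From 3,000 m to 4,500 m: 3300, 3600, 3900, 4200, 4500 -- five nights of gain.
print(list(sleeping_altitudes(3000, 4500)))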
Altitude acclimatization is necessary for some people who move rapidly from lower altitudes to intermediate altitudes (e.g., by aircraft and ground transportation over a few hours), such as from sea level to 8,000 feet (2,400 m) as in many Colorado, USA mountain resorts. Stopping at an intermediate altitude overnight (for example, staying overnight when arriving through Denver, at 5,500 feet (1,700 m), when traveling to the aforementioned Colorado resorts) can alleviate or eliminate occurrences of AMS. The drug acetazolamide (trade name Diamox) may help some people making a rapid ascent to sleeping altitude above 2,700 metres (9,000 ft), and it may also be effective if started early in the course of AMS. Acetazolamide can be taken before symptoms appear as a preventive measure, at a dose of 125 mg twice daily. The Everest Base Camp Medical Centre cautions against its routine use as a substitute for a reasonable ascent schedule, except where rapid ascent is forced by flying into high - altitude locations or due to terrain considerations. The Centre suggests a dosage of 125 mg twice daily for prophylaxis, starting from 24 hours before ascending until a few days at the highest altitude or on descending, with 250 mg twice daily recommended for treatment of AMS. The Centers for Disease Control and Prevention (CDC) suggest the same dose for prevention: 125 mg of acetazolamide every 12 hours. Acetazolamide, a mild diuretic, works by stimulating the kidneys to secrete more bicarbonate in the urine, thereby acidifying the blood. This change in pH stimulates the respiratory center to increase the depth and frequency of respiration, thus speeding the natural acclimatization process. An undesirable side effect of acetazolamide is a reduction in aerobic endurance performance. Other minor side effects include a tingling sensation in the hands and feet. Although it is a sulfonamide, acetazolamide is not an antibiotic and has not been shown to cause life - threatening allergic cross-reactivity in those with a self - reported sulfonamide allergy. A dosage of 1000 mg / day will produce a 25 % decrease in performance, on top of the reduction due to high - altitude exposure. The CDC advises that dexamethasone be reserved for treatment of severe AMS and HACE during descents, and notes that nifedipine may prevent HAPE. A single randomized controlled trial found that sumatriptan may help prevent altitude sickness. Despite their popularity, antioxidant treatments have not been found to be effective medications for the prevention of AMS. Interest in phosphodiesterase inhibitors such as sildenafil has been limited by the possibility that these drugs might worsen the headache of mountain sickness. A promising possible preventive for altitude sickness is myo - inositol trispyrophosphate (ITPP), which increases the amount of oxygen released by hemoglobin. Prior to the onset of altitude sickness, ibuprofen is a suggested non-steroidal anti-inflammatory and painkiller that can help alleviate both the headache and nausea associated with AMS. It has not been studied for the prevention of cerebral edema (swelling of the brain) associated with extreme symptoms of AMS. For centuries, indigenous peoples of the Americas, such as the Aymaras of the Altiplano, have chewed coca leaves to try to alleviate the symptoms of mild altitude sickness.
In Chinese and Tibetan traditional medicine, an extract of the root tissue of Radix rhodiola is often taken in order to prevent the same symptoms, though neither of these therapies has been proven effective in clinical study. In high - altitude conditions, oxygen enrichment can counteract the hypoxia related effects of altitude sickness. A small amount of supplemental oxygen reduces the equivalent altitude in climate - controlled rooms. At 3,400 metres (11,200 ft) (67 kPa or 0.66 atm), raising the oxygen concentration level by 5 % via an oxygen concentrator and an existing ventilation system provides an effective altitude of 3,000 m (10,000 ft) (70 kPa or 0.69 atm), which is more tolerable for those unaccustomed to high altitudes. Oxygen from gas bottles or liquid containers can be applied directly via a nasal cannula or mask. Oxygen concentrators based upon pressure swing adsorption (PSA), VSA, or vacuum - pressure swing adsorption (VPSA) can be used to generate the oxygen if electricity is available. Stationary oxygen concentrators typically use PSA technology, which has performance degradations at the lower barometric pressures at high altitudes. One way to compensate for the performance degradation is to utilize a concentrator with more flow capacity. There are also portable oxygen concentrators that can be used on vehicular DC power or on internal batteries, and at least one system commercially available measures and compensates for the altitude effect on its performance up to 4,000 m (13,000 ft). The application of high - purity oxygen from one of these methods increases the partial pressure of oxygen by raising the FiO (fraction of inspired oxygen). Increased water intake may also help in acclimatization to replace the fluids lost through heavier breathing in the thin, dry air found at altitude, although consuming excessive quantities ("over-hydration '') has no benefits and may cause dangerous hyponatremia. The only reliable treatment, and in many cases the only option available, is to descend. Attempts to treat or stabilize the patient in situ (at altitude) are dangerous unless highly controlled and with good medical facilities. However, the following treatments have been used when the patient 's location and circumstances permit:
how many different kinds of plastic are there
Plastic - wikipedia Note 1: The use of this term instead of polymer is a source of confusion and thus is not recommended. Plastic is material consisting of any of a wide range of synthetic or semi-synthetic organic compounds that are malleable and so can be molded into solid objects. Plasticity is the general property of all materials which can deform irreversibly without breaking but, in the class of moldable polymers, this occurs to such a degree that their actual name derives from this specific ability. Plastics are typically organic polymers of high molecular mass and often contain other substances. They are usually synthetic, most commonly derived from petrochemicals, however, an array of variants are made from renewable materials such as polylactic acid from corn or cellulosics from cotton linters. Due to their low cost, ease of manufacture, versatility, and imperviousness to water, plastics are used in a multitude of products of different scale, including paper clips and spacecraft. They have prevailed over traditional materials, such as wood, stone, horn and bone, leather, metal, glass, and ceramic, in some products previously left to natural materials. In developed economies, about a third of plastic is used in packaging and roughly the same in buildings in applications such as piping, plumbing or vinyl siding. Other uses include automobiles (up to 20 % plastic), furniture, and toys. In the developing world, the applications of plastic may differ -- 42 % of India 's consumption is used in packaging. Plastics have many uses in the medical field as well, with the introduction of polymer implants and other medical devices derived at least partially from plastic. The field of plastic surgery is not named for use of plastic materials, but rather the meaning of the word plasticity, with regard to the reshaping of flesh. The world 's first fully synthetic plastic was bakelite, invented in New York in 1907 by Leo Baekeland who coined the term ' plastics '. Many chemists have contributed to the materials science of plastics, including Nobel laureate Hermann Staudinger who has been called "the father of polymer chemistry '' and Herman Mark, known as "the father of polymer physics ''. The success and dominance of plastics starting in the early 20th century led to environmental concerns regarding its slow decomposition rate after being discarded as trash due to its composition of large molecules. Toward the end of the century, one approach to this problem was met with wide efforts toward recycling. The word plastic derives from the Greek πλαστικός (plastikos) meaning "capable of being shaped or molded '' and, in turn, from πλαστός (plastos) meaning "molded ''. The plasticity, or malleability, of the material during manufacture allows it to be cast, pressed, or extruded into a variety of shapes, such as: films, fibers, plates, tubes, bottles, boxes, amongst many others. The common noun plastic should not be confused with the technical adjective plastic. The adjective is applicable to any material which undergoes a plastic deformation, or permanent change of shape, when strained beyond a certain point. For example, aluminum which is stamped or forged exhibits plasticity in this sense, but is not plastic in the common sense. By contrast, some plastics will, in their finished forms, break before deforming and therefore are not plastic in the technical sense. Most plastics contain organic polymers. 
The vast majority of these polymers are formed from chains of carbon atoms, ' pure ' or with the addition of oxygen, nitrogen, or sulfur. The chains comprise many repeat units, formed from monomers. Each polymer chain will have several thousand repeating units. The backbone is the part of the chain that is on the "main path '', linking together a large number of repeat units. To customize the properties of a plastic, different molecular groups "hang '' from this backbone. These pendant units are usually "hung '' on the monomers before the monomers themselves are linked together to form the polymer chain. It is the structure of these side chains that influences the properties of the polymer. The molecular structure of the repeating unit can be fine - tuned to influence specific properties in the polymer. Plastics are usually classified by the chemical structure of the polymer 's backbone and side chains; some important groups in these classifications are the acrylics, polyesters, silicones, polyurethanes, and halogenated plastics. Plastics can also be classified by the chemical process used in their synthesis, such as condensation, polyaddition, and cross-linking. Plastics can further be classified by their various physical properties, such as hardness, density, tensile strength, resistance to heat, and glass transition temperature, and by their chemical properties, such as the organic chemistry of the polymer and its resistance and reaction to various chemical products and processes, such as organic solvents, oxidation, and ionizing radiation. In particular, most plastics will melt upon heating to a few hundred degrees Celsius. Other classifications are based on qualities that are relevant for manufacturing or product design. Examples of such qualities and classes are thermoplastics and thermosets, conductive polymers, biodegradable plastics, engineering plastics, and other plastics with particular structures, such as elastomers. One important classification of plastics is by the permanence or impermanence of their form: whether they are thermoplastics or thermosetting polymers. Thermoplastics are the plastics that, when heated, do not undergo chemical change in their composition and so can be molded again and again. Examples include polyethylene (PE), polypropylene (PP), polystyrene (PS) and polyvinyl chloride (PVC). Common thermoplastics range from 20,000 to 500,000 amu, while thermosets are assumed to have infinite molecular weight (a small sketch relating chain mass to repeat - unit count follows below). Thermosets, or thermosetting polymers, can melt and take shape only once: after they have solidified, they stay solid. In the thermosetting process, a chemical reaction occurs that is irreversible. The vulcanization of rubber is an example of a thermosetting process: before heating with sulfur, the polyisoprene is a tacky, slightly runny material; after vulcanization, the product is rigid and non-tacky. Many plastics are completely amorphous, such as all thermosets, polystyrene and its copolymers, and poly (methyl methacrylate). However, some plastics are partially crystalline and partially amorphous in molecular structure, giving them both a melting point, the temperature at which the attractive intermolecular forces are overcome, and one or more glass transitions, the temperatures above which the extent of localized molecular flexibility is substantially increased. These so - called semi-crystalline plastics include polyethylene, polypropylene, poly (vinyl chloride), polyamides (nylons), polyesters and some polyurethanes.
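The molecular weights quoted above translate directly into the "several thousand repeating units '' mentioned earlier. A minimal sketch: the repeat - unit masses are standard chemistry values, and the 280,000 amu example chain weight is an illustrative mid - range figure chosen here, not a number from the text.

# Sketch: estimating the degree of polymerization (number of repeat units)
# from a chain mass, using standard repeat-unit masses in amu.
REPEAT_UNIT_AMU = {
    "polyethylene (PE)": 28.05,   # -CH2-CH2-
    "polypropylene (PP)": 42.08,  # -CH2-CH(CH3)-
    "polystyrene (PS)": 104.15,   # -CH2-CH(C6H5)-
    "PVC": 62.50,                 # -CH2-CHCl-
}

def degree_of_polymerization(chain_mass_amu, polymer):
    """Return the approximate number of repeat units in one chain."""
    return chain_mass_amu / REPEAT_UNIT_AMU[polymer]

# A mid-range thermoplastic chain of 280,000 amu:
print(round(degree_of_polymerization(280_000, "polyethylene (PE)")))  # ~9982 repeat units

A polyethylene chain of that mass thus carries on the order of ten thousand repeat units, consistent with the "several thousand '' figure in the text.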
Intrinsically conducting polymers (ICPs) are organic polymers that conduct electricity. While plastics can be made electrically conductive, with a conductivity of up to 80 kS / cm in stretch - oriented polyacetylene, they are still no match for most metals like copper, which have conductivities of several hundred kS / cm. Nevertheless, this is a developing field. Biodegradable plastics are plastics that degrade, or break down, upon exposure to sunlight or ultraviolet radiation, water or dampness, bacteria, enzymes, or wind abrasion. In some instances, rodent, pest, or insect attack can also be considered a form of biodegradation or environmental degradation. Some modes of degradation require that the plastic be exposed at the surface (aerobic), whereas other modes will only be effective if certain conditions exist in landfill or composting systems (anaerobic). Some companies produce biodegradable additives to enhance biodegradation. Plastic can have starch powder added as a filler to allow it to degrade more easily, but this still does not lead to the complete breaking down of the plastic. Some researchers have genetically engineered bacteria to synthesize completely biodegradable plastics, such as Biopol; however, these are expensive at present. While most plastics are produced from petrochemicals, bioplastics are made substantially from renewable plant materials such as cellulose and starch. Due both to the finite limits of the petrochemical reserves and to the threat of global warming, the development of bioplastics is a growing field. However, bioplastic development begins from a very low base and, as yet, does not compare significantly with petrochemical production. Estimates put the global production capacity for bio-derived materials at 327,000 tonnes / year. In contrast, global production of polyethylene (PE) and polypropylene (PP), the world 's leading petrochemical - derived polyolefins, was estimated at over 150 million tonnes in 2015. This category includes both commodity plastics, or standard plastics, and engineering plastics. The development of plastics has evolved from the use of natural plastic materials (e.g., chewing gum, shellac) to the use of chemically modified, natural materials (e.g., natural rubber, nitrocellulose, collagen, galalite) and finally to completely synthetic molecules (e.g., bakelite, epoxy, polyvinyl chloride). Early plastics were bio-derived materials such as egg and blood proteins, which are organic polymers. In around 1600 BC, Mesoamericans used natural rubber for balls, bands, and figurines. Treated cattle horns were used as windows for lanterns in the Middle Ages. Materials that mimicked the properties of horns were developed by treating milk proteins (casein) with lye. In the nineteenth century, as industrial chemistry developed during the Industrial Revolution, many materials were reported. The development of plastics also accelerated with Charles Goodyear 's discovery of vulcanization to thermoset materials derived from natural rubber. Parkesine (nitrocellulose) is considered the first man - made plastic. The plastic material was patented by Alexander Parkes in Birmingham, England in 1856. It was unveiled at the 1862 Great International Exhibition in London, where Parkesine won a bronze medal. Parkesine was made from cellulose (the major component of plant cell walls) treated with nitric acid as a solvent.
The output of the process (commonly known as cellulose nitrate or pyroxylin) could be dissolved in alcohol and hardened into a transparent and elastic material that could be molded when heated. By incorporating pigments into the product, it could be made to resemble ivory. In 1897, the Hanover, Germany mass printing press owner Wilhelm Krische was commissioned to develop an alternative to blackboards. The resultant horn - like plastic made from the milk protein casein was developed in cooperation with the Austrian chemist (Friedrich) Adolph Spitteler (1846 -- 1940). The final result was unsuitable for the original purpose. In 1893, French chemist Auguste Trillat discovered the means to insolubilize casein by immersion in formaldehyde, producing material marketed as galalith. In the early 1900s, Bakelite, the first fully synthetic thermoset, was reported by Belgian chemist Leo Baekeland, using phenol and formaldehyde. After World War I, improvements in chemical technology led to an explosion in new forms of plastics, with mass production beginning in the 1940s and 1950s (around World War II). Among the earliest examples in the wave of new polymers were polystyrene (PS), first produced by BASF in the 1930s, and polyvinyl chloride (PVC), first created in 1872 but commercially produced in the late 1920s. In 1923, Durite Plastics Inc. was the first manufacturer of phenol - furfural resins. In 1933, polyethylene was discovered by Imperial Chemical Industries (ICI) researchers Reginald Gibson and Eric Fawcett. In 1954, polypropylene was discovered by Giulio Natta and began to be manufactured in 1957. In 1954, expanded polystyrene (used for building insulation, packaging, and cups) was invented by Dow Chemical. The discovery of polyethylene terephthalate (PET) is credited to employees of the Calico Printers ' Association in the UK in 1941; it was licensed to DuPont for the USA and to ICI elsewhere, and, as one of the few plastics appropriate as a replacement for glass in many circumstances, it came into widespread use for bottles in Europe. Plastics manufacturing is a major part of the chemical industry, and some of the world 's largest chemical companies have been involved since the earliest days, such as the industry leaders BASF and Dow Chemical. In 2014, sales of the top fifty companies amounted to US $961,300,000,000. The firms came from some eighteen countries in total, with more than half of the companies on the list being headquartered in the US. Many of the top fifty plastics companies were concentrated in just three countries: BASF was the world 's largest chemical producer for the ninth year in a row. Trade associations which represent the industry in the US include the American Chemistry Council. Many of the properties of plastics are determined by standards specified by ISO, such as: Many of the properties of plastics are determined by the UL Standards, tests specified by Underwriters Laboratories (UL), such as: Blended into most plastics are additional organic or inorganic compounds. The average content of additives is a few percent. Many of the controversies associated with plastics actually relate to the additives; organotin compounds are particularly toxic. Typical additives include: Polymer stabilizers prolong the lifetime of the polymer by suppressing degradation that results from UV light, oxidation, and other phenomena. Typical stabilizers thus absorb UV light or function as antioxidants. Many plastics contain fillers, to improve performance or reduce production costs.
Typically, fillers are mineral in origin, e.g., chalk. Other fillers include starch, cellulose, wood flour, ivory dust and zinc oxide. Plasticizers are, by mass, often the most abundant additives. These oily but nonvolatile compounds are blended into plastics to improve rheology, as many organic polymers are otherwise too rigid for particular applications. Colorants are another common additive, though their weight contribution is small. Pure plastics have low toxicity due to their insolubility in water and because they are biochemically inert, due to a large molecular weight. Plastic products contain a variety of additives, some of which can be toxic. For example, plasticizers like adipates and phthalates are often added to brittle plastics like polyvinyl chloride to make them pliable enough for use in food packaging, toys, and many other items. Traces of these compounds can leach out of the product. Owing to concerns over the effects of such leachates, the European Union has restricted the use of DEHP (di - 2 - ethylhexyl phthalate) and other phthalates in some applications, and the United States has limited the use of DEHP, DBP, BBP, DINP, DIDP, and DnOP in children 's toys and child care articles through the Consumer Product Safety Improvement Act. Some compounds leaching from polystyrene food containers have been proposed to interfere with hormone functions and are suspected human carcinogens. Other chemicals of potential concern include alkylphenols. Whereas the finished plastic may be non-toxic, the monomers used in the manufacture of the parent polymers may be toxic. In some cases, small amounts of those chemicals can remain trapped in the product unless suitable processing is employed. For example, the World Health Organization 's International Agency for Research on Cancer (IARC) has recognized vinyl chloride, the precursor to PVC, as a human carcinogen. Some polymers may also decompose into the monomers or other toxic substances when heated. In 2011, it was reported that "almost all plastic products '' sampled released chemicals with estrogenic activity, although the researchers identified plastics which did not leach chemicals with estrogenic activity. The primary building block of polycarbonates, bisphenol A (BPA), is an estrogen - like endocrine disruptor that may leach into food. Research in Environmental Health Perspectives finds that BPA leached from the lining of tin cans, dental sealants and polycarbonate bottles can increase the body weight of lab animals ' offspring. A more recent animal study suggests that even low - level exposure to BPA results in insulin resistance, which can lead to inflammation and heart disease. In January 2010, the LA Times newspaper reported that the United States FDA was spending $30 million to investigate indications of BPA being linked to cancer. Bis (2 - ethylhexyl) adipate, present in plastic wrap based on PVC, is also of concern, as are the volatile organic compounds present in new car smell. The European Union has a permanent ban on the use of phthalates in toys. In 2009, the United States government banned certain types of phthalates commonly used in plastic. Most plastics are durable and degrade very slowly, as their chemical structure renders them resistant to many natural processes of degradation. However, microbial species capable of degrading plastics are known to science, and some are potentially useful for the disposal of certain classes of plastic waste.
There are differing estimates of how much plastic waste has been produced in the last century. By one estimate, one billion tons of plastic waste has been discarded since the 1950s. Others estimate a cumulative human production of 8.3 billion tons of plastic of which 6.3 billion tons is waste, with a recycling rate of only 9 %. Much of this material may persist for centuries or longer, given the demonstrated persistence of structurally similar natural materials such as amber. The presence of plastics, particularly microplastics, within the food chain is increasing. In the 1960s microplastics were observed in the guts of seabirds, and since then have been found in increasing concentrations. The long - term effects of plastic in the food chain are poorly understood. In 2009, it was estimated that 10 % of modern waste was plastic, although estimates vary according to region. Meanwhile, 50 - 80 % of debris in marine areas is plastic. Prior to the Montreal Protocol, CFCs were commonly used in the manufacture of polystyrene, and as such the production of polystyrene contributed to the depletion of the ozone layer. The effect of plastics on global warming is mixed. Plastics are generally made from petroleum. If the plastic is incinerated, it increases carbon emissions; if it is placed in a landfill, it becomes a carbon sink although biodegradable plastics have caused methane emissions. Due to the lightness of plastic versus glass or metal, plastic may reduce energy consumption. For example, packaging beverages in PET plastic rather than glass or metal is estimated to save 52 % in transportation energy. Production of plastics from crude oil requires 62 to 108 MJ / Kg (taking into account the average efficiency of US utility stations of 35 %). Producing silicon and semiconductors for modern electronic equipment is even more energy consuming: 230 to 235 MJ / Kg of silicon, and about 3,000 MJ / Kg of semiconductors. This is much higher than the energy needed to produce many other materials, e.g. iron (from iron ore) requires 20 - 25 MJ / Kg of energy, glass (from sand, etc.) 18 - 35 MJ / Kg, steel (from iron) 20 - 50 MJ / Kg, paper (from timber) 25 - 50 MJ / Kg. Controlled high - temperature incineration, above 850 ° C for two seconds, performed with selective additional heating, breaks down toxic dioxins and furans from burning plastic, and is widely used in municipal solid waste incineration. Municipal solid waste incinerators also normally include flue gas treatments to reduce pollutants further. This is needed because uncontrolled incineration of plastic produces polychlorinated dibenzo - p - dioxins, a carcinogen (cancer causing chemical). The problem occurs because the heat content of the waste stream varies. Open - air burning of plastic occurs at lower temperatures, and normally releases such toxic fumes. Plastics can be pyrolyzed into hydrocarbon fuels, since plastics include hydrogen and carbon. One kilogram of waste plastic produces roughly a liter of hydrocarbon. Thermoplastics can be remelted and reused, and thermoset plastics can be ground up and used as filler, although the purity of the material tends to degrade with each reuse cycle. There are methods by which plastics can be broken down to a feedstock state. The greatest challenge to the recycling of plastics is the difficulty of automating the sorting of plastic wastes, making it labor - intensive. 
Typically, workers sort the plastic by looking at the resin identification code, although common containers like soda bottles can be sorted from memory. Typically, the caps for PETE bottles are made from a different kind of plastic which is not recyclable, which presents additional problems for the sorting process. Other recyclable materials, such as metals, are easier to process mechanically. However, new processes of mechanical sorting are being developed to increase the capacity and efficiency of plastic recycling. While containers are usually made from a single type and color of plastic, making them relatively easy to sort, a consumer product like a cellular phone may have many small parts consisting of over a dozen different types and colors of plastics. In such cases, the resources it would take to separate the plastics far exceed their value, and the item is discarded. However, developments are taking place in the field of active disassembly, which may result in more product components being reused or recycled. Recycling certain types of plastics can be unprofitable as well. For example, polystyrene is rarely recycled because the process is usually not cost - effective. These unrecycled wastes are typically disposed of in landfills, incinerated or used to produce electricity at waste - to - energy plants. An early success in the recycling of plastics is Vinyloop, an industrial process to separate PVC from other materials through dissolution, filtration and separation of contaminants. A solvent is used in a closed loop to elute PVC from the waste. This makes it possible to recycle composite PVC waste, which is normally incinerated or put in a landfill. Vinyloop - based recycled PVC 's primary energy demand is 46 percent lower than that of conventionally produced PVC, and its global warming potential is 39 percent lower. This is why the use of recycled material leads to a significantly better ecological outcome. This process was used after the Olympic Games in London 2012: parts of temporary buildings like the Water Polo Arena and the Royal Artillery Barracks were recycled. In this way, the PVC Policy could be fulfilled, which says that no PVC waste should be left after the games have ended. In 1988, to assist recycling of disposable items, the Plastic Bottle Institute of the U.S. Society of the Plastics Industry devised a now - familiar scheme to mark plastic bottles by plastic type. Under this scheme, a plastic container is marked with a triangle of three "chasing arrows '', which encloses a number denoting the plastic type (a small lookup - table sketch of the standard codes appears at the end of this article): The first plastic based on a synthetic polymer was made from phenol and formaldehyde, with the first viable and cheap synthesis methods invented in 1907 by Leo Hendrik Baekeland, a Belgian - born American living in New York state. Baekeland was looking for an insulating shellac to coat wires in electric motors and generators. He found that combining phenol (C6H5OH) and formaldehyde (HCHO) formed a sticky mass, and later found that the material could be mixed with wood flour, asbestos, or slate dust to create strong and fire - resistant "composite '' materials. The new material tended to foam during synthesis, requiring that Baekeland build pressure vessels to force out the bubbles and provide a smooth, uniform product, as he announced in 1909, in a meeting of the American Chemical Society. Bakelite was originally used for electrical and mechanical parts, coming into widespread use in consumer goods and jewelry in the 1920s.
Bakelite was a purely synthetic material, not derived from living matter. It was also an early thermosetting plastic. Unplasticised polystyrene is a rigid, brittle, inexpensive plastic that has been used to make plastic model kits and similar knick-knacks. It is also the basis for some of the most popular "foamed" plastics, sold under names such as styrene foam or Styrofoam. Like most other foam plastics, foamed polystyrene can be manufactured in an "open cell" form, in which the foam bubbles are interconnected, as in an absorbent sponge, and a "closed cell" form, in which all the bubbles are distinct, like tiny balloons, as in gas-filled foam insulation and flotation devices. In the late 1950s, high-impact polystyrene, which is not brittle, was introduced. It finds much current use as the substance of toy figurines and novelties. Polyvinyl chloride (PVC, commonly called "vinyl") incorporates chlorine atoms. The C-Cl bonds in the backbone are hydrophobic and resist oxidation (and burning). PVC is stiff, strong, and heat- and weather-resistant, properties that recommend its use in devices for plumbing, gutters, house siding, and enclosures for computers and other electronic gear. PVC can also be softened with chemical processing, and in this form it is now used for shrink-wrap, food packaging, and rain gear. All PVC polymers are degraded by heat and light. When this happens, hydrogen chloride is released into the atmosphere and oxidation of the compound occurs. Because hydrogen chloride readily combines with water vapor in the air to form hydrochloric acid, polyvinyl chloride is not recommended for long-term archival storage of silver, photographic film, or paper (Mylar is preferable). The plastics industry was revolutionized in the 1930s with the announcement of polyamide (PA), far better known by its trade name, nylon. Nylon was the first purely synthetic fiber, introduced by the DuPont Corporation at the 1939 World's Fair in New York City. In 1927, DuPont had begun a secret development project designated Fiber66, under the direction of Harvard chemist Wallace Carothers and chemistry department director Elmer Keiser Bolton. Carothers had been hired to perform pure research, and he worked to understand the new materials' molecular structure and physical properties. He took some of the first steps in the molecular design of the materials. His work led to the discovery of synthetic nylon fiber, which was very strong but also very flexible. The first application was bristles for toothbrushes. However, DuPont's real target was silk, particularly silk stockings. Carothers and his team synthesized a number of different polyamides, including polyamide 6.6 and 4.6, as well as polyesters. It took DuPont twelve years and US$27 million to refine nylon and to synthesize and develop the industrial processes for bulk manufacture. With such a major investment, it was no surprise that DuPont spared little expense to promote nylon after its introduction, creating a public sensation, or "nylon mania". Nylon mania came to an abrupt stop at the end of 1941, when the United States entered World War II. The production capacity that had been built up to produce nylon stockings, or just "nylons", for American women was taken over to manufacture vast numbers of parachutes for fliers and paratroopers. After the war ended, DuPont went back to selling nylon to the public, engaging in another promotional campaign in 1946 that resulted in an even bigger craze, triggering the so-called nylon riots.
Subsequently, polyamides 6, 10, 11, and 12 were developed, based on monomers that are ring compounds, e.g. caprolactam. Nylon 66 is manufactured by condensation polymerization. Nylons remain important plastics, and not just for use in fabrics: in bulk form, nylon is very wear-resistant, particularly if oil-impregnated, and so is used to build gears, plain bearings, valve seats, and seals, and, because of its good heat resistance, increasingly for under-the-hood applications in cars and other mechanical parts. Poly(methyl methacrylate) (PMMA), also known as acrylic or acrylic glass, as well as by the trade names Plexiglas, Acrylite, Lucite, and Perspex, among several others, is a transparent thermoplastic often used in sheet form as a lightweight or shatter-resistant alternative to glass. The same material can be utilised as a casting resin and in inks and coatings, and has many other uses. Natural rubber is an elastomer (an elastic hydrocarbon polymer) that originally was derived from latex, a milky colloidal suspension found in specialised vessels in some plants. It is useful directly in this form (indeed, the first appearance of rubber in Europe was cloth waterproofed with unvulcanized latex from Brazil). However, in 1839 Charles Goodyear invented vulcanized rubber, a form of natural rubber heated with sulfur (and a few other chemicals) to form cross-links between polymer chains (vulcanization), improving elasticity and durability. In 1851, Nelson Goodyear added fillers to natural rubber materials to form ebonite. The first fully synthetic rubber was synthesized by Sergei Lebedev in 1910. During World War II, supply blockades of natural rubber from South East Asia caused a boom in the development of synthetic rubber, notably styrene-butadiene rubber. In 1941, annual production of synthetic rubber in the U.S. was only 231 tonnes; this increased to 840,000 tonnes in 1945. In the space race and nuclear arms race, Caltech researchers experimented with using synthetic rubbers as solid fuel for rockets. Ultimately, all large military rockets and missiles would use synthetic-rubber-based solid fuels, and synthetic rubbers would also play a significant part in the civilian space effort.
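For reference, the condensation polymerization by which nylon 6,6 is made (mentioned above) combines hexamethylenediamine with adipic acid, splitting out one molecule of water for each amide bond formed. A simplified textbook scheme, written in plain notation:

    n H2N(CH2)6NH2 + n HOOC(CH2)4COOH -> [-NH(CH2)6NH-CO(CH2)4CO-]n + 2n H2O

The "66" in the name reflects the six carbon atoms contributed by each of the two monomers.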
what happened in tennessee during the civil war
Tennessee in the American Civil War - Wikipedia To a large extent, the American Civil War was fought in the cities and farms of Tennessee, as only Virginia saw more battles. Moreover, Tennessee is the only state to have had major battles or skirmishes fought in every single county. Tennessee was the last of the Southern states to declare secession from the Union, as a substantial portion of the population was against secession, but it saw more than its share of the devastation resulting from years of warring armies criss-crossing the state. Its rivers were key arteries to the Deep South, and, from the early days of the war, Union efforts focused on securing control of those transportation routes, as well as major roads and mountain passes such as the Cumberland Gap. Tennessee was also considered "the Bread Basket" of the Confederacy for its rich farmland, which fed both armies during the war. A large number of important battles occurred in Tennessee, including the vicious fighting at the Battle of Shiloh, which at the time was the deadliest battle in American history (it was later surpassed by a number of other engagements). Other large battles in Tennessee included Stones River, Chattanooga, Nashville, and Franklin. Tennessee was one of the most divided states in the country at the outset of the war. Before the bombardment of Fort Sumter, Tennessee was staunchly pro-Union, though there were still a few secessionist hotbeds in the western portion of the state. The situation changed when Fort Sumter was bombarded and Lincoln made the call for 75,000 volunteers to suppress the rebellion. Tennesseans saw this as a threat to their "southern brethren," and the only real pockets of pro-Unionism came from the eastern portion of the state. In fact, Tennessee would furnish more troops for the Union than every other Confederate state combined; however, over three times that number volunteered for the Confederacy. Notably, general Nathan Bedford Forrest voted against secession but later fought for his state when it seceded. Initially, most Tennesseans showed little enthusiasm for breaking away from a nation whose struggles they had shared for so long. In 1860, they had voted by a slim margin for the Constitutional Unionist John Bell, a native son and moderate who continued to search for a way out of the crisis. A vocal minority of Tennesseans spoke critically of the Northern states and the Lincoln presidency. "The people of the South are preparing for their next highest duty – resistance to coercion or invasion," wrote the Nashville Daily Gazette on January 5, 1861. The newspaper expressed the view that Florida, Georgia, and Alabama were exercising the highest right of all by taking control of all forts and other military establishments within the area – the right to self-defense. A pro-secessionist proposal was made in the Memphis Appeal to build a fort at Randolph, Tennessee, on the Mississippi River. Governor Isham G. Harris convened an emergency session of the Tennessee General Assembly in January 1861. During his speech before the legislative body on January 7, he described the secession of the Southern states as a crisis caused by "long continued agitation of the slavery question" and "actual and threatened aggressions of the Northern States... upon the well-defined constitutional rights of the Southern citizen."
He also expressed alarm at the growth of the "purely sectional" Republican Party, which he stated was bound together by "uncompromising hostility to the rights and institutions of the fifteen Southern states." He identified numerous grievances with the Republican Party, blaming it for inducing slaves to run off by means of the Underground Railroad, for John Brown's raids, and for high taxes on slave labor. Harris agreed with the idea of popular sovereignty, that only the people within a state can determine whether or not slavery can exist within the boundaries of that state. Furthermore, he regarded laws passed by Congress that made U.S. territories non-slave states as taking territories away from the American people and making them solely for the North, territories from which "Southern men unable to live under a government which may by law recognize the free negro as his equal" were excluded. Governor Harris proposed holding a State Convention. A series of resolutions were presented in the Tennessee House of Representatives by William H. Wisener against the proposal; he declared passing any law reorganizing and arming the state militia to be inexpedient. The centrality of the question of slavery to the secession movement was not doubted by people at the time of the Civil War, nor was it ignored by the contemporary press. Especially in the case of the pro-slavery papers, the question of the possible eventual granting of equal rights to people of color was not couched in diplomatic phraseology: "The election, be it remembered, takes place on the 9th, and the Delegates meet in Convention on the 25th instant. If you desire to wait until you are tied hand and foot, then vote for the men who advocate the 'watch and wait' policy. If you think you have rights and are the superiors of the black man, then vote for the men who will not sell you out, body and soul, to the Yankee Republicans – for men who would rather see Tennessee independent out of the Union than in the Union subjugated." (emphasis in original) On February 3, 1861, the pro-Union Knoxville Whig published a "Secret Circular" that had mistakenly been sent by its authors to a pro-Union Tennessee U.S. postmaster. In it was revealed a comprehensive plan by pro-slavery Tennesseans and others to launch a propaganda campaign to convince Tennesseans that the strength of the pro-secessionist movement was overwhelming: Dear Sir – Our earnest solicitude for the success of the Great Southern Rights movement to secure an immediate release from the overwhelming dangers that imperil our political and social safety will, we trust, be a sufficient apology for the requests which we beg to impose on you. The sentiment of the Southern heart is overwhelming in favor of the movement. Light only is wanted that men may see their way clearly, and the prayer of every true patriot will eventually be realized. Tennessee will be a unit. Although the time be so very short, this object may yet be accomplished, if a few men only (the more the better, however) in each county will devote their entire energies to it during the canvass for Delegates. We earnestly beg your attention, therefore, to the following suggestions: (...) We can, we must, carry our State.
Our hearts would sink within us at the bare thought of the degradation and infamy of abandoning our more Southern brethren, united to us by all the ties of sympathy and interest, and of being chained to the car of Black Republican States, who would themselves despise us for our submission; and worse than all, by moral influences alone, if not by force of legal enactment, destroy our entire social fabric and all real independence of thought and action. Your own good judgment will suggest many things we can not now allude to. (emphasis added) Very Respectfully, (Signed) Wm. Williams, Chm'n. / S.C. Goethall (?), Sec'y / Andrew Cheatham / J.R. Baus (?) / R.H. Williamson / G.W. Cunningham / H.M. Cheatham, W.S. Peppin – States Central Southern Rights Anti-Coercion Committee. In Memphis, Unionists held two torchlight processions to honor their cause. The secessionists replied with their own demonstrations and a celebratory ball. That week, on February 9, the state of Tennessee was to vote on whether or not to send delegates to a State Convention that would decide on secession. The General Assembly convened by Governor Isham Harris did not believe it had the authority to call a State Convention without a vote of the people. In February 1861, 54 percent of the state's voters voted against sending delegates to a secession convention, defeating the proposal for a State Convention by a vote of 69,675 to 57,798. If a State Convention had been held, it would have been very heavily pro-Union: 88,803 votes were cast for Unionist candidates and 22,749 votes were cast for Secession candidates. That day the American flag was displayed in "every section of the city," with zeal equal to that which existed during the late 1860 presidential campaign, wrote the Nashville Daily Gazette. The proponents of the slavocracy were embarrassed, demoralized, and politically disoriented, but not willing to admit defeat: "Whatever may be the result of the difficulties which at present agitate our country – whether we are to be united in our common destiny or whether two Republics shall take the place of that which has stood for nearly a century, the admired of all nations – we will still bow with reverence to the sight of the stars and stripes, and recognize it as the standard around which the sons of liberty can rally (...). And if the remonstrances of the people of the South – pleading and begging for redress for years – does not in this critical moment arouse her brethren of the North to a sense of justice and right, and honor demands a separation, we would still have the same claims upon the 'colors of Washington, great son of the South, and of Virginia, mother of the States.' Let us not abandon the stars and stripes under which Southern men have so often been led to victory." On the corner across from the newspaper office, a crowd had gathered around a bagpipe player playing Yankee Doodle, after which ex-mayor John Hugh Smith gave a speech that was received with loud cheers. In a letter to Democratic senator Andrew Johnson, the publisher of the Clarksville (TN) Jeffersonian, C.O. Faxon, surmised that the margin by which the "No Convention" vote won would have been even greater had Union men not been afraid that, if a State Convention were not called then, Isham Harris would have again called for a State Convention when more state legislators were "infected with the secession epidemic" (...): "Gov Harris is Check mated (sic). The Union maj(ority) in the State will almost defy computation(.)
So far as heard from, the disunionists have carried but a single precinct. The Union and American (a pro-secessionist Nashville paper) stands rebuked and damned before the people of the State." On March 7, the Memphis Daily Appeal wrote that the abolitionists were attempting to deprive the South of territories won during the U.S.-Mexican War. It pointed out that the slave states had furnished twice as many volunteers as the free states and territories, though it did not note that the slave states were the ones who had most supported the war. On March 19, the editors of the Clarksville Chronicle endorsed a pro-Union candidate for state senator in Robertson, Montgomery, and Stewart counties. On April 2, the Memphis Daily Appeal ran a satirical obituary for Uncle Sam, proclaiming him to have died of "irrepressible conflict disease" after having met Abraham Lincoln. One Robertson County slave owner complained that she could not rent her slaves out for "half (of what) they were worth" because "the negros think when Lincoln takes his seat, they will all be free." With the attack on Fort Sumter on April 12, 1861, followed by President Abraham Lincoln's April 15 call for 75,000 volunteers to put the seceded states back into line, public sentiment turned dramatically against the Union. Historian Daniel Crofts thus reports: Governor Isham Harris began military mobilization, submitted an ordinance of secession to the General Assembly, and made direct overtures to the Confederate government. In the June 8, 1861 referendum, East Tennessee held firm against separation, while West Tennessee returned an equally heavy majority in favor. The deciding vote came in Middle Tennessee, which went from 51 percent against secession in February to 88 percent in favor in June. Having ratified by popular vote its connection with the fledgling Confederacy, Tennessee became the last state to declare formally its withdrawal from the Union. Control of the Cumberland and Tennessee Rivers was important in gaining control of Tennessee during the age of steamboats. Tennessee relied on northbound riverboats to receive staple commodities from the Cumberland and Tennessee valleys. The idea of using the rivers to breach the Confederate defense line in the West was well known by the end of 1861; Union gunboats had been scanning Confederate fort-building on the twin rivers for months before the campaign. Ulysses S. Grant and the United States Navy captured control of the Cumberland and Tennessee Rivers in February 1862 and held off the Confederate counterattack at Shiloh in April of the same year. The capture of Memphis and Nashville gave the Union control of the Western and Middle sections; this control was confirmed at the Battle of Murfreesboro in early January 1863. After Nashville was captured (the first Confederate state capital to fall), Andrew Johnson, an East Tennessean from Greeneville, was appointed military governor of the state by Lincoln. During this time, the military government abolished slavery (though with questionable legality). The Confederates continued to hold East Tennessee despite the strength of Unionist sentiment there, with the exception of strongly pro-Confederate Sullivan and Rhea counties. After winning a victory at Chickamauga in September 1863, the Confederates besieged Chattanooga but were finally driven off by Grant in November.
Many of the Confederate defeats can be attributed to the poor leadership of General Braxton Bragg, who led the Army of Tennessee from Shiloh to the Confederate defeat at Chattanooga. Historian Thomas Connelly concludes that although Bragg was an able planner and a skillful organizer, he failed repeatedly in operations, in part because he was unable to collaborate effectively with his subordinates. The last major battles came when General John Bell Hood led the Confederates north in November 1864. He was checked at Franklin, and his army was virtually destroyed by George Thomas's greatly superior forces at Nashville in December. Fear of subversion was widespread throughout the state. In West and Middle Tennessee it was fear of pro-Union activism, which was countered proactively by numerous local Committees of Safety and Vigilance from 1860 to 1862. They emerged as early as the 1860 presidential election, and when the war began, activists developed an aggressive program to detect and suppress Unionists. The committees set up a spy system, intercepted mail, inspected luggage, forced the enlistment of men into the Confederate Army, confiscated private property, and, whenever it seemed necessary, lynched enemies of the Confederacy. The committees were disbanded by the Union Army when it took control in 1862. East Tennessee was a stronghold of Unionism; most slaves there were house servants – luxuries – rather than the base of plantation operations. The dominant mood strongly opposed secession. Tennesseans representing twenty-six East Tennessee counties met twice in Greeneville and Knoxville and agreed to secede from Tennessee (see East Tennessee Convention of 1861). They petitioned the state legislature in Nashville, which denied their request to secede and sent Confederate troops under Felix Zollicoffer to occupy East Tennessee and prevent secession. The region thus came under Confederate control from 1861 to 1863. Nevertheless, East Tennessee supplied significant numbers of troops to the Federal army (see also Nickajack). Many East Tennesseans engaged in guerrilla warfare against state authorities by burning bridges, cutting telegraph wires, and spying for the North. East Tennessee became an early base for the Republican Party in the South. Strong support for the Union challenged the Confederate commanders who controlled East Tennessee for most of the war. Generals Felix K. Zollicoffer, Edmund Kirby Smith, and Sam Jones oscillated between harsh measures and conciliatory gestures to gain support, but had little success whether they arrested hundreds of Unionist leaders or allowed men to escape the Confederate draft. Union forces finally captured the region in 1863. General William Sherman's famous March to the Sea saw him personally escorted by the 1st Alabama Cavalry Regiment, which consisted entirely of Unionist Southerners; despite its name, the regiment consisted largely of men from Tennessee. Refugees poured into Nashville during the war because jobs were plentiful in the depots, warehouses, and hospitals serving the war effort, and furthermore the city was a much safer place than the countryside. Unionists and Confederate sympathizers both flooded in, as did free blacks, escaped slaves, and businessmen from the North. There was little heavy industry in the South, but the Western Iron District in Middle Tennessee was the largest iron producer in the Confederacy in 1861.
One of the largest operations was the Cumberland Iron Works, which the Confederate War Department tried and failed to protect. Memphis and Nashville, with very large transient populations, had flourishing red-light districts. Union wartime regulations forced prostitutes to purchase licenses and pass medical exams, primarily to protect soldiers from venereal disease; their trade was deregulated once military control ended. After the war, Tennessee adopted a constitutional amendment forbidding human property on February 22, 1865; ratified the Fourteenth Amendment to the United States Constitution on July 18, 1866; and was the first state readmitted to the Union, on July 24, 1866. Because it ratified the Fourteenth Amendment, Tennessee was the only state that had seceded from the Union that did not have a military governor during Reconstruction. This did not placate those unhappy with the Confederate defeat. Many white Tennesseans resisted efforts to expand suffrage and other civil rights to the freedmen. For generations, white Tennesseans had been raised to believe that slavery was justified, and some could not accept that their former slaves were now equal under the law. When the state Supreme Court upheld the constitutionality of African American suffrage in 1867, the reaction became stronger. The Nashville Republican Banner on January 4, 1868, published an editorial calling for a revolutionary movement of white Southerners to unseat the one-party state rule imposed by the Republican Party and restore the legal inferiority of the region's black population: "In this State, reconstruction has perfected itself and done its worst. It has organized a government which is as complete a closed corporation as may be found; it has placed the black man over the white as the agent and prime-mover of domination; it has constructed a system of machinery by which all free guarantees, privileges and opportunities are removed from the people... The impossibility of casting a free vote in Tennessee short of a revolutionary movement... is an undoubted fact." The Banner urged its readers to ignore the presidential election and direct their energy into building "a local movement here at home" to end Republican rule. According to the 1860 census, African Americans made up only 25% of Tennessee's population, which meant they could not dominate politics. Only a few African Americans served in the Tennessee legislature during Reconstruction, and not many more as state and city officers. However, the Nashville Banner may have been reacting to increased participation by African Americans on that city's council, where they held about one-third of the seats. Tennessee has strong Confederate memories (based in West Tennessee and Middle Tennessee), which focus on the Lost Cause theme of heroic defense of traditional liberties. To a lesser extent, Tennesseans celebrate Unionist memories, based in East Tennessee and among blacks.
what episode does pam tells jim she's pregnant
Pam Beesly - wikipedia Pamela Morgan Halpert (née Beesly; born March 25, 1979) is a fictional character on the U.S. television sitcom The Office, played by Jenna Fischer. Her counterpart in the original UK series of The Office is Dawn Tinsley. Her character is initially the receptionist at the paper distribution company Dunder Mifflin, before becoming a saleswoman and eventually office administrator, a position she holds until her termination in the series finale. Her character is shy and artistically inclined, growing more assertive while remaining amiable, and shares a romantic interest with Jim Halpert, whom she begins dating in the fourth season and marries and starts a family with as the series continues. The character was originally created to be very similar to her British counterpart, Dawn Tinsley. Even minute details, such as how Pam wore her hair each day, were considered by executive producer Greg Daniels. "When I went in for The Office, the casting director said to me, 'Please look normal,'" recalls Jenna Fischer. "Don't make yourself all pretty, and dare to bore me with your audition. Those were her words. Dare to bore me." Heeding the advice, Fischer said little during the auditions, during which she was interviewed in character by show producers in an improvisational format, to imitate the show's documentary premise. "My take on the character of Pam was that she didn't have any media training, so she didn't know how to be a good interview. And also, she didn't care about this interview," she told NPR. "So, I gave very short one-word answers and I tried very hard not to be funny or clever, because I thought that the comedy would come out of just, you know, the real human reactions to the situation... and they liked that take on it." "When I went in to the audition, the first question that they asked me in the character of Pam – they said, 'Do you like working as a receptionist?' I said, 'No.' And that was it. I didn't speak any more than that. And they started laughing." Fischer found herself creating a very elaborate backstory for the character. For the first few seasons, she kept a list of the character history revealed on-screen by the creators, as well as her own imaginative thoughts on Pam's history. She created a rule with the set's hair and make-up department that it couldn't look as though it took Pam more than 30 minutes to do her hair, and she formulated ideas as to who gave Pam each piece of jewelry she wore and where she went to college. Fischer also carefully crafted Pam's quiet persona. "Well, my character of Pam is really stuck," she explained to NPR. "I mean, she's a subordinate in this office. And so, I think that for her, the only way she can express herself is in the silences, but you can say so much by not saying anything." Originally meek and passive, the character grew more assertive as the seasons passed, prompting Fischer to reassess her portrayal. "I have to approach Pam differently (now)," she explained in Season 4, a defining season in which her character finally begins a long-awaited relationship with Jim and is accepted into the Pratt Institute. "She is in a loving relationship, she has found her voice, she has started taking art classes. All of these things must inform the character and we need to see changes in the way she moves, speaks, dresses, etc." At the beginning of the series, Pam and Roy have been dating for eight years and engaged for three years.
Their open-ended engagement has become one of Michael's running gags and a sore spot for Pam. Pam does not want her current job to become permanent, remarking that "I don't think it's many little girls' dream to be a receptionist." Pam is apathetic toward her work, as evidenced by her frequent games of FreeCell on her office computer. However, in the pilot episode, she breaks down crying when Michael pulls an ill-advised prank by telling her that she will be fired. Michael has criticized Pam for simply forwarding calls to voice mail without answering and (in a deleted scene) for not sounding enthusiastic enough when speaking on the telephone. Pam is usually happy to abandon her work if asked to do something else by Jim. She will do extra, unnecessary work (such as making a casket for a dead bird or paper doves for the Office Olympics) to make other people happy. Despite the abuse she takes from Michael, she never goes any further than calling him a jerk in the pilot. In later seasons, however, she becomes more honest and forward with Michael and will often make sarcastic comments toward him. While engaged to Roy, Pam denies, or is in denial about, having any romantic feelings for Jim. When Jim confesses his love for her at the Dunder Mifflin "Casino Night," she turns him down. She later talks to her mom on the phone and says Jim is her best friend (though she doesn't say his name), and answers "Yeah, I think I am" to an unheard question. She is interrupted by Jim, who enters and kisses her; she responds by kissing back. Season three marks a turning point for Pam's character: she gains self-confidence and appears less passive and more self-assured as the season progresses. In "Gay Witch Hunt," the season's opener, it is revealed that Pam got cold feet before her wedding and did not marry Roy after all, and that Jim transferred to a different Dunder Mifflin branch, in Stamford, shortly after Pam rejected him a second time, after their kiss. Pam moves into her own apartment, begins taking art classes – a pursuit that Roy had previously dismissed as a waste of time – and buys a new car, a blue Toyota Yaris. Jim returns to Scranton later on as a result of "The Merger" and brings along a female co-worker, Karen Filippelli, whom he begins dating. Jim and Pam appear to have ended all communication after Jim's transfer to the Stamford branch (aside from an episode in which Jim accidentally calls Pam at the end of the work day), and their episodes together following the branch merger are tense, despite both admitting to still harboring feelings for the other in the presence of the documentary cameras. Meanwhile, Roy vows to win Pam back. Roy's efforts to improve his relationship with Pam are quite successful, but once Pam and Roy are back together, he falls back into old habits almost immediately. When Roy and Pam attend an after-work get-together at a local bar with their co-workers, Pam, feeling that she should be more honest with Roy, tells him about Jim kissing her at "Casino Night." Roy yells, smashes a mirror, and trashes the bar. Pam, frightened and embarrassed by his reaction, breaks up with Roy immediately. Roy vows to kill Jim, and in "The Negotiation," Roy unsuccessfully tries to attack Jim at work (Jim is saved by Dwight's intervention) and is subsequently fired.
Pam later reluctantly agrees to meet Roy for coffee at his request, and after the polite but brief meeting, it appears that their relationship has ended amicably, with Roy encouraging Pam to pursue Jim. Pam participates in an art show, but few people attend. Her co-worker Oscar brings his partner along, who, not knowing that Pam is standing behind him, criticizes her work by proclaiming that "real art requires courage." Oscar then goes on to say that courage isn't one of Pam's strong points. Affected by this statement, Pam tells the documentary crew that she is going to be more honest, culminating in a dramatic coal walk during the next-to-last episode of the season, "Beach Games," and a seemingly sincere speech to Jim in front of the entire office about their relationship. Michael also comes to the art show and reveals his erratically kind heart and loyalty by buying, framing, and hanging Pam's drawing of the Dunder Mifflin building in the office. In the season finale, "The Job," she leaves a friendly note in Jim's briefcase, along with an old memento depicting the "gold medal" yogurt lid from the Office Olympics, which he sees during an interview for a job at corporate in New York City. While he is asked how he "would function here in New York," Jim is shown to have his mind back in Scranton, still distracted by the thought of Pam. Jim withdraws his name from consideration and drives back to the office, where he interrupts a talking head Pam is doing for the documentary crew by asking her out for dinner. She happily accepts, visibly moved, abandoning a train of thought about how she would be fine if Jim got the job and never came back to Scranton. Karen quits soon after, becoming the regional manager at Dunder Mifflin's Utica branch. In Season 4, Pam retains the assertiveness she developed in the third season. She wears her hair down and has updated her old dowdy wardrobe. In the season 4 premiere, "Fun Run," Jim and Pam confess that they have started dating after the camera crew catches them kissing. The office ultimately learns of their relationship in "Dunder Mifflin Infinity." In "Chair Model," after teasing Pam about his impending proposal, Jim tells the documentary crew he is not kidding around about an engagement and shows them a ring he bought a week after he and Pam started dating. In the next few episodes, Jim fake-proposes to Pam multiple times. In "Goodbye, Toby," Pam discovers she has been accepted at the Pratt Institute, an art and design school in Brooklyn. In an interview later in the episode, Jim announces that he will propose to Pam that evening. Just as Jim is preparing to propose, however, Andy Bernard stands up and makes his own impromptu proposal to Angela. Having had his thunder stolen by Andy, Jim reluctantly puts the ring back in his jacket pocket, leaving Pam visibly disappointed, as she was expecting Jim to propose that night. In the Season 5 premiere, "Weight Loss," Pam begins her three-month course at the Pratt Institute. In this episode, Jim proposes in the pouring rain at a rest stop, saying that he "can't wait." In "Business Trip," Pam learns that she is failing one of her classes and will have to remain in New York another three months to retake it. Although Jim is supportive and tells her he will wait for her to come back "the right way," she ultimately makes the decision to return home, saying that she realized she hated graphic design and missed Scranton.
A deleted scene from the episode shows Jim looking through Pam's graphic design projects, which he thinks are "cool," as well as a notebook filled with pencil sketches, which he finds a lot more impressive than her graphic design projects, implying her talents lie in hand-drawn works. In "Two Weeks," Pam agrees to become Michael's first saleswoman at his not-yet-established company, The Michael Scott Paper Co., as a supportive Jim looks on. When David Wallace makes an offer to buy the company, Michael negotiates to get their jobs at Dunder Mifflin back instead, including adding Pam to the sales team. In "Company Picnic," Pam, after dominating the company volleyball tournament, injures her ankle during a game and is taken to the hospital against her wishes. At the hospital, the camera crew is stationed outside an exam room while a doctor updates Jim and Pam on her condition. There is no audio as the camera shows Jim and Pam embrace, looking shocked and ecstatic. It is implied that she is pregnant, which is confirmed in the Season 6 premiere, "Gossip." Jim and Pam marry early in the season, at Niagara Falls, during the highly anticipated hour-long episode "Niagara." The ending of the episode, in which their co-workers dance down the aisle, is an imitation of a viral YouTube video, "JK Wedding Entrance Dance." Following the wedding, a multi-episode story arc begins in which it is revealed that Michael hooked up with Pam's mother on the night of the wedding. The two break up during "Double Date," an episode that ends with Pam slapping Michael in response to his actions. In "The Delivery," in Season 6, Pam and Jim have their first child, a daughter named Cecelia Marie Halpert. Jenna Fischer was granted naming rights by the show's producers and chose to name her after her own niece. In "Counseling," Pam feels inadequate about her poor performance in sales and tricks Gabe into promoting her to a phony new salaried position called office administrator. In "China," Pam tries to use her authority as office administrator to force building manager Dwight to stop his annoying cost-cutting measures. Pam threatens to move the office to a new building, which Dwight discovers doesn't exist. Pam saves face, however, when Dwight secretly has his assistant provide her with a book on building regulations that proves Dwight's measures were not allowed. The episode is another example of Dwight's covert protectiveness of and fondness for Pam (as previously demonstrated in "The Injury," "Back from Vacation," "The Job," and "Diwali"); Mindy Kaling said during an online Q&A session that Dwight has a soft spot for her that he does not extend to anyone else at the office. Pam also uses her position to buy Erin Hannon an expensive desktop computer to replace the terrible one she had to use for years, as well as discreetly giving Andy a new computer and giving Darryl three sick days. At the end of the episode she proudly says that she is "full-on corrupt." In "Goodbye, Michael," Pam almost misses saying goodbye to Michael, as she spends most of the day out of the office trying to price shredders. Jim figures out Michael's plan to leave early and tells her by text. Pam reaches the airport in time and is the last person to see Michael before he leaves. At the beginning of Season 8, Pam is revealed to be pregnant with her and Jim's second child, Philip Halpert. The pregnancy coincided with Jenna Fischer's actual pregnancy.
She begins her maternity leave after "Pam's Replacement." Pam returns in "Jury Duty," in which she and Jim bring Cece and Phillip into the office. In both "Tallahassee" and "Test the Store," Pam is shown helping Andy and developing her friendship with him. Early in season 9, Jim grows restless with his life in Scranton and helps a friend start a sports marketing business, Athlead, in Philadelphia, but he keeps it a secret from Pam until the third episode, "Andy's Ancestry." Although Pam is happy about his decision, she is concerned that he kept it a secret from her, and she is later disturbed to hear just how much of their money he has invested. Jim begins spending part of each work week in Philadelphia, but in "Customer Loyalty," the strain of this on Pam is evident when she breaks down in tears and is comforted by Brian, the boom mic operator of the film crew. In "Moving On," Pam interviews for a job in Philadelphia to be closer to Jim, but she is turned off by the idea when her prospective new boss bears a striking resemblance in behavior to Michael Scott. Over dinner, Pam reveals to Jim that she doesn't really want to move to Philadelphia after all. However, in "Livin' the Dream," when Athlead is bought out and Jim is offered a large sum of money to spend three months pitching the company across the country, Pam overhears Jim refuse the opportunity because of her and appears to have mixed feelings about the decision. In "A.A.R.M.," Pam tells Jim that she is afraid he will resent her for making him stay and that she might not be enough for him. Jim asks the camera crew to compile documentary footage of the two of them to show her. When she finishes watching the montage, which shows Jim taking back a letter he had intended to give her with his teapot gift during Christmas, Jim finally gives her that letter, and she reads it, visibly moved. In the series finale, which takes place a year later, she reveals to Jim that she secretly put their house on the market so that they can move to Austin, Texas, where he can take his job back at Athlead (now Athleap). From her years working the front desk, Pam has become well acquainted with the Dunder Mifflin staff and is consistently shown to have a thorough understanding of her coworkers' personalities, including those of the more eccentric individuals Dwight Schrute and Michael Scott. She uses this familiarity to manipulate them, often for their and the company's best interests (such as giving the staff elaborate instructions on how to handle a heartbroken Michael in "The Chump") but also occasionally for her own. This familiarity plays a large part in her efficiency as office administrator and was crucial to her being promoted to the previously nonexistent position. The "will they or won't they" tension between Jim and Pam is a strong storyline in the early episodes of The Office, encompassing much of Seasons 1 to 3. In the opener of Season 4, the two characters are revealed to be dating, and as such, other character romances, such as that between fellow co-workers Dwight Schrute and Angela Martin, begin to move more toward the forefront of episodes. In Season 6, Jim and Pam are married in the season's 4th and 5th episodes (an hour-long broadcast), a feat considered noteworthy by many television critics, as bringing together the two lead love interests in a television series is often thought to be a risky venture. Their child is born in the second half of the season, during another hour-long episode, "The Delivery."
Pam and Jim's second child is born during season 8. In season 9, their marriage becomes strained when Jim takes up a second job in Philadelphia. They ultimately decide to leave Dunder Mifflin together so Jim can pursue his dream job. When the series begins, Pam is engaged to her high school sweetheart Roy Anderson; this engagement is revealed to be three years old and running. They finally set a date, but Pam calls off the wedding at the last minute. They get back together once, briefly, but Pam is much more assertive and finally breaks up with him after he has a violent outburst. Roy is deeply flawed: he is overbearing, neglectful, dismissive of her desire to be an artist, and offers her sex as a gift on Valentine's Day. Jim comments in Season 2 that Pam does not like to "bother" Roy with her "thoughts or feelings." He tells the camera crew that the only two problems in Pam's life seemed to be Roy and her job at Dunder Mifflin. In the early seasons, there is a great deal of tension between Jim and Roy, with Roy often acting threateningly toward Jim. In "Basketball," when Jim starts to impress Pam with his basketball skills, Roy elbows Jim in the nose. In season 2, when Jim encourages Pam to pursue a graphic arts internship offered by Dunder Mifflin, Roy objects to the opportunity and eventually convinces her that the idea is foolish. Pam ultimately calls off her wedding to Roy, but they remain friendly, and he is determined to win her back by being less of a jerk. She reconciles with Roy at Phyllis's wedding in response to watching Jim date Karen. In an attempt at a fresh start with Roy, Pam comes clean about Jim kissing her during "Casino Night." Roy flies into a violent rage, and Pam ends the relationship on the spot. The next day, Roy attempts to attack Jim in the office but is stopped by Dwight's pepper spray and is summarily fired. After losing his job, Roy meets Pam for coffee and says that even though Jim is dating Karen, she should at least make an effort to date him (inasmuch as she called off the wedding because of him). In season 5, Jim and Roy run into each other at a bar, and Roy learns that Jim and Pam are engaged. The mood is somewhat awkward; Roy is congratulatory but then makes a somewhat passive-aggressive comment, seemingly meant to make Jim feel insecure about his current role in Pam's life, which tempts Jim to drive to Pratt, where she is attending art classes. Jim gets on the freeway but changes his mind, remembering that he trusts Pam; he didn't want to treat Pam the same way Roy treated her. In the series pilot, Michael is overtly rude to Pam and at one point fakes her firing, leaving her in tears. He often makes suggestive if harmless remarks about her beauty and general appearance, and at one point lies to the camera that they used to date (inspiring a horrified "WHAT???" from Pam when an interviewer relays the message to her). However, his impulsive attempt to kiss her during "Diwali" is shot down, marking the end of any romantic dreams Michael had about Pam.
Over time, the combination of Michael being supportive of her goals, her transition from a bad relationship with Roy to a great one with Jim, her finding a job she not only enjoys but is effective at in the office administrator position, and Michael finding his own soulmate in Holly Flax made Pam soften her stance toward Michael, and the experience at the Michael Scott Paper Company further bonded them (as did Michael's decision to choose Pam instead of Ryan Howard as the only MSPC salesperson to keep that job when Michael returned as branch manager). Pam was furious at Michael for dating her mother, Helene, and excoriated him at length during "The Lover" before eventually slapping him in "Double Date," but they once again were able to be civil to each other afterward. Pam does set up boundaries around her personal life that Michael can't cross, such as telling him that he wasn't Cece's godfather. By Season 7, Pam acts as something of a guardian angel for Michael, steering him away from his (numerous) bad ideas and toward his (fewer but real) good ones, such as his successful effort to propose to Holly. In Michael's last episode, "Goodbye, Michael," Pam spends the whole day looking for a shredder, believing that Michael is not leaving until the next day. As Michael takes off his microphone and heads down the airport concourse, Pam runs to him with no shoes and hugs him as he kisses her cheek. The two have a nice moment, and he walks off, leaving her holding her shoes. She then tells the camera that he was happy, wanting to be an advanced rewards member, and was glad to be going home to see Holly. She is then there to watch Michael's plane take off. In a deleted scene from "The Inner Circle," we learn that Pam is flattered that Michael named his new puppy "Pamela Beagsley," and in "The List" she playfully teases Jim by calling their second child "Little Michael Scott," further proving that the two have developed a genuine friendship. Toby, the human resources representative for Dunder Mifflin's Scranton branch, has a secret crush on Pam. In "A Benihana Christmas," she gives him her Dunder Mifflin bathrobe, a display of friendly affection, after he spent the day feeling bad that Dwight took his. In "Night Out," Toby awkwardly rubs her knee while they share a laugh (and while Jim sits just on her other side), and the rest of the office watches in horror. In his mortification, Toby immediately announces that he is moving to Costa Rica before jumping over the locked gate and fleeing. In "Goodbye, Toby," he purchases a DSLR camera just to get a picture with Pam. On the eve of his departure, she confesses to the cameras that she always thought he was "kind of cute." The crush receives less attention after Toby's return. However, in "Niagara," Pam and Jim are late for their wedding, and he is visibly excited at the prospect that the wedding might not happen. In "Finale," Pam and Toby dance with each other at Dwight's wedding, with Toby beginning to cry as Pam comforts him. When she asks, "Is it me?", he replies that "it's everything!" Pam Halpert has appeared in every episode, with the exceptions of "Business Ethics" (except for the deleted scenes), "St. Patrick's Day," and "New Leads," in which only her voice is heard, and several season 8 episodes, from "Mrs. California" to "Pool Party," in which she did not appear at all, as Fischer was on maternity leave.
how does the mls all star game work
Major League Soccer All-Star Game - wikipedia The Major League Soccer All-Star Game is an annual soccer game held by Major League Soccer featuring select players from the league against an international club. MLS initially adopted the traditional all-star game format used by other North American sports leagues, in which the Eastern Conference squared off against the Western Conference. This eventually evolved into the current system, in which the league annually invites a club from abroad to play against a league all-star team. The MLS All-Stars hold an 8–4 record in the competition, which marks the season's midpoint. Players are awarded roster spots through a combination of fan voting and selections by the appointed manager and the league commissioner. In case of a tie after full time, the game does not use a 30-minute extra time period; instead, it goes straight to a penalty shoot-out. Major League Soccer's first All-Star Game was played at Giants Stadium in the summer of 1996. The game, using the traditional East–West format with players handpicked by the coaching staffs, was the first game of a doubleheader in which the Brazilian national team defeated a team of FIFA World All-Stars. The matchup between conferences would only be used for six seasons, as MLS experimented with different formats. The 1998 All-Star Game placed a team of American MLS players against MLS players from abroad. The 2002 game, the first to use a league-wide all-star team, is the only game to have featured a national team opponent. Since then (except in 2004), every opponent has been a professional club invited by the league. The MLS All-Stars won their first six games before falling to Everton on penalties in 2009. All games since 2005 have been against teams from Europe, the majority of them from England's Premier League. This format is feasible because, unlike MLS (which runs a spring-summer-fall schedule), the major European leagues all run a fall-winter-spring schedule, meaning the middle of the MLS season coincides with the pre-seasons of most European leagues. From a European perspective, the match is considered a pre-season friendly against an MLS XI. For 2014, ten players were chosen by All-Star Game coach Caleb Porter, eleven players were chosen by fan voting (subject to Porter's approval), and two were selected by MLS commissioner Don Garber.
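The tie-breaking rule described above is simple enough to state as a decision procedure. The following minimal Python sketch is purely illustrative: the function name, inputs, and example scores are hypothetical, not any official MLS rule engine, and it assumes the shoot-out itself is decisive (in practice a shoot-out continues until a winner emerges).

    def all_star_result(mls_goals, guest_goals, mls_pens=None, guest_pens=None):
        """Outcome of an All-Star Game: no extra time, straight to penalties."""
        if mls_goals != guest_goals:
            return "MLS All-Stars" if mls_goals > guest_goals else "Guest club"
        # Level after 90 minutes: the format skips the 30-minute extra time
        # period and goes directly to a penalty shoot-out.
        if mls_pens is None or guest_pens is None:
            raise ValueError("tied after full time: shoot-out scores required")
        return "MLS All-Stars" if mls_pens > guest_pens else "Guest club"

    # Placeholder scores, not a record of any particular game.
    print(all_star_result(1, 1, mls_pens=3, guest_pens=4))  # -> Guest club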
how did society and culture influence the development of science and technology
Technology and society - wikipedia Technology and society, or technology and culture, refers to the cyclical co-dependence, co-influence, and co-production of technology and society upon each other (technology upon culture, and vice versa). This synergistic relationship has existed since the dawn of humankind, beginning with the invention of simple tools, and continues with modern technologies such as the printing press and computers. The academic discipline that studies the mutual impacts of science, technology, and society is called science and technology studies. The importance of stone tools, circa 2.5 million years ago, is considered fundamental to human development in the hunting hypothesis. Primatologist Richard Wrangham theorizes that the control of fire by early humans and the associated development of cooking was the spark that radically changed human evolution. Texts such as Guns, Germs, and Steel suggest that early advances in plant agriculture and husbandry fundamentally shifted the way that collective groups of individuals, and eventually societies, developed. Technology has become a huge part of society and day-to-day life. When societies know more about the development of a technology, they become able to take advantage of it. When an innovation reaches a certain point after it has been presented and promoted, the technology becomes part of the society. Digital technology has entered each process and activity made by the social system; in fact, it has constructed another worldwide communication system in addition to the original one. A 1982 study by The New York Times described a technology assessment study by the Institute for the Future, "peering into the future of an electronic world." The study focused on the emerging videotex industry, formed by the marriage of two older technologies, communications and computing. It estimated that 40 percent of American households would have two-way videotex service by the end of the century. By comparison, it took television 16 years to penetrate 90 percent of households from the time commercial service began. Since the creation of computers, an entirely better approach to transmitting and storing data has been achieved. Digital technology became commonly used for downloading music and watching movies at home, either on DVDs or by purchasing content online. Digital music records are not quite the same as traditional recording media: they are reproducible, portable, and free. However, although these examples show only a few of the positive aspects of technology in society, there are negative side effects as well. Within this virtual realm, social media platforms such as Instagram, Facebook, and Snapchat have altered the way Generation Y culture understands the world and thus how its members view themselves. In recent years, there has been more research on the development of social media depression in users of sites like these. "Facebook depression" is when users are so affected by their friends' posts and lives that their own jealousy depletes their sense of self-worth. They compare themselves to the posts made by their peers and feel unworthy or monotonous because they feel that their own lives are not nearly as exciting as the lives of others. Another instance of the negative effects of technology in society is how quickly it is pushing younger generations into maturity. With the world at their fingertips, children can learn anything they wish to.
But with the uncensored sources on the internet, and without proper supervision, children can be exposed to explicit material at inappropriate ages. This comes in the form of premature interest in experimenting with makeup or opening an email account or social media page – all of which can become a window for predators and other dangerous entities that threaten a child's innocence. Technology also has a serious effect on young people's health: the overuse of technology is said to be associated with sleep deprivation, which is linked to obesity and poor academic performance in the lives of adolescents. In ancient history, economics began when occasional, spontaneous exchange of goods and services was replaced over time by deliberate trade structures. Makers of arrowheads, for example, might have realized they could do better by concentrating on making arrowheads and bartering for their other needs. Regardless of the goods and services bartered, some amount of technology was involved, if no more than in the making of shell and bead jewelry. Even the shaman's potions and sacred objects can be said to have involved some technology. So, from the very beginning, technology can be said to have spurred the development of more elaborate economies. In the modern world, superior technologies, resources, geography, and history give rise to robust economies; and in a well-functioning, robust economy, economic excess naturally flows into greater use of technology. Moreover, because technology is such an inseparable part of human society, especially in its economic aspects, funding sources for (new) technological endeavors are virtually illimitable. However, while in the beginning technological investment involved little more than the time, efforts, and skills of one or a few men, today such investment may involve the collective labor and skills of many millions. Consequently, the sources of funding for large technological efforts have dramatically narrowed, since few have ready access to the collective labor of a whole society, or even a large part of it. It is conventional to divide funding sources into governmental (involving whole, or nearly whole, social enterprises) and private (involving more limited, but generally more sharply focused) business or individual enterprises. The government is a major contributor to the development of new technology in many ways. In the United States alone, many government agencies specifically invest billions of dollars in new technology. (In 1980, the UK government invested just over £6 million in a four-year program, later extended to six years, called the Microelectronics Education Programme (MEP), which was intended to give every school in Britain at least one computer, software, training materials, and extensive teacher training. Similar programs have been instituted by governments around the world.) Technology has frequently been driven by the military, with many modern applications developed for the military before they were adapted for civilian use. However, this has always been a two-way flow, with industry often developing and adopting a technology only later adopted by the military. Entire government agencies are specifically dedicated to research, such as America's National Science Foundation, the United Kingdom's scientific research institutes, and America's Small Business Innovation Research effort. Many other government agencies dedicate a major portion of their budget to research and development.
Research and development is one of the biggest areas of investment made by corporations toward new and innovative technology. Many foundations and other nonprofit organizations contribute to the development of technology. In the OECD, about two - thirds of research and development in scientific and technical fields is carried out by industry, and 20 percent and 10 percent respectively by universities and government. But in poorer countries such as Portugal and Mexico the industry contribution is significantly less. The U.S. government spends more than other countries on military research and development, although the proportion has fallen from about 30 percent in the 1980s to less than 10 percent. The 2009 founding of Kickstarter allows individuals to receive funding via crowdfunding for many technology - related products, including new physical creations as well as documentaries, films, and web series that focus on technology management. This circumvents the corporate or government oversight most inventors and artists struggle against but leaves the accountability of the project completely with the individual receiving the funds. The implementation of technology influences the values of a society by changing expectations and realities. The implementation of technology is also influenced by values. There are (at least) three major, interrelated values that inform, and are informed by, technological innovations: Technology often enables organizational and bureaucratic group structures that otherwise and heretofore were simply not possible. Examples of this might include: Technology enables greater knowledge of international issues, values, and cultures. Due mostly to mass transportation and mass media, the world seems to be a much smaller place, due to the following: Technology provides an understanding of, and an appreciation for, the world around us. Most modern technological processes produce unwanted by - products in addition to the desired products, which is known as industrial waste and pollution. While most material waste is re-used in the industrial process, many forms are released into the environment, with negative environmental side effects, such as pollution and lack of sustainability. Different social and political systems establish different balances between the value they place on additional goods versus the disvalues of waste products and pollution. Some technologies are designed specifically with the environment in mind, but most are designed first for economic or ergonomic effects. Historically, the value of a clean environment and more efficient productive processes has been the result of an increase in the wealth of society, because once people are able to provide for their basic needs, they are able to focus on less - tangible goods such as clean air and water. The effects of technology on the environment are both obvious and subtle. The more obvious effects include the depletion of nonrenewable natural resources (such as petroleum, coal, and ores), and the added pollution of air, water, and land. The more subtle effects include debates over long - term effects (e.g., global warming, deforestation, natural habitat destruction, coastal wetland loss). Each wave of technology creates a set of waste previously unknown by humans: toxic waste, radioactive waste, electronic waste. One of the main problems is the lack of an effective way to remove these pollutants on a large scale expediently.
In nature, organisms "recycle '' the wastes of other organisms. For example, plants produce oxygen as a by - product of photosynthesis; oxygen - breathing organisms use that oxygen to metabolize food, producing carbon dioxide as a by - product, which plants in turn use in photosynthesis to make sugar, releasing oxygen once again. No such mechanism exists for the removal of technological wastes. Society also controls technology through the choices it makes. These choices not only include consumer demands; they also include: According to Williams and Edge, the construction and shaping of technology includes the concept of choice (and not necessarily conscious choice). Choice is inherent both in the design of individual artifacts and systems, and in the making of those artifacts and systems. The idea here is that a single technology may not emerge from the unfolding of a predetermined logic or a single determinant; technology could be a garden of forking paths, with different paths potentially leading to different technological outcomes. This is a position that has been developed in detail by Judy Wajcman. Therefore, choices could have differing implications for society and for particular social groups. In one line of thought, technology develops autonomously; in other words, technology seems to feed on itself, moving forward with a force that humans cannot resist. To these individuals, technology is "inherently dynamic and self - augmenting. '' Jacques Ellul is one proponent of the idea that technology is irresistible to humans. He espouses the idea that humanity cannot resist the temptation of expanding its knowledge and its technological abilities. However, he does not believe that this seeming autonomy of technology is inherent; rather, the autonomy is perceived because humans do not adequately consider the responsibility that is inherent in technological processes. Langdon Winner critiques the idea that technological evolution is essentially beyond the control of individuals or society in his book Autonomous Technology. He argues instead that the apparent autonomy of technology is a result of "technological somnambulism, '' the tendency of people to uncritically and unreflectively embrace and utilize new technologies without regard for their broader social and political effects. Individuals rely on governmental assistance to control the side effects and negative consequences of technology. Recently, the social shaping of technology has had new influence in the fields of e-science and e-social science in the United Kingdom, which has made centers focusing on the social shaping of science and technology a central part of its funding programs.
what was the primary complaint of the rebels in the whiskey rebellion
Whiskey Rebellion - Wikipedia The Whiskey Rebellion (also known as the Whiskey Insurrection) was a tax protest in the United States beginning in 1791 during the presidency of George Washington. The so - called "whiskey tax '' was the first tax imposed on a domestic product by the newly formed federal government. It became law in 1791, and was intended to generate revenue for the war debt incurred during the Revolutionary War. The tax applied to all distilled spirits, but American whiskey was by far the country 's most popular distilled beverage in the 18th century, so the excise became widely known as a "whiskey tax ''. Farmers of the western frontier were accustomed to distilling their surplus rye, barley, wheat, corn, or fermented grain mixtures into whiskey. These farmers resisted the tax. In these regions, whiskey often served as a medium of exchange. Many of the resisters were war veterans who believed that they were fighting for the principles of the American Revolution, in particular against taxation without local representation, while the federal government maintained that the taxes were the legal expression of Congressional taxation powers. Throughout Western Pennsylvania counties, protesters used violence and intimidation to prevent federal officials from collecting the tax. Resistance came to a climax in July 1794, when a U.S. marshal arrived in western Pennsylvania to serve writs to distillers who had not paid the excise. The alarm was raised, and more than 500 armed men attacked the fortified home of tax inspector General John Neville. Washington responded by sending peace commissioners to western Pennsylvania to negotiate with the rebels, while at the same time calling on governors to send a militia force to enforce the tax. Washington himself rode at the head of an army to suppress the insurgency, with 13,000 militiamen provided by the governors of Virginia, Maryland, New Jersey, and Pennsylvania. The rebels all went home before the arrival of the army, and there was no confrontation. About 20 men were arrested, but all were later acquitted or pardoned. Most distillers in nearby Kentucky were found to be all but impossible to tax -- in the next six years, over 175 distillers from Kentucky were convicted of violating the tax law. Numerous examples of resistance are recorded in court documents and newspaper accounts. The Whiskey Rebellion demonstrated that the new national government had the will and ability to suppress violent resistance to its laws, though the whiskey excise remained difficult to collect. The events contributed to the formation of political parties in the United States, a process already underway. The whiskey tax was repealed in the early 1800s during the Jefferson administration. A new U.S. federal government began operating in 1789, following the ratification of the United States Constitution. The previous central government under the Articles of Confederation had been unable to levy taxes; it had borrowed money to meet expenses and fund the Revolution, accumulating $54 million in debt. The state governments had amassed an additional $25 million in debt. Secretary of the Treasury Alexander Hamilton sought to use this debt to create a financial system that would promote American prosperity and national unity. In his Report on Public Credit, he urged Congress to consolidate the state and national debts into a single debt that would be funded by the federal government. Congress approved these measures in June and July 1790.
A source of government revenue was needed to pay the respectable amount due to the previous bondholders to whom the debt was owed. By December 1790, Hamilton believed that import duties, which were the government 's primary source of revenue, had been raised as high as feasible. He therefore promoted passage of an excise tax on domestically produced distilled spirits. This was to be the first tax levied by the national government on a domestic product. Whiskey was by far the most popular distilled beverage in late 18th - century America, so the excise became known as the "whiskey tax. '' Taxes were politically unpopular, and Hamilton believed that the whiskey excise was a luxury tax and would be the least objectionable tax that the government could levy. In this, he had the support of some social reformers, who hoped that a "sin tax '' would raise public awareness about the harmful effects of alcohol. The whiskey excise act, sometimes known as the "Whiskey Act '', became law in March 1791. George Washington defined the revenue districts, appointed the revenue supervisors and inspectors, and set their pay in November 1791. The population of Western Pennsylvania was 17,000 in 1790. Among the farmers in the region, the whiskey excise was immediately controversial, with many people on the frontier arguing that it unfairly targeted westerners. Whiskey was a popular drink, and farmers often supplemented their incomes by operating small stills. Farmers living west of the Appalachian Mountains distilled their excess grain into whiskey, which was easier and more profitable to transport over the mountains than the more cumbersome grain. A whiskey tax would make western farmers less competitive with eastern grain producers. Additionally, cash was always in short supply on the frontier, so whiskey often served as a medium of exchange. For poorer people who were paid in whiskey, the excise was essentially an income tax that wealthier easterners did not pay. Small - scale farmers also protested that Hamilton 's excise effectively gave unfair tax breaks to large distillers, most of whom were based in the east. There were two methods of paying the whiskey excise: paying a flat fee or paying by the gallon. Large distillers produced whiskey in volume and could afford the flat fee. The more efficient they became, the less tax per gallon they would pay (as low as 6 cents, according to Hamilton). Western farmers who owned small stills did not usually operate them year - round at full capacity, so they ended up paying a higher tax per gallon (9 cents), which made them less competitive. The regressive nature of the tax was further compounded by an additional factor: whiskey sold for considerably less on the cash - poor Western frontier than in the wealthier and more populous East. This meant that, even if all distillers had been required to pay the same amount of tax per gallon, the small - scale frontier distillers would still have to remit a considerably larger proportion of their product 's value than larger Eastern distillers. Small - scale distillers believed that Hamilton deliberately designed the tax to ruin them and promote big business, a view endorsed by some historians. However, historian Thomas Slaughter argued that a "conspiracy of this sort is difficult to document ''. Whether by design or not, large distillers recognized the advantage that the excise gave them and they supported it. Other aspects of the excise law also caused concern. 
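Before turning to those other aspects, the regressive arithmetic just described can be made concrete. The following is a minimal sketch in Python: only the 6-cent and 9-cent per-gallon rates come from the text above, while the flat fee, the production volume, and the regional whiskey prices are hypothetical numbers chosen for illustration.

```python
# A back-of-the-envelope sketch of why the excise was regressive.
# Only the 6-cent and 9-cent per-gallon rates come from the text above;
# the flat fee, production volume, and whiskey prices are hypothetical.

def effective_rate_per_gallon(flat_fee: float, gallons: int) -> float:
    """Per-gallon cost of a flat annual fee; it falls as output rises."""
    return flat_fee / gallons

# A large eastern distiller running near capacity all year (numbers chosen
# so the effective rate lands near Hamilton's 6-cent figure):
print(effective_rate_per_gallon(flat_fee=600.0, gallons=10_000))  # 0.06

# A small frontier still paying by the gallon owes 9 cents regardless of
# volume, and the burden relative to the product's value depends on the
# local price of whiskey (both prices are assumptions):
PER_GALLON_TAX = 0.09
for region, price in [("east", 0.50), ("frontier", 0.25)]:
    print(f"{region}: tax is {PER_GALLON_TAX / price:.0%} of a gallon's value")
```

Run as written, the sketch reproduces the pattern the paragraph describes: a flat fee rewards volume, and a fixed per-gallon rate takes the largest share of a gallon's value where whiskey sells for the least.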
The law required all stills to be registered, and those cited for failure to pay the tax had to appear in distant Federal, rather than local, courts. The only Federal courthouse was in Philadelphia, some 300 miles away from the small frontier settlement of Pittsburgh. From the beginning, the Federal government had little success in collecting the whiskey tax along the frontier. Many small western distillers simply refused to pay the tax. Federal revenue officers and local residents who assisted them bore the brunt of the protesters ' ire. Tax rebels harassed several whiskey tax collectors and threatened or beat those who offered them office space or housing. As a result, many western counties never had a resident Federal tax official. In addition to the whiskey tax, westerners had a number of other grievances with the national government, chief among which was the perception that the government was not adequately protecting the residents living on the western frontier. The Northwest Indian War was going badly for the United States, with major losses in 1791. Furthermore, westerners were prohibited by Spain (which then owned Louisiana) from using the Mississippi River for commercial navigation. Until these issues were addressed, westerners felt that the government was ignoring their security and economic welfare. Adding the whiskey excise to these existing grievances only increased tensions on the frontier. Many residents of the western frontier petitioned against passage of the whiskey excise. When that failed, some western Pennsylvanians organized extralegal conventions to advocate repeal of the law. Opposition to the tax was particularly prevalent in four southwestern counties: Allegheny, Fayette, Washington, and Westmoreland. A preliminary meeting held on July 27, 1791, at Redstone Old Fort in Fayette County called for the selection of delegates to a more formal assembly, which convened in Pittsburgh in early September 1791. The Pittsburgh convention was dominated by moderates such as Hugh Henry Brackenridge, who hoped to prevent the outbreak of violence. The convention sent a petition for redress of grievances to the Pennsylvania Assembly and the U.S. House of Representatives, both located in Philadelphia. As a result of this and other petitions, the excise law was modified in May 1792. Changes included a 1 - cent reduction in the tax that was advocated by William Findley, a congressman from western Pennsylvania, but the new excise law was still unsatisfactory to many westerners. Appeals to nonviolent resistance were unsuccessful. On September 11, 1791, a recently appointed tax collector named Robert Johnson was tarred and feathered by a disguised gang in Washington County. A man sent by officials to serve court warrants to Johnson 's attackers was whipped, tarred, and feathered. Because of these and other violent attacks, the tax went uncollected in 1791 and early 1792. The attackers modeled their actions on the protests of the American Revolution. Supporters of the excise argued that there was a difference between taxation without representation in colonial America, and a tax laid by the elected representatives of the American people. Older accounts of the Whiskey Rebellion portrayed it as being confined to western Pennsylvania, yet there was opposition to the whiskey tax in the western counties of every other state in Appalachia (Maryland, Virginia, North Carolina, South Carolina, and Georgia).
The whiskey tax went uncollected throughout the frontier state of Kentucky, where no one could be convinced to enforce the law or prosecute evaders. In 1792, Hamilton advocated military action to suppress violent resistance in western North Carolina, but Attorney General Edmund Randolph argued that there was insufficient evidence to legally justify such a reaction. In August 1792, a second convention was held in Pittsburgh to discuss resistance to the whiskey tax. This meeting was more radical than the first convention; moderates such as Brackenridge and Findley were not in attendance. Future Secretary of the Treasury Albert Gallatin was one moderate who did attend, to his later regret. A militant group known as the Mingo Creek Association dominated the convention and issued radical demands. As some of them had done in the American Revolution, they raised liberty poles, formed committees of correspondence, and took control of the local militia. They created an extralegal court and discouraged lawsuits for debt collection and foreclosures. Hamilton regarded the second Pittsburgh convention as a serious threat to the operation of the laws of the federal government. In September 1792, he sent Pennsylvania tax official George Clymer to western Pennsylvania to investigate. Clymer only increased tensions with a clumsy attempt at traveling in disguise and attempting to intimidate local officials. His somewhat exaggerated report greatly influenced the decisions made by the Washington administration. Washington and Hamilton viewed resistance to federal laws in Pennsylvania as particularly embarrassing, since the national capital was then located in the same state. On his own initiative, Hamilton drafted a presidential proclamation denouncing resistance to the excise laws and submitted it to Attorney General Randolph, who toned down some of the language. Washington signed the proclamation on September 15, 1792, and it was published as a broadside and printed in many newspapers. Federal tax inspector for western Pennsylvania General John Neville was determined to enforce the excise law. He was a prominent politician and wealthy planter -- and also a large - scale distiller. He had initially opposed the whiskey tax, but subsequently changed his mind, a reversal that angered some western Pennsylvanians. In August 1792, Neville rented a room in Pittsburgh for his tax office, but the landlord turned him out after being threatened with violence by the Mingo Creek Association. From this point on, tax collectors were not the only people targeted in Pennsylvania; those who cooperated with federal tax officials also faced harassment. Anonymous notes and newspaper articles signed by "Tom the Tinker '' threatened those who complied with the whiskey tax. Those who failed to heed the warnings might have their barns burned or their stills destroyed. Resistance to the excise tax continued through 1793 in the frontier counties of Appalachia. Opposition remained especially strident in western Pennsylvania. In June, Neville was burned in effigy by a crowd of about 100 people in Washington County. On the night of November 22, 1793, men broke into the home of tax collector Benjamin Wells in Fayette County. Wells was, like Neville, one of the wealthier men in the region. At gunpoint, the intruders forced him to surrender his commission. President Washington offered a reward for the arrest of the assailants, to no avail. The resistance came to a climax in 1794. 
In May of that year, federal district attorney William Rawle issued subpoenas for more than 60 distillers in Pennsylvania who had not paid the excise tax. Under the law then in effect, distillers who received these writs would be obligated to travel to Philadelphia to appear in federal court. For farmers on the western frontier, such a journey was expensive, time - consuming, and beyond their means. At the urging of William Findley, Congress modified this law on June 5, 1794, allowing excise trials to be held in local state courts. But by that time, U.S. marshal David Lenox had already been sent to serve the writs summoning delinquent distillers to Philadelphia. Attorney General William Bradford later maintained that the writs were meant to compel compliance with the law, and that the government did not actually intend to hold trials in Philadelphia. The timing of these events later proved to be controversial. Findley was a bitter political foe of Hamilton, and he maintained in his book on the insurrection that the treasury secretary had deliberately provoked the uprising by issuing the subpoenas just before the law was made less onerous. In 1963, historian Jacob Cooke, an editor of Hamilton 's papers, regarded this charge as "preposterous '', calling it a "conspiracy thesis '' that overstated Hamilton 's control of the federal government. In 1986, historian Thomas Slaughter argued that the outbreak of the insurrection at this moment was due to "a string of ironic coincidences '', although "the question about motives must always remain ''. In 2006, William Hogeland argued that Hamilton, Bradford, and Rawle intentionally pursued a course of action that would provoke "the kind of violence that would justify federal military suppression ''. According to Hogeland, Hamilton had been working towards this moment since the Newburgh Crisis in 1783, where he conceived of using military force to crush popular resistance to direct taxation for the purpose of promoting national unity and enriching the creditor class at the expense of common taxpayers. Historian S.E. Morison believed that Hamilton, in general, wished to enforce the excise law "more as a measure of social discipline than as a source of revenue ''. Federal Marshal Lenox delivered most of the writs without incident. On July 15, he was joined on his rounds by General Neville, who had offered to act as his guide in Allegheny County. That evening, warning shots were fired at the men at the Miller farm, about 10 mi (16 km) south of Pittsburgh. Neville returned home while Lenox retreated to Pittsburgh. On July 16, at least 30 Mingo Creek militiamen surrounded Neville 's fortified home of Bower Hill. They demanded the surrender of the federal marshal, whom they believed to be inside. Neville responded by firing a gunshot that mortally wounded Oliver Miller, one of the "rebels ''. The rebels opened fire but were unable to dislodge Neville, who had his slaves ' help to defend the house. The rebels retreated to nearby Couch 's Fort to gather reinforcements. The next day, the rebels returned to Bower Hill. Their force had swelled to nearly 600 men, now commanded by Major James McFarlane, a veteran of the Revolutionary War. Neville had also received reinforcements: 10 U.S. Army soldiers from Pittsburgh under the command of Major Abraham Kirkpatrick, Neville 's brother - in - law. Before the rebel force arrived, Kirkpatrick had Neville leave the house and hide in a nearby ravine. 
David Lenox and General Neville 's son Presley Neville also returned to the area, though they could not get into the house and were captured by the rebels. Following some fruitless negotiations, the women and children were allowed to leave the house, and then both sides began firing. After about an hour, McFarlane called a ceasefire; according to some, a white flag had been waved in the house. As McFarlane stepped into the open, a shot rang out from the house, and he fell mortally wounded. The enraged rebels then set fire to the house, including the slave quarters, and Kirkpatrick surrendered. The number of casualties at Bower Hill is unclear; McFarlane and one or two other militiamen were killed; one U.S. soldier may have died from wounds received in the fight. The rebels sent the U.S. soldiers away. Kirkpatrick, Lenox, and Presley Neville were kept as prisoners, but they later escaped. McFarlane was given a hero 's funeral on July 18. His "murder '', as the rebels saw it, further radicalized the countryside. Moderates such as Brackenridge were hard - pressed to restrain the populace. Radical leaders emerged, such as David Bradford, urging violent resistance. On July 26, a group headed by Bradford robbed the U.S. mail as it left Pittsburgh, hoping to discover who in that town opposed them and finding several letters that condemned the rebels. Bradford and his band called for a military assembly to meet at Braddock 's Field, about 8 mi (13 km) east of Pittsburgh. On August 1, about 7,000 people gathered at Braddock 's Field. The crowd consisted primarily of poor people who owned no land, and most did not own whiskey stills. The furor over the whiskey excise had unleashed anger about other economic grievances. By this time, the victims of violence were often wealthy property owners who had no connection to the whiskey tax. Some of the most radical protesters wanted to march on Pittsburgh, which they called "Sodom '', loot the homes of the wealthy, and then burn the town to the ground. Others wanted to attack Fort Fayette. There was praise for the French Revolution and calls for bringing the guillotine to America. David Bradford, it was said, was comparing himself to Robespierre, a leader of the French Reign of Terror. At Braddock 's Field, there was talk of declaring independence from the United States and of joining with Spain or Great Britain. Radicals flew a specially designed flag that proclaimed their independence. The flag had six stripes, one for each county represented at the gathering: the Pennsylvania counties of Allegheny, Bedford, Fayette, Washington, and Westmoreland, and Virginia 's Ohio County. Pittsburgh citizens helped to defuse the threat by banishing three men whose intercepted letters had given offense to the rebels, and by sending a delegation to Braddock 's Field that expressed support for the gathering. Brackenridge prevailed upon the crowd to limit the protest to a defiant march through the town. In Pittsburgh, Major Kirkpatrick 's barns were burned, but nothing else. A convention of 226 whiskey rebels from the six counties was held on August 14 at Parkison 's Ferry (now known as Whiskey Point) in present - day Monongahela. The convention considered resolutions which were drafted by Brackenridge, Gallatin, David Bradford, and an eccentric preacher named Herman Husband, a delegate from Bedford County. Husband was a well - known local figure and a radical champion of democracy who had taken part in the Regulator movement in North Carolina 25 years earlier.
The Parkison 's Ferry convention also appointed a committee to meet with the peace commissioners who had been sent west by President Washington. There, Gallatin presented an eloquent speech in favor of peace and against proposals from Bradford to further revolt. President Washington was confronted with what appeared to be an armed insurrection in western Pennsylvania, and he proceeded cautiously while determined to maintain governmental authority. He did not want to alienate public opinion, so he asked his cabinet for written opinions about how to deal with the crisis. The cabinet recommended the use of force, except for Secretary of State Edmund Randolph who urged reconciliation. Washington did both: he sent commissioners to meet with the rebels while raising a militia army. Washington privately doubted that the commissioners could accomplish anything, and believed that a military expedition would be needed to suppress further violence. For this reason, historians have sometimes charged that the peace commission was sent only for the sake of appearances, and that the use of force was never in doubt. Historians Stanley Elkins and Eric McKitrick argued that the military expedition was "itself a part of the reconciliation process '', since a show of overwhelming force would make further violence less likely. Meanwhile, Hamilton began publishing essays under the name of "Tully '' in Philadelphia newspapers, denouncing mob violence in western Pennsylvania and advocating military action. Democratic - Republican Societies had been formed throughout the country, and Washington and Hamilton believed that they were the source of civic unrest. "Historians are not yet agreed on the exact role of the societies '' in the Whiskey Rebellion, wrote historian Mark Spencer in 2003, "but there was a degree of overlap between society membership and the Whiskey Rebels ''. Before troops could be raised, the Militia Act of 1792 required a justice of the United States Supreme Court to certify that law enforcement was beyond the control of local authorities. On August 4, 1794, Justice James Wilson delivered his opinion that western Pennsylvania was in a state of rebellion. On August 7, Washington issued a presidential proclamation announcing, with "the deepest regret '', that the militia would be called out to suppress the rebellion. He commanded insurgents in western Pennsylvania to disperse by September 1. In early August 1794, Washington dispatched three commissioners to the west, all of them Pennsylvanians: Attorney General William Bradford, Justice Jasper Yeates of the Pennsylvania Supreme Court, and Senator James Ross. Beginning on August 21, the commissioners met with a committee of westerners that included Brackenridge and Gallatin. The government commissioners told the committee that it must unanimously agree to renounce violence and submit to U.S. laws, and that a popular referendum must be held to determine if the local people supported the decision. Those who agreed to these terms would be given amnesty from further prosecution. The committee was divided between radicals and moderates, and narrowly passed a resolution agreeing to submit to the government 's terms. The popular referendum was held on September 11 and also produced mixed results. Some townships overwhelmingly supported submitting to U.S. law, but opposition to the government remained strong in areas where poor and landless people predominated. The final report of the commissioners recommended the use of the military to enforce the laws. 
The trend was towards submission, however, and westerners dispatched representatives William Findley and David Redick to meet with Washington and to halt the progress of the oncoming army. Washington and Hamilton declined, arguing that violence was likely to re-emerge if the army turned back. Under the authority of the recently passed federal militia law, the state militias were called up by the governors of New Jersey, Maryland, Virginia, and Pennsylvania. The federalized militia force of 12,950 men was a large army by American standards of the time, comparable to Washington 's armies during the Revolution. Relatively few men volunteered for militia service, so a draft was used to fill out the ranks. Draft evasion was widespread, and conscription efforts resulted in protests and riots, even in eastern areas. Three counties in eastern Virginia were the scenes of armed draft resistance. In Maryland, Governor Thomas Sim Lee sent 800 men to quash an antidraft riot in Hagerstown; about 150 people were arrested. Liberty poles were raised in various places as the militia was recruited, worrying federal officials. A liberty pole was raised in Carlisle, Pennsylvania on September 11, 1794. The federalized militia arrived in that town later that month and rounded up suspected pole - raisers. Two civilians were killed in these operations. On September 29, an unarmed boy was shot by an officer whose pistol accidentally fired. Two days later, an "Itinerant Person '' was "Bayoneted '' to death by a soldier while resisting arrest (the man had tried to wrest the rifle from the soldier he confronted; it is possible he had been a member of a 500 - strong Irish work crew nearby who were "digging, a canal into the Sculkill '' (sic); at least one of that work gang 's members protested the killing so vigorously that he was "put under guard ''). President Washington ordered the arrest of the two soldiers and had them turned over to civilian authorities. A state judge determined that the deaths had been accidental, and the soldiers were released. In October 1794, Washington traveled west to review the progress of the military expedition. According to historian Joseph Ellis, this was "the first and only time a sitting American president led troops in the field ''. Jonathan Forman, who led the Third Infantry Regiment of New Jersey troops against the Whiskey Rebellion, wrote about his encounter with Washington: October 3d Marched early in the morning for Harrisburgh, where we arrived about 12 O'clock. About 1 O'Clock recd. information of the Presidents approach on which, I had the regiment paraded, timely for his reception, & considerably to my satisfaction. Being afterwards invited to his quarters he made enquiry into the circumstances of the man (an incident between an "Itinerant Person '' and "an Old Soldier '' mentioned earlier in the journal (p. 3)) & seemed satisfied with the information. Washington met with the western representatives in Bedford, Pennsylvania on October 9 before going to Fort Cumberland in Maryland to review the southern wing of the army. He was convinced that the federalized militia would meet little resistance, and he placed the army under the command of Virginia Governor Henry "Lighthorse Harry '' Lee, a hero of the Revolutionary War. Washington returned to Philadelphia; Hamilton remained with the army as civilian adviser. Daniel Morgan, a general key to the winning of the American Revolution, was called up to lead a force to suppress the protest.
It was at this time (1794) that Morgan was promoted to Major General. Serving under General "Light - Horse Harry '' Lee, Morgan led one wing of the militia army into Western Pennsylvania. The massive show of force brought an end to the protests without a shot being fired. After the uprising had been suppressed, Morgan commanded the remnant of the army that remained until 1795 in Pennsylvania, some 1,200 militiamen, one of whom was Meriwether Lewis. The insurrection collapsed as the federal army marched west into western Pennsylvania in October 1794. Some of the most prominent leaders of the insurrection, such as David Bradford, fled westward to safety. It took six months for those who were charged to be tried. Most were acquitted due to mistaken identity, unreliable testimony, and lack of witnesses. The only two convicted of treason and sentenced to hang were John Mitchell and Philip Wigle. They were later pardoned by Washington. Immediately before the arrests "... as many as 2,000 of (the rebels) -- had fled into the mountains, beyond the reach of the militia. It was a great disappointment to Hamilton, who had hoped to bring rebel leaders such as David Bradford to trial in Philadelphia -- and possibly see them hanged for treason. Instead, when the militia at last turned back, out of all the suspects they had seized a mere twenty were selected to serve as examples. They were at worst bit players in the uprising, but they were better than nothing. '' The captured participants and the Federal militia arrived in Philadelphia on Christmas Day. Some artillery was fired and church bells were heard as "... a huge throng lined Broad Street to cheer the troops and mock the rebels... (Presley) Neville said he ' could not help feeling sorry for them. ' The captured rebels were paraded down Broad Street being ' humiliated, bedraggled, (and) half - starved... ' '' Other accounts describe the indictment of 24 men for high treason. Most of the accused had eluded capture, so only ten men stood trial for treason in federal court. Of these, only Philip Wigle and John Mitchell were convicted. Wigle had beaten up a tax collector and burned his house; Mitchell was a simpleton who had been convinced by David Bradford to rob the U.S. mail. Both men were sentenced to death by hanging, but they were pardoned by President Washington. Pennsylvania state courts were more successful in prosecuting lawbreakers, securing numerous convictions for assault and rioting. While violent opposition to the whiskey tax ended, political opposition to the tax continued. Opponents of internal taxes rallied around the candidacy of Thomas Jefferson and helped him defeat President John Adams in the election of 1800. By 1802, Congress repealed the distilled spirits excise tax and all other internal Federal taxes. Until the War of 1812, the Federal government would rely solely on import tariffs for revenue, which quickly grew with the Nation 's expanding foreign trade. The Washington administration 's suppression of the Whiskey Rebellion met with widespread popular approval. The episode demonstrated that the new national government had the willingness and ability to suppress violent resistance to its laws. It was, therefore, viewed by the Washington administration as a success, a view that has generally been endorsed by historians. The Washington administration and its supporters usually did not mention, however, that the whiskey excise remained difficult to collect, and that many westerners continued to refuse to pay the tax.
The events contributed to the formation of political parties in the United States, a process already underway. The whiskey tax was repealed after Thomas Jefferson 's Republican Party, which opposed the Federalist Party of Hamilton and Washington, came to power in 1801. The Rebellion raised the question of what kinds of protests were permissible under the new Constitution. Legal historian Christian G. Fritz argued that there was not yet a consensus about sovereignty in the United States, even after ratification of the Constitution. Federalists believed that the government was sovereign because it had been established by the people; radical protest actions were permissible during the American Revolution but were no longer legitimate, in their thinking. But the Whiskey Rebels and their defenders believed that the Revolution had established the people as a "collective sovereign '', and the people had the collective right to change or challenge the government through extra-constitutional means. Historian Steven Boyd argued that the suppression of the Whiskey Rebellion prompted anti-Federalist westerners to finally accept the Constitution and to seek change by voting for Republicans rather than resisting the government. Federalists, for their part, came to accept the public 's role in governance and no longer challenged the freedom of assembly and the right to petition. Soon after the Whiskey Rebellion, actress - playwright Susanna Rowson wrote a stage musical about the insurrection entitled The Volunteers, with music by composer Alexander Reinagle. The play is now lost, but the songs survive and suggest that Rowson 's interpretation was pro-Federalist. The musical celebrates as American heroes the militiamen who put down the rebellion, the "volunteers '' of the title. President Washington and Martha Washington attended a performance of the play in Philadelphia in January 1795. W.C. Fields recorded a comedy track entitled "The Temperance Lecture '' in Les Paul 's studio in 1946, shortly before his death, for the album W.C. Fields... His Only Recording Plus 8 Songs by Mae West. The bit discussed Washington and his role in putting down the Whiskey Rebellion, and Fields wondered aloud whether "George put down a little of the vile stuff too. '' L. Neil Smith wrote the alternate history novel The Probability Broach in 1980 as part of his North American Confederacy Series. In it, Albert Gallatin joins the rebellion in 1794 to benefit the farmers, rather than the fledgling US Government as he did in reality. This results in the rebellion becoming a Second American Revolution. This eventually leads to George Washington being overthrown and executed for treason, the abrogation of the Constitution, and Gallatin being proclaimed the second president and serving as president until 1812. David Liss ' 2008 novel The Whiskey Rebels covers many of the circumstances during 1788 - 92 that led to the 1794 Rebellion. The fictional protagonists are cast against an array of historical persons, including Alexander Hamilton, William Duer, Anne Bingham, Hugh Henry Brackenridge, Aaron Burr, and Philip Freneau. In 2011, the Whiskey Rebellion Festival was started in Washington, Pennsylvania. This annual event is held in July and includes live music, food, and historic reenactments, featuring the "tar and feathering '' of the tax collector. Other works also include events of the Whiskey Rebellion. Much primary source historical material has been preserved and exists in archives.
is the army corp of engineers part of the army
United States Army Corps of Engineers - wikipedia The United States Army Corps of Engineers (USACE) is a U.S. federal agency under the Department of Defense and a major Army command made up of some 37,000 civilian and military personnel, making it one of the world 's largest public engineering, design, and construction management agencies. Although generally associated with dams, canals and flood protection in the United States, USACE is involved in a wide range of public works throughout the world. The Corps of Engineers provides outdoor recreation opportunities to the public, and provides 24 % of U.S. hydropower capacity. The corps ' mission is to "Deliver vital public and military engineering services; partnering in peace and war to strengthen our Nation 's security, energize the economy and reduce risks from disasters. '' Their most visible missions include: The history of the United States Army Corps of Engineers can be traced back to 16 June 1775, when the Continental Congress organized an army with a chief engineer and two assistants. Colonel Richard Gridley became General George Washington 's first chief engineer. One of his first tasks was to build fortifications near Boston at Bunker Hill. The Continental Congress recognized the need for engineers trained in military fortifications and asked the government of King Louis XVI of France for assistance. Many of the early engineers in the Continental Army were former French officers. Louis Lebègue Duportail, a lieutenant colonel in the French Royal Corps of Engineers, was secretly sent to America in March 1777 to serve in Washington 's Continental Army. In July 1777 he was appointed colonel and commander of all engineers in the Continental Army, and on November 17, 1777, he was promoted to brigadier general. When the Continental Congress created a separate Corps of Engineers in May 1779, Duportail was designated as its commander. In late 1781 he directed the construction of the allied U.S. - French siege works at the Battle of Yorktown. From 1794 to 1802 the engineers were combined with the artillery as the Corps of Artillerists and Engineers. The Corps of Engineers, as it is known today, came into existence on 16 March 1802, when President Thomas Jefferson signed the Military Peace Establishment Act whose aim was to "organize and establish a Corps of Engineers... that the said Corps... shall be stationed at West Point in the State of New York and shall constitute a military academy. '' Until 1866, the superintendent of the United States Military Academy was always an officer of engineers. The General Survey Act of 1824 authorized the use of Army engineers to survey road and canal routes. That same year, Congress passed an "Act to Improve the Navigation of the Ohio and Mississippi Rivers '' and to remove sand bars on the Ohio and "planters, sawyers, or snags '' (trees fixed in the riverbed) on the Mississippi, for which the Corps of Engineers was the responsible agency. Separately authorized on 4 July 1838, the U.S. Army Corps of Topographical Engineers consisted only of officers and was used for mapping and the design and construction of federal civil works and other coastal fortifications and navigational routes. It was merged with the Corps of Engineers on 31 March 1863, at which point the Corps of Engineers also assumed the Lakes Survey District mission for the Great Lakes. In 1841, Congress created the Lake Survey.
The survey, based in Detroit, Mich., was charged with conducting a hydrographical survey of the Northern and Northwestern Lakes and preparing and publishing nautical charts and other navigation aids. The Lake Survey published its first charts in 1852. In the mid-19th century, Corps of Engineers ' officers ran Lighthouse Districts in tandem with U.S. Naval officers. The Army Corps of Engineers played a significant role in the American Civil War. Many of the men who would serve in the top leadership of this institution were West Point graduates who rose to military fame and power during the Civil War. Some of these men were Union generals George McClellan, Henry Halleck, and George Meade, and Confederate generals Robert E. Lee, Joseph Johnston, and P.G.T. Beauregard. The versatility of officers in the Army Corps of Engineers contributed to the success of numerous missions throughout the Civil War. They were responsible for building pontoon and railroad bridges, forts, and batteries, destroying enemy supply lines, and constructing roads. The Union forces were not the only ones to employ engineers throughout the war; on 6 March 1861, once the South had seceded from the Union, a provision was included among the different acts passed at the time that called for the creation of a Confederate Corps of Engineers. The progression of the war demonstrated the South 's disadvantage in engineering expertise; of the initial 65 cadets who resigned from West Point to accept positions with the Confederate Army, only seven were placed in the Corps of Engineers. To overcome this obstacle, the Confederate Congress passed legislation that gave a company of engineers to every division in the field; by 1865, they actually had more engineer officers serving in the field of action than the Union Army. The Army Corps of Engineers played a central role in making the war effort logistically feasible. One of the main projects for the Army Corps of Engineers was constructing railroads and bridges, which Union forces took advantage of because they provided access to resources and industry. One area where the Confederate engineers were able to outperform the Union Army was in the ability to build fortifications that were used both offensively and defensively, along with trenches that made them harder to penetrate. This method of building trenches was known as the zigzag pattern. From the beginning, many politicians wanted the Corps of Engineers to contribute to both military construction and works of a civil nature. Assigned the military construction mission on 1 December 1941 after the Quartermaster Department struggled with the expanding mission, the Corps built facilities at home and abroad to support the U.S. Army and Air Force. During World War II the mission grew to more than 27,000 military and industrial projects in a $15.3 billion mobilization program. Included were aircraft, tank assembly, and ammunition plants, camps for 5.3 million soldiers, depots, ports, and hospitals, as well as the Manhattan Project and the Pentagon. In civilian projects, the Corps of Engineers became the lead federal flood control agency and significantly expanded its civil works activities, becoming, among other things, a major provider of hydroelectric energy and the country 's leading provider of recreation; its role in responding to natural disasters also grew dramatically. In the late 1960s, the agency became a leading environmental preservation and restoration agency.
In 1944, specially trained army combat engineers were assigned to blow up underwater obstacles and clear defended ports during the invasion of Normandy. During World War II, the Army Corps of Engineers in the European Theater of Operations was responsible for building numerous bridges, including the first and longest floating tactical bridge across the Rhine at Remagen, and building or maintaining roads vital to the Allied advance across Europe into the heart of Germany. In the Pacific theater, the Pioneer troops were formed, a hand - selected unit of volunteer Army combat engineers trained in jungle warfare, knife fighting, and unarmed jujitsu (hand - to - hand combat) techniques. Working in camouflage, the Pioneers cleared jungle, prepared routes of advance, and established bridgeheads for the infantry, as well as demolishing enemy installations. Five commanding generals (chiefs of staff after the 1903 reorganization) of the United States Army held engineer commissions early in their careers. All transferred to other branches before rising to the top. They were Alexander Macomb, George B. McClellan, Henry W. Halleck, Douglas MacArthur, and Maxwell D. Taylor. Occasional civil disasters, including the Great Mississippi Flood of 1927, resulted in greater responsibilities for the Corps of Engineers. The aftermath of Hurricane Katrina in New Orleans provides another example of this. The Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers works under the civilian oversight of the Assistant Secretary of the Army (Civil Works). Three deputy commanding generals, with the following titles, report to the chief of engineers: Deputy Commanding General, Deputy Commanding General for Civil and Emergency Operations, and Deputy Commanding General for Military and International Operations. The Corps of Engineers headquarters is located in Washington, D.C. The headquarters staff is responsible for Corps of Engineers policy and plans the future direction of all other USACE organizations. It comprises the executive office and 17 staff principals. USACE has two directors who head up Military Programs and Civil Works: the Director of Military Programs and the Director of Civil Works. The U.S. Army Corps of Engineers is organized geographically into eight permanent divisions, one provisional division, one provisional district, and one research command reporting directly to the HQ. Within each division, there are several districts. Districts are defined by watershed boundaries for civil works projects and by political boundaries for military projects. U.S. Army Engineer units outside of USACE Districts and not listed below fall under the Engineer Regiment of the U.S. Army Corps of Engineers. Army engineers include both combat engineers and support engineers more focused on construction and sustainment. The vast majority of military personnel in the United States Army Corps of Engineers serve in this Engineer Regiment. The Engineer Regiment is headquartered at Fort Leonard Wood, MO, and is commanded by the Engineer Commandant, a position currently filled by an Army Brigadier General from the Engineer Branch. The Engineer Regiment includes the U.S.
Army Engineer School (USAES) which publishes its mission as: Generate the military engineer capabilities the Army needs: training and certifying Soldiers with the right knowledge, skills, and critical thinking; growing and educating professional leaders; organizing and equipping units; establishing a doctrinal framework for employing capabilities; and remaining an adaptive institution in order to provide Commanders with the freedom of action they need to successfully execute Unified Land Operations. There are several other organizations within the Corps of Engineers: USACE provides support directly and indirectly to the warfighting effort. They build and help maintain much of the infrastructure that the Army and the Air Force use to train, house, and deploy troops. USACE - built and USACE - maintained navigation systems and ports provide the means to deploy vital equipment and other material. Corps of Engineers Research and Development (R&D) facilities help develop new methods and measures for deployment, force protection, terrain analysis, mapping, and other support. USACE directly supports the military in the battle zone, making expertise available to commanders to help solve or avoid engineering (and other) problems. Forward Engineer Support Teams, FEST - A 's or FEST - M 's, may accompany combat engineers to provide immediate support, or to reach electronically into the rest of USACE for the necessary expertise. A FEST - A team is an eight - person detachment; a FEST - M comprises approximately 36 people. These teams are designed to provide immediate technical - engineering support to the warfighter or in a disaster area. Corps of Engineers ' professionals use the knowledge and skills honed on both military and civil projects to support the U.S. and local communities in the areas of real estate, contracting, mapping, construction, logistics, engineering, and management. This work currently includes support for rebuilding Iraq, establishing Afghanistan infrastructure, and supporting international and inter-agency services. In addition, the work of almost 26,000 civilians on civil - works programs throughout USACE provides a training ground for similar capabilities worldwide. USACE civilians volunteer for assignments worldwide. For example, hydropower experts have helped repair, renovate, and run hydropower dams in Iraq in an effort to help Iraqis become self - sustaining. USACE supports the United States ' Department of Homeland Security and the Federal Emergency Management Agency (FEMA) through its security planning, force protection, research and development, disaster preparedness efforts, and quick response to emergencies and disasters. The CoE conducts its emergency response activities under two basic authorities -- the Flood Control and Coastal Emergency Act (Pub. L. 84 -- 99), and the Stafford Disaster Relief and Emergency Assistance Act (Pub. L. 93 -- 288). In a typical year, the Corps of Engineers responds to more than 30 Presidential disaster declarations, plus numerous state and local emergencies. Emergency responses usually involve cooperation with other military elements and Federal agencies in support of State and local efforts. Work comprises engineering and management support to military installations, global real estate support, civil works support (including risk and priorities), operations and maintenance of Federal navigation and flood control projects, and monitoring of dams and levees.
More than 67 percent of the goods consumed by Americans and more than half of the nation 's oil imports are processed through deepwater ports maintained by the Corps of Engineers, which maintains more than 12,000 miles (19,000 km) of commercially navigable channels across the U.S. In both its Civil Works mission and Military Construction program, the Corps of Engineers is responsible for billions of dollars of the nation 's infrastructure. For example, USACE maintains direct control of 609 dams, maintains or operates 257 navigation locks, and operates 75 hydroelectric facilities generating 24 % of the nation 's hydropower and three percent of its total electricity. USACE inspects over 2,000 Federal and non-Federal levees every two years. Four billion gallons of water per day are drawn from the Corps of Engineers ' 136 multi-use flood control projects comprising 9,800,000 acre - feet (12.1 km³) of water storage, making it one of the United States ' largest water supply agencies. The 249th Engineer Battalion (Prime Power), the only active duty unit in USACE, generates and distributes prime electrical power in support of warfighting, disaster relief, stability and support operations as well as provides advice and technical assistance in all aspects of electrical power and distribution systems. The battalion deployed in support of recovery operations after 9 / 11 and was instrumental in getting Wall Street back up and running within a week. The battalion also deployed in support of post-Katrina operations. All of this work represents a significant investment in the nation 's resources. Through its Civil Works program, USACE carries out a wide array of projects that provide coastal protection, flood protection, hydropower, navigable waters and ports, recreational opportunities, and water supply. Work includes coastal protection and restoration, including a new emphasis on a more holistic approach to risk management. As part of this work, USACE is the number one provider of outdoor recreation in the U.S., so there is a significant emphasis on water safety. Army involvement in works "of a civil nature, '' including water resources, goes back almost to the origins of the U.S. Over the years, as the nation 's needs have changed, so have the Army 's Civil Works missions. Major areas of emphasis include the following: The U.S. Army Corps of Engineers environmental mission has two major focus areas: restoration and stewardship. The Corps supports and manages numerous environmental programs that run the gamut from cleaning up areas on former military installations contaminated by hazardous waste or munitions to helping establish / reestablish wetlands that help endangered species survive. Some of these programs include Ecosystem Restoration, Formerly Used Defense Sites, Environmental Stewardship, EPA Superfund, Abandoned Mine Lands, Formerly Utilized Sites Remedial Action Program, Base Realignment and Closure 2005, and Regulatory. This mission includes education as well as regulation and cleanup. The U.S. Army Corps of Engineers has an active environmental program under both its Military and Civil Programs. The Civil Works environmental mission ensures that all USACE projects, facilities and associated lands meet environmental standards. The program has four functions: compliance, restoration, prevention, and conservation. The Corps also regulates all work in wetlands and waters of the United States.
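As a quick check on the storage figure quoted earlier in this section (9,800,000 acre-feet, or about 12.1 km³), the unit conversion can be worked out directly. This is a simple sketch: the acre-foot conversion factor is the standard definition, and the only input taken from the text is the acre-feet total.

```python
# Verifying that 9,800,000 acre-feet is roughly 12.1 cubic kilometres.
# The acre-feet total is the figure from the text; the conversion factor
# is the standard definition (1 acre-foot = 43,560 cubic feet).

ACRE_FOOT_IN_M3 = 1233.48183754752   # 43,560 ft^3 * 0.028316846592 m^3/ft^3

acre_feet = 9_800_000
cubic_km = acre_feet * ACRE_FOOT_IN_M3 / 1e9   # 1 km^3 = 1e9 m^3

print(f"{acre_feet:,} acre-feet is about {cubic_km:.1f} km^3")  # ~12.1
```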
The Military Programs Environmental Program manages design and execution of a full range of cleanup and protection activities (see also Environmental Enforcement below). The regulatory program is authorized to protect the nation 's aquatic resources. USACE personnel evaluate permit applications for essentially all construction activities that occur in the nation 's waters, including wetlands. Two primary authorities granted to the Army Corps of Engineers by Congress fall under Section 10 of the Rivers and Harbors Act and Section 404 of the Clean Water Act. Section 10 of the Rivers and Harbors Act of 1899 (codified in Title 33, Section 403 of the United States Code) gave the Corps authority over navigable waters of the United States, defined as "those waters that are subject to the ebb and flow of the tide and / or are presently being used, or have been used in the past, or may be susceptible for use to transport interstate or foreign commerce. '' Section 10 covers construction, excavation, or deposition of materials in, over, or under such waters, or any work that would affect the course, location, condition or capacity of those waters. Actions requiring section 10 permits include structures (e.g., piers, wharfs, breakwaters, bulkheads, jetties, weirs, transmission lines) and work such as dredging or disposal of dredged material, or excavation, filling or other modifications to the navigable waters of the United States. The Coast Guard also has responsibility for permitting the erection or modification of bridges over navigable waters of the U.S. Another of the major responsibilities of the Army Corps of Engineers is administering the permitting program under Section 404 of the Federal Water Pollution Control Act of 1972, also known as the Clean Water Act. The Secretary of the Army is authorized under this act to issue permits for the discharge of dredged and fill material in waters of the United States, including adjacent wetlands. The geographic extent of waters of the United States subject to section 404 permits falls under a broader definition and includes tributaries to navigable waters and adjacent wetlands. The engineers must first determine if the waters at the project site are jurisdictional and subject to the requirements of the section 404 permitting program. Once jurisdiction has been established, permit review and authorization follows a sequence process that encourages avoidance of impacts, followed by minimizing impacts and, finally, requiring mitigation for unavoidable impacts to the aquatic environment. This sequence is described in the section 404 (b) (1) guidelines. There are three types of permits issued by the Corps of Engineers: Nationwide, Regional General, and Individual. About 80 % of the permits issued are nationwide permits, which include 50 general types of activities with minimal impacts to waters of the United States, as published in the Federal Register. Nationwide permits are subject to a reauthorization process every 5 years, with the most recent reauthorization occurring in March 2012. To gain authorization under a nationwide permit, an applicant must comply with the terms and conditions of the nationwide permit. Some nationwide permits require the applicant to provide preconstruction notification to the applicable Corps district office, describing the intended activity, the type and amount of impact and fill in waters, and including a site map.
Although the nationwide process is fairly simple, Corps approval must be obtained before commencing any work in waters of the United States. Regional general permits are specific to each Corps district office. Individual permits are generally required for projects that impact greater than 0.5 acres (2,000 m²) of waters of the United States. Individual permits are required for activities that result in more than minimal impacts to the aquatic environment. The Corps of Engineers has two research organizations, the Engineer Research and Development Center (ERDC) and the Army Geospatial Center (AGC). ERDC provides science, technology, and expertise in engineering and environmental sciences to support both military and civilian customers. AGC coordinates, integrates, and synchronizes geospatial information requirements and standards across the Army and provides direct geospatial support and products to warfighters. See also Geospatial Information Officer. The Corps of Engineers branch insignia, the Corps Castle, is believed to have originated on an informal basis. In 1841, cadets at West Point wore insignia of this type. In 1902, the Castle was formally adopted by the Corps of Engineers as branch insignia. The "castle '' is actually the Pershing Barracks at the United States Military Academy in West Point, New York. A current tradition was established with the "Gold Castles '' branch insignia of General of the Army Douglas MacArthur, West Point Class of 1903, who served in the Corps of Engineers early in his career and had received the two pins as a graduation gift from his family. In 1945, near the conclusion of World War II, General MacArthur gave his personal pins to his Chief Engineer, General Leif J. Sverdrup. On 2 May 1975, upon the 200th anniversary of the Corps of Engineers, retired General Sverdrup, whose civil engineering projects included the landmark 17 - mile (27 km) Chesapeake Bay Bridge - Tunnel, presented the Gold Castles to then - Chief of Engineers Lieutenant General William C. Gribble, Jr., who had also served under General MacArthur in the Pacific. General Gribble then announced a tradition of passing the insignia along to future Chiefs of Engineers, and it has been passed down ever since. Some of the Corps of Engineers ' civil works projects have been characterized in the press as pork barrel projects or boondoggles, such as the New Madrid Floodway Project and the New Orleans flood protection. Projects have allegedly been justified based on flawed or manipulated analyses during the planning phase. Some projects are said to have created profound detrimental environmental effects or provided questionable economic benefit, such as the Mississippi River -- Gulf Outlet in southeast Louisiana. Faulty design and substandard construction have been cited in the failure of levees in the wake of Hurricane Katrina, which caused flooding of 80 % of the city of New Orleans. Review of Corps of Engineers ' projects has also been criticized for its lack of impartiality. The investigation of levee failure in New Orleans during Hurricane Katrina was sponsored by the American Society of Civil Engineers (ASCE) but funded by the Corps of Engineers and involved its employees. Corps of Engineers projects can be found in all fifty states, and are specifically authorized and funded directly by Congress. Local citizen, special interest, and political groups lobby Congress for authorization and appropriations for specific projects in their area.
Senator Russ Feingold and Senator John McCain sponsored an amendment to the Water Resources Development Act of 2006 requiring peer review of Corps projects, describing it as part of "efforts to reform and add transparency to the way the U.S. Army Corps of Engineers receives funding for and undertakes water projects. '' A similar bill, the Water Resources Development Act of 2007, which included the text of the original Corps ' peer review measure, was eventually passed by Congress in 2007 over a Presidential veto. A number of Army camps and facilities designed by the Corps of Engineers, including the former Camp O'Ryan in New York State, have reportedly had a negative impact on the surrounding communities. Camp O'Ryan, with its rifle range, has possibly contaminated well and storm runoff water with lead. This runoff water eventually runs into the Niagara River and Lake Ontario, sources of drinking water for millions of people. This situation is exacerbated by a failure to locate the engineering and architectural plans for the camp, which were produced by the New York District in 1949. Bunnatine "Bunny '' Greenhouse, a former high - ranking official in the Corps of Engineers, won a lawsuit against the United States government in July 2011. Greenhouse had objected to the Corps accepting cost projections from KBR in a no - bid, noncompetitive contract. After she complained, Greenhouse was demoted from her Senior Executive Service position, stripped of her top secret security clearance, and even, according to Greenhouse, had her office booby - trapped with a trip - wire from which she sustained a knee injury. A U.S. District court awarded Greenhouse $970,000 in full restitution of lost wages, compensatory damages, and attorney fees.
what is the makeup of the supreme court
Supreme Court of the United States - wikipedia The Supreme Court of the United States (sometimes colloquially referred to by the acronym SCOTUS) is the highest federal court of the United States. Established pursuant to Article Three of the United States Constitution in 1789, it has ultimate (and largely discretionary) appellate jurisdiction over all federal courts and state court cases involving issues of federal law, plus original jurisdiction over a small range of cases. In the legal system of the United States, the Supreme Court is generally the final interpreter of federal law, including the United States Constitution, but it may act only within the context of a case in which it has jurisdiction. The Court may decide cases having political overtones but does not have power to decide nonjusticiable political questions, and its enforcement arm is in the executive rather than judicial branch of government. According to federal statute, the Court normally consists of the Chief Justice of the United States and eight associate justices who are nominated by the President and confirmed by the Senate. Once appointed, justices have lifetime tenure unless they resign, retire, or are removed after impeachment (though no justice has ever been removed). In modern discourse, the justices are often categorized as having conservative, moderate, or liberal philosophies of law and of judicial interpretation. Each justice has one vote. While far more cases in recent history have been decided unanimously than by a single vote, decisions in the highest - profile cases have often come down to just one vote, exposing ideological divides that track with those philosophical or political categories. The Court meets in the Supreme Court Building in Washington, D.C. The ratification of the United States Constitution established the Supreme Court in 1789. Its powers are detailed in Article Three of the Constitution. The Supreme Court was the only court specifically established by the Constitution, while all other federal courts were created by Congress. Congress is also responsible for conferring the title of "justice '' on its members, who are known to scold lawyers for inaccurately referring to them as "judge '', even though that is the term used in the Constitution. The Court first convened on February 2, 1790, with only five of its six initial positions filled. According to historian Fergus Bordewich, in its first session: "(T) he Supreme Court convened for the first time at the Royal Exchange Building on Broad Street, a few steps from Federal Hall. Symbolically, the moment was pregnant with promise for the republic, this birth of a new national institution whose future power, admittedly, still existed only in the eyes and minds of just a few visionary Americans. Impressively bewigged and swathed in their robes of office, Chief Justice John Jay and three associate justices -- William Cushing of Massachusetts, James Wilson of Pennsylvania, and John Blair of Virginia -- sat augustly before a throng of spectators and waited for something to happen. Nothing did. They had no cases to consider. After a week of inactivity, they adjourned until September, and everyone went home. '' The sixth member, James Iredell, was not confirmed until May 12, 1790. Because the full Court had only six members, every decision that it made by a majority was also made by two - thirds (voting four to two).
However, Congress has always allowed fewer than the Court 's full membership to make decisions, starting with a quorum of four justices in 1789. Under Chief Justices Jay, Rutledge, and Ellsworth (1789 -- 1801), the Court heard few cases; its first decision was West v. Barnes (1791), a case involving a procedural issue. The Court lacked a home of its own and had little prestige, a situation not helped by the highest - profile case of the era, Chisholm v. Georgia (1793), which was reversed within two years by the adoption of the Eleventh Amendment. The Court 's power and prestige grew substantially during the Marshall Court (1801 -- 35). Under Marshall, the Court established the power of judicial review over acts of Congress, including specifying itself as the supreme expositor of the Constitution (Marbury v. Madison), and made several important constitutional rulings giving shape and substance to the balance of power between the federal government and the states (prominently Martin v. Hunter 's Lessee, McCulloch v. Maryland and Gibbons v. Ogden). The Marshall Court also ended the practice of each justice issuing his opinion seriatim, a remnant of British tradition, and instead issued a single majority opinion. Also during Marshall 's tenure, although beyond the Court 's control, the impeachment and acquittal of Justice Samuel Chase in 1804 -- 05 helped cement the principle of judicial independence. The Taney Court (1836 -- 64) made several important rulings, such as Sheldon v. Sill, which held that while Congress may not limit the subjects the Supreme Court may hear, it may limit the jurisdiction of the lower federal courts to prevent them from hearing cases dealing with certain subjects. Nevertheless, it is primarily remembered for its ruling in Dred Scott v. Sandford, which helped precipitate the Civil War. In the Reconstruction era, the Chase, Waite, and Fuller Courts (1864 -- 1910) interpreted the new Civil War amendments to the Constitution and developed the doctrine of substantive due process (Lochner v. New York; Adair v. United States). Under the White and Taft Courts (1910 -- 30), the Court held that the Fourteenth Amendment had incorporated some guarantees of the Bill of Rights against the states (Gitlow v. New York), grappled with the new antitrust statutes (Standard Oil Co. of New Jersey v. United States), upheld the constitutionality of military conscription (Selective Draft Law Cases) and brought the substantive due process doctrine to its first apogee (Adkins v. Children 's Hospital). During the Hughes, Stone, and Vinson Courts (1930 -- 53), the Court gained its own accommodation in 1935 and changed its interpretation of the Constitution, giving a broader reading to the powers of the federal government to facilitate President Franklin Roosevelt 's New Deal (most prominently West Coast Hotel Co. v. Parrish, Wickard v. Filburn, United States v. Darby and United States v. Butler). During World War II, the Court continued to favor government power, upholding the internment of Japanese Americans (Korematsu v. United States) and the mandatory pledge of allegiance (Minersville School District v. Gobitis). Nevertheless, Gobitis was soon repudiated (West Virginia State Board of Education v. Barnette), and the Steel Seizure Case restricted the pro-government trend. The Warren Court (1953 -- 69) dramatically expanded the force of Constitutional civil liberties. It held that segregation in public schools violates equal protection (Brown v. Board of Education, Bolling v.
Sharpe and Green v. County School Bd.) and that traditional legislative district boundaries violated the right to vote (Reynolds v. Sims). It created a general right to privacy (Griswold v. Connecticut), limited the role of religion in public school (most prominently Engel v. Vitale and Abington School District v. Schempp), incorporated most guarantees of the Bill of Rights against the States -- prominently Mapp v. Ohio (the exclusionary rule) and Gideon v. Wainwright (right to appointed counsel) -- and required that criminal suspects be apprised of all these rights by police (Miranda v. Arizona). At the same time, however, the Court limited defamation suits by public figures (New York Times v. Sullivan) and supplied the government with an unbroken run of antitrust victories. The Burger Court (1969 -- 86) marked a conservative shift. It nevertheless expanded Griswold 's right to privacy to strike down abortion laws (Roe v. Wade), but divided deeply on affirmative action (Regents of the University of California v. Bakke) and campaign finance regulation (Buckley v. Valeo), and dithered on the death penalty, ruling first that most applications were defective (Furman v. Georgia), then that the death penalty itself was not unconstitutional (Gregg v. Georgia). The Rehnquist Court (1986 -- 2005) was noted for its revival of judicial enforcement of federalism, emphasizing the limits of the Constitution 's affirmative grants of power (United States v. Lopez) and the force of its restrictions on those powers (Seminole Tribe v. Florida, City of Boerne v. Flores). It struck down single - sex state schools as a violation of equal protection (United States v. Virginia), laws against sodomy as violations of substantive due process (Lawrence v. Texas), and the line item veto (Clinton v. New York), but upheld school vouchers (Zelman v. Simmons - Harris) and reaffirmed Roe 's restrictions on abortion laws (Planned Parenthood v. Casey). The Court 's decision in Bush v. Gore, which ended the electoral recount during the presidential election of 2000, was especially controversial. The Roberts Court (2005 -- present) is regarded by some as more conservative than the Rehnquist Court. Some of its major rulings have concerned federal preemption (Wyeth v. Levine), civil procedure (Twombly - Iqbal), abortion (Gonzales v. Carhart), climate change (Massachusetts v. EPA), same - sex marriage (United States v. Windsor and Obergefell v. Hodges) and the Bill of Rights, notably in Citizens United v. Federal Election Commission (First Amendment), Heller - McDonald (Second Amendment) and Baze v. Rees (Eighth Amendment). Article III of the United States Constitution does not specify the number of justices. The Judiciary Act of 1789 called for the appointment of six "judges ''. Although an 1801 act would have reduced the size of the court to five members upon its next vacancy, an 1802 act promptly negated the 1801 act, legally restoring the court 's size to six members before any such vacancy occurred. As the nation 's boundaries grew, Congress added justices to correspond with the growing number of judicial circuits: seven in 1807, nine in 1837, and ten in 1863. In 1866, at the behest of Chief Justice Chase, Congress passed an act providing that the next three justices to retire would not be replaced, which would thin the bench to seven justices by attrition. Consequently, one seat was removed in 1866 and a second in 1867. In 1869, however, the Circuit Judges Act returned the number of justices to nine, where it has since remained.
President Franklin D. Roosevelt attempted to expand the Court in 1937. His proposal envisioned appointment of one additional justice for each incumbent justice who reached the age of 70 years 6 months and refused retirement, up to a maximum bench of 15 justices. The proposal was ostensibly to ease the burden of the docket on elderly judges, but the actual purpose was widely understood as an effort to "pack '' the Court with justices who would support Roosevelt 's New Deal. The plan, usually called the "court - packing plan '', failed in Congress. Nevertheless, the Court 's balance began to shift within months when Justice Van Devanter retired and was replaced by Senator Hugo Black. By the end of 1941, Roosevelt had appointed seven justices and elevated Harlan Fiske Stone to Chief Justice. The U.S. Constitution states that the President "shall nominate, and by and with the Advice and Consent of the Senate, shall appoint Judges of the Supreme Court. '' Most presidents nominate candidates who broadly share their ideological views, although a justice 's decisions may end up being contrary to a president 's expectations. Because the Constitution sets no qualifications for service as a justice, a president may nominate anyone to serve, subject to Senate confirmation. In modern times, the confirmation process has attracted considerable attention from the press and advocacy groups, which lobby senators to confirm or to reject a nominee depending on whether the nominee 's track record aligns with the group 's views. The Senate Judiciary Committee conducts hearings and votes on whether the nomination should go to the full Senate with a positive, negative or neutral report. The committee 's practice of personally interviewing nominees is relatively recent. The first nominee to appear before the committee was Harlan Fiske Stone in 1925, who sought to quell concerns about his links to Wall Street, and the modern practice of questioning began with John Marshall Harlan II in 1955. Once the committee reports out the nomination, the full Senate considers it. Rejections are relatively uncommon; the Senate has explicitly rejected twelve Supreme Court nominees, most recently Robert Bork, nominated by President Ronald Reagan in 1987. Although Senate rules do not necessarily allow a negative vote in committee to block a nomination, prior to 2017 a nomination could be blocked by filibuster once debate had begun in the full Senate. President Lyndon Johnson 's nomination of sitting Associate Justice Abe Fortas to succeed Earl Warren as Chief Justice in 1968 was the first successful filibuster of a Supreme Court nominee; it included both Republican and Democratic senators concerned about Fortas 's ethics. President Donald Trump 's nomination of Neil Gorsuch to the seat vacated by Antonin Scalia was the second. Unlike the Fortas filibuster, however, only Democratic Senators voted against cloture on the Gorsuch nomination, citing his perceived conservative judicial philosophy and the Republican majority 's prior refusal to take up President Barack Obama 's nomination of Merrick Garland to fill the vacancy. This led the Republican majority to change the rules and eliminate the filibuster for Supreme Court nominations. Not every Supreme Court nominee has received a floor vote in the Senate. A president may withdraw a nomination before an actual confirmation vote occurs, typically because it is clear that the Senate will reject the nominee; this occurred most recently with the nomination of Harriet Miers in 2005.
The Senate may also fail to act on a nomination, which expires at the end of the session. For example, President Dwight Eisenhower 's first nomination of John Marshall Harlan II in November 1954 was not acted on by the Senate; Eisenhower re-nominated Harlan in January 1955, and Harlan was confirmed two months later. Most recently, as previously noted, the Senate failed to act on the March 2016 nomination of Merrick Garland; the nomination expired in January 2017, and the vacancy was later filled by President Trump 's appointment of Neil Gorsuch. Once the Senate confirms a nomination, the president must prepare and sign a commission, to which the Seal of the Department of Justice must be affixed, before the new justice can take office. The seniority of an associate justice is based on the commissioning date, not the confirmation or swearing - in date. Before 1981, the approval process of justices was usually rapid. From the Truman through Nixon administrations, justices were typically approved within one month. From the Reagan administration to the present, however, the process has taken much longer. Some believe this is because Congress sees justices as playing a more political role than in the past. According to the Congressional Research Service, the average number of days from nomination to final Senate vote since 1975 is 67 days (2.2 months), while the median is 71 days (2.3 months). When the Senate is in recess, a president may make temporary appointments to fill vacancies. Recess appointees hold office only until the end of the next Senate session (less than two years). The Senate must confirm the nominee for them to continue serving; of the two chief justices and eleven associate justices who have received recess appointments, only Chief Justice John Rutledge was not subsequently confirmed. No president since Dwight D. Eisenhower has made a recess appointment to the Court, and the practice has become rare and controversial even in lower federal courts. In 1960, after Eisenhower had made three such appointments, the Senate passed a "sense of the Senate '' resolution that recess appointments to the Court should only be made in "unusual circumstances. '' Such resolutions are not legally binding but are an expression of Congress 's views in the hope of guiding executive action. The 2014 Supreme Court ruling in National Labor Relations Board v. Noel Canning limited the ability of the President to make recess appointments (including appointments to the Supreme Court), holding that the Senate decides when it is in session or in recess. Justice Breyer, writing for the Court, stated, "We hold that, for purposes of the Recess Appointments Clause, the Senate is in session when it says it is, provided that, under its own rules, it retains the capacity to transact Senate business. '' This ruling allows the Senate to prevent recess appointments through the use of pro-forma sessions. The Constitution provides that justices "shall hold their offices during good behavior '' (unless appointed during a Senate recess). The term "good behavior '' is understood to mean justices may serve for the remainder of their lives, unless they are impeached and convicted by Congress, resign, or retire. Only one justice has been impeached by the House of Representatives (Samuel Chase, March 1804), but he was acquitted in the Senate (March 1805). Moves to impeach sitting justices have occurred more recently (for example, William O.
Douglas was the subject of hearings twice, in 1953 and again in 1970; and Abe Fortas resigned while hearings were being organized in 1969), but they did not reach a vote in the House. No mechanism exists for removing a justice who is permanently incapacitated by illness or injury and unable (or unwilling) to resign. Because justices have indefinite tenure, the timing of vacancies can be unpredictable. Sometimes vacancies arise in quick succession, as in the early 1970s when Lewis Franklin Powell, Jr. and William Rehnquist were nominated to replace Hugo Black and John Marshall Harlan II, who retired within a week of each other. Sometimes a great length of time passes between nominations, such as the eleven years between Stephen Breyer 's nomination in 1994 to succeed Harry Blackmun and the nomination of John Roberts in 2005 to fill the seat of Sandra Day O'Connor (though Roberts ' nomination was withdrawn and resubmitted for the role of Chief Justice after Rehnquist died). Despite the variability, all but four presidents have been able to appoint at least one justice. William Henry Harrison died a month after taking office, though his successor (John Tyler) made an appointment during that presidential term. Likewise, Zachary Taylor died 16 months after taking office, but his successor (Millard Fillmore) also made a Supreme Court nomination before the end of that term. Andrew Johnson, who became president after the assassination of Abraham Lincoln, was denied the opportunity to appoint a justice by a reduction in the size of the Court. Jimmy Carter is the only person elected president to have left office after at least one full term without having the opportunity to appoint a justice. Somewhat similarly, presidents James Monroe, Franklin D. Roosevelt, and George W. Bush each served a full term without an opportunity to appoint a justice, but made appointments during their subsequent terms in office. No president who has served more than one full term has gone without at least one opportunity to make an appointment. Three presidents have appointed justices who together served more than a century: Andrew Jackson, Abraham Lincoln, and Franklin D. Roosevelt. The court currently has nine justices. The most recent justice to join the court was Neil Gorsuch, who was nominated by President Donald Trump on January 31, 2017, and confirmed on April 7, 2017, by the U.S. Senate. The current justices are John Roberts (Chief Justice), Anthony Kennedy, Clarence Thomas, Ruth Bader Ginsburg, Stephen Breyer, Samuel Alito, Sonia Sotomayor, Elena Kagan, and Neil Gorsuch. The Court currently has six male and three female justices. Among the nine justices, there is one African - American (Justice Thomas) and one Hispanic (Justice Sotomayor). Two of the justices were born to at least one immigrant parent: Justice Alito 's parents were born in Italy, and Justice Ginsburg 's father was born in Russia. At least five justices are Roman Catholics and three are Jewish; it is unclear whether Neil Gorsuch considers himself a Catholic or an Episcopalian. The average age is 67 years and 4 months. Every current justice has an Ivy League background. Four justices are from the state of New York, two from California, one from New Jersey, one from Georgia, and one from Colorado. In the 19th century, every justice was a man of European descent (usually Northern European), and almost always Protestant.
Concerns about diversity focused on geography, to represent all regions of the country, rather than religious, ethnic, or gender diversity. Most justices have been Protestants, including 36 Episcopalians, 19 Presbyterians, 10 Unitarians, 5 Methodists, and 3 Baptists. The first Catholic justice was Roger Taney in 1836, and 1916 saw the appointment of the first Jewish justice, Louis Brandeis. Several Catholic and Jewish justices have since been appointed, and in recent years the situation has reversed: the Court currently has at least five Catholic justices and three Jewish justices. Racial, ethnic, and gender diversity began to increase in the late 20th century. Thurgood Marshall became the first African American justice in 1967. Sandra Day O'Connor became the first female justice in 1981. Marshall was succeeded by African - American Clarence Thomas in 1991. O'Connor was joined by Ruth Bader Ginsburg in 1993. After O'Connor's retirement, Ginsburg was joined in 2009 by Sonia Sotomayor, the first Hispanic and Latina justice, and in 2010 by Elena Kagan, for a total of four female justices in the Court 's history. There have been six foreign - born justices in the Court 's history: James Wilson (1789 -- 1798), born in Carskerdo, Scotland; James Iredell (1790 -- 1799), born in Lewes, England; William Paterson (1793 -- 1806), born in County Antrim, Ireland; David Brewer (1889 -- 1910), born in Smyrna, Turkey; George Sutherland (1922 -- 1939), born in Buckinghamshire, England; and Felix Frankfurter (1939 -- 1962), born in Vienna, Austria. There are currently three living retired justices of the Supreme Court of the United States: John Paul Stevens, Sandra Day O'Connor and David Souter. As retired justices, they no longer participate in the work of the Supreme Court, but may be designated for temporary assignments to sit on lower federal courts, usually the United States Courts of Appeals. Such assignments are formally made by the Chief Justice, on request of the chief judge of the lower court and with the consent of the retired justice. In recent years, Justice O'Connor has sat with several Courts of Appeals around the country, and Justice Souter has frequently sat on the First Circuit, the court of which he was briefly a member before joining the Supreme Court. The status of a retired justice is analogous to that of a circuit or district court judge who has taken senior status, and the eligibility of a Supreme Court justice to assume retired status (rather than simply resign from the bench) is governed by the same age and service criteria. In recent times, justices tend to time their departures from the bench strategically, with personal, institutional, ideological, partisan and sometimes even political factors playing a role. The fear of mental decline and death often motivates justices to step down. The desire to maximize the Court 's strength and legitimacy through one retirement at a time, when the Court is in recess, and during non-presidential election years suggests a concern for institutional health. Finally, especially in recent decades, many justices have timed their departure to coincide with a philosophically compatible president holding office, to ensure that a like - minded successor would be appointed. Many of the internal operations of the Court are organized by seniority of justices; the chief justice is considered the most senior member of the court, regardless of the length of his or her service.
The associate justices are then ranked by the length of their service. During Court sessions, the justices sit according to seniority, with the Chief Justice in the center and the Associate Justices on alternating sides, with the most senior Associate Justice on the Chief Justice 's immediate right and the most junior Associate Justice seated farthest to the Chief Justice 's left. Therefore, the current court sits as follows from left to right, from the perspective of those facing the Court: Kagan, Alito, Ginsburg, Kennedy (most senior Associate Justice), Roberts (Chief Justice), Thomas, Breyer, Sotomayor, and Gorsuch. In the official yearly Court photograph, justices are arranged similarly, with the five most senior members sitting in the front row in the same order as they would sit during Court sessions (the most recent photograph includes Ginsburg, Kennedy, Roberts, Thomas, and Breyer), and the four most junior justices standing behind them, again in the same order as they would sit during Court sessions (Kagan, Alito, Sotomayor, Gorsuch). In the justices ' private conferences, current practice is for them to speak and vote in order of seniority, beginning with the chief justice and ending with the most junior associate justice. The most junior associate justice in these conferences is charged with any menial tasks the justices may require as they convene alone, such as answering the door of their conference room, serving beverages and transmitting orders of the court to the clerk. Justice Joseph Story served the longest as junior justice, from February 3, 1812, to September 1, 1823, for a total of 4,228 days. Justice Stephen Breyer follows closely behind, serving from August 3, 1994, to January 31, 2006, for a total of 4,199 days. Justice Elena Kagan is a distant third, serving from August 6, 2010, to April 10, 2017, for a total of 2,439 days. As of 2017, associate justices are paid $251,800 and the chief justice $263,300. Article III, Section 1 of the U.S. Constitution prohibits Congress from reducing the pay of incumbent justices. Once a justice meets age and service requirements, the justice may retire. Judicial pensions are based on the same formula used for federal employees, but a justice 's pension, as with other federal court judges, can never be less than their salary at the time of retirement. Although justices are nominated by the president in power, justices do not represent or receive official endorsements from political parties, as is accepted practice in the legislative and executive branches. Jurists are, however, informally categorized in legal and political circles as being judicial conservatives, moderates, or liberals. Such leanings generally refer to legal outlook rather than a political or legislative one. The nominations of justices are endorsed by individual politicians in the legislative branch who vote their approval or disapproval of the nominated justice. Following the confirmation of Neil Gorsuch in 2017, the Court consists of five justices appointed by Republican presidents and four appointed by Democratic presidents. It is popularly accepted that Chief Justice Roberts and associate justices Thomas, Alito, and Gorsuch (appointed by Republican presidents) comprise the Court 's conservative wing, while Justices Ginsburg, Breyer, Sotomayor and Kagan (appointed by Democratic presidents) comprise the Court 's liberal wing.
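The seniority - based seating rule described above is mechanical enough to capture in a few lines of code. The following Python sketch is purely illustrative (the function name and roster list are ours, not anything the Court publishes): it seats a seniority - ordered bench around the center chair, alternating sides, with the most senior justices closest to the center.

```python
def seating_order(justices_by_seniority):
    """Seat a seniority-ordered bench (chief justice first) around the
    center chair, alternating sides, most senior closest to the center.
    Returns the bench as seen by spectators facing the Court."""
    n = len(justices_by_seniority)
    seats = [None] * n
    center = n // 2
    seats[center] = justices_by_seniority[0]  # chief justice in the center
    for i, justice in enumerate(justices_by_seniority[1:]):
        offset = i // 2 + 1
        if i % 2 == 0:
            seats[center - offset] = justice  # chief's right = spectators' left
        else:
            seats[center + offset] = justice  # chief's left = spectators' right
    return seats

bench = seating_order(["Roberts", "Kennedy", "Thomas", "Ginsburg", "Breyer",
                       "Alito", "Sotomayor", "Kagan", "Gorsuch"])
print(bench)
# ['Kagan', 'Alito', 'Ginsburg', 'Kennedy', 'Roberts',
#  'Thomas', 'Breyer', 'Sotomayor', 'Gorsuch']
```

Run against the 2017 roster, the sketch reproduces exactly the spectator - view order listed above.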
Justice Kennedy (appointed by President Reagan) is generally considered "a conservative who has occasionally voted with liberals '', and up until Justice Scalia 's death, he was often the swing vote that determined the outcome of cases divided between the conservative and liberal wings. Gorsuch had a track record as a reliably conservative judge on the 10th Circuit. Tom Goldstein argued in an article on SCOTUSblog in 2010 that the popular view of the Supreme Court as sharply divided along ideological lines, with each side pushing an agenda at every turn, is "in significant part a caricature designed to fit certain preconceptions. '' He pointed out that in the 2009 term, almost half the cases were decided unanimously, and only about 20 % were decided by a 5 - to - 4 vote. Barely one in ten cases involved the narrow liberal / conservative divide (fewer if the cases where Sotomayor recused herself are not included). He also pointed to several cases that defied the popular conception of the ideological lines of the Court. Goldstein further argued that the large number of pro-criminal - defendant summary dismissals (usually cases where the justices decide that the lower courts significantly misapplied precedent and reverse the case without briefing or argument) was an illustration that the conservative justices had not been aggressively ideological. Likewise, Goldstein stated that the critique that the liberal justices are more likely to invalidate acts of Congress, show inadequate deference to the political process, and be disrespectful of precedent also lacked merit: Thomas has most often called for overruling prior precedent (even if long standing) that he views as having been wrongly decided, and during the 2009 term Scalia and Thomas voted most often to invalidate legislation. According to statistics compiled by SCOTUSblog, in the twelve terms from 2000 to 2011, an average of 19 opinions on major issues (22 %) were decided by a 5 -- 4 vote, with an average of 70 % of those split opinions decided by a Court divided along the traditionally perceived ideological lines (about 15 % of all opinions issued). Over that period, the conservative bloc has been in the majority about 62 % of the time that the Court has divided along ideological lines, which represents about 44 % of all the 5 -- 4 decisions. In the October 2010 term, the Court decided 86 cases, including 75 signed opinions and 5 summary reversals (where the Court reverses a lower court without arguments and without issuing an opinion on the case). Four were decided with unsigned opinions, two were affirmed by an equally divided Court, and two were dismissed as improvidently granted. Justice Kagan recused herself from 26 of the cases due to her prior role as United States Solicitor General. Of the 80 cases, 38 (about 48 %, the highest percentage since the October 2005 term) were decided unanimously (9 -- 0 or 8 -- 0), and 16 decisions were made by a 5 -- 4 vote (about 20 %, compared to 18 % in the October 2009 term, and 29 % in the October 2008 term). However, in fourteen of the sixteen 5 -- 4 decisions, the Court divided along the traditional ideological lines (with Ginsburg, Breyer, Sotomayor, and Kagan on the liberal side, and Roberts, Scalia, Thomas, and Alito on the conservative, and Kennedy providing the "swing vote ''). This represents 87 % of those 16 cases, the highest rate in the past 10 years.
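As a quick arithmetic check on how the SCOTUSblog aggregates quoted above compose (assuming the averages can simply be multiplied, which the figures as stated imply), the "about 15 % of all opinions '' and "about 44 % of all the 5 -- 4 decisions '' numbers follow directly from the other three:

```python
# Quoted SCOTUSblog averages for the 2000-2011 terms (see above).
five_four_rate = 0.22         # share of major-issue opinions decided 5-4
ideological_rate = 0.70       # share of those 5-4 splits along ideological lines
conservative_win_rate = 0.62  # share of ideological splits won by the conservative bloc

# Ideological 5-4 splits as a share of all opinions: 0.22 * 0.70
print(round(five_four_rate * ideological_rate, 3))         # 0.154 ~ "about 15 %"

# Conservative-bloc wins as a share of all 5-4 decisions: 0.70 * 0.62
print(round(ideological_rate * conservative_win_rate, 3))  # 0.434 ~ "about 44 %"
```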
The conservative bloc, joined by Kennedy, formed the majority in 63 % of the 5 -- 4 decisions, the highest cohesion rate of that bloc in the Roberts Court. In the October 2011 term, the Court decided 75 cases. Of these, 33 (44 %) were decided unanimously, and 15 (20 %, the same percentage as in the previous term) were decided by a vote of 5 -- 4. Of the latter 15, the Court divided along the perceived ideological lines 10 times, with Justice Kennedy joining the conservative justices (Roberts, Scalia, Thomas and Alito) five times and the liberal justices (Ginsburg, Breyer, Sotomayor and Kagan) five times. In the October 2012 term, the Court decided 78 cases. Five of them were decided in unsigned opinions. 38 of the 78 decisions (49 %) were unanimous in judgment, with 24 decisions being completely unanimous (a single opinion with every participating justice joining it). This was the largest percentage of unanimous decisions that the Court had in ten years, since the October 2002 term (when 51 % of the decisions handed down were unanimous). The Court split 5 -- 4 in 23 cases (29 % of the total); of these, 16 broke down along the traditionally perceived ideological lines, with Chief Justice Roberts and Justices Scalia, Thomas, and Alito on one side, Justices Ginsburg, Breyer, Sotomayor and Kagan on the other, and Justice Kennedy holding the balance. Of these 16 cases, Justice Kennedy sided with the conservatives in 10 cases and with the liberals in 6. Three cases were decided by an unusual alignment of justices, with Chief Justice Roberts joined by Justices Kennedy, Thomas, Breyer and Alito in the majority, and Justices Scalia, Ginsburg, Sotomayor, and Kagan in the minority. The greatest agreement between justices was between Ginsburg and Kagan, who agreed in 72 of the 75 cases (96 %) in which both voted; the lowest agreement was between Ginsburg and Alito, who agreed in only 45 of the 77 cases (58 %) in which they both participated. Justice Kennedy was in the majority of 5 -- 4 decisions in 20 of 24 cases (83 %), and in 71 of 78 cases (91 %) during the term, in line with his position as the "swing vote '' of the Court. The Supreme Court first met on February 1, 1790, at the Merchants ' Exchange Building in New York City. When Philadelphia became the capital, the Court met briefly in Independence Hall before settling in Old City Hall from 1791 until 1800. After the government moved to Washington, D.C., the Court occupied various spaces in the United States Capitol building until 1935, when it moved into its own purpose - built home. The four - story building was designed by Cass Gilbert in a classical style sympathetic to the surrounding buildings of the Capitol and Library of Congress, and is clad in marble. The building includes the courtroom, justices ' chambers, an extensive law library, various meeting spaces, and auxiliary services including a gymnasium. The Supreme Court building is within the ambit of the Architect of the Capitol, but maintains its own police force separate from the Capitol Police. Located across First Street from the United States Capitol at One First Street NE and Maryland Avenue, the building is open to the public from 9 am to 4: 30 pm weekdays but closed on weekends and holidays. Visitors may not tour the actual courtroom unaccompanied. There is a cafeteria, a gift shop, exhibits, and a half - hour informational film.
When the Court is not in session, lectures about the courtroom are held hourly from 9: 30 am to 3: 30 pm, and reservations are not necessary. When the Court is in session, the public may attend oral arguments, which are held twice each morning (and sometimes afternoons) on Mondays, Tuesdays, and Wednesdays in two - week intervals from October through late April, with breaks during December and February. Visitors are seated on a first - come, first - served basis. One estimate is that about 250 seats are available. The number of open seats varies from case to case; for important cases, some visitors arrive the day before and wait through the night. From mid-May until the end of June, the court releases orders and opinions beginning at 10 am, and these 15 - to 30 - minute sessions are open to the public on a similar basis. Supreme Court Police are available to answer questions. Congress is authorized by Article III of the federal Constitution to regulate the Supreme Court 's appellate jurisdiction. The Supreme Court has original and exclusive jurisdiction over cases between two or more states, but may decline to hear such cases. It also possesses original, but not exclusive, jurisdiction to hear "all actions or proceedings to which ambassadors, other public ministers, consuls, or vice consuls of foreign states are parties; all controversies between the United States and a State; and all actions or proceedings by a State against the citizens of another State or against aliens. '' In 1906, the Court asserted its original jurisdiction to prosecute individuals for contempt of court in United States v. Shipp. The resulting proceeding remains the only contempt proceeding and only criminal trial in the Court 's history. The contempt proceeding arose from the lynching of Ed Johnson in Chattanooga, Tennessee the evening after Justice John Marshall Harlan granted Johnson a stay of execution to allow his lawyers to file an appeal. Johnson was removed from his jail cell by a lynch mob -- aided by the local sheriff who left the prison virtually unguarded -- and hanged from a bridge, after which a deputy sheriff pinned a note on Johnson 's body reading: "To Justice Harlan. Come get your nigger now. '' The local sheriff, John Shipp, cited the Supreme Court 's intervention as the rationale for the lynching. The Court appointed its deputy clerk as special master to preside over the trial in Chattanooga, with closing arguments made in Washington before the Supreme Court justices, who found nine individuals guilty of contempt, sentencing three to 90 days in jail and the rest to 60 days in jail. In all other cases, however, the Court has only appellate jurisdiction, including the ability to issue writs of mandamus and writs of prohibition to lower courts. It considers cases based on its original jurisdiction very rarely; almost all cases are brought to the Supreme Court on appeal. In practice, the only original jurisdiction cases heard by the Court are disputes between two or more states.
The Court 's appellate jurisdiction consists of appeals from federal courts of appeal (through certiorari, certiorari before judgment, and certified questions), the United States Court of Appeals for the Armed Forces (through certiorari), the Supreme Court of Puerto Rico (through certiorari), the Supreme Court of the Virgin Islands (through certiorari), the District of Columbia Court of Appeals (through certiorari), and "final judgments or decrees rendered by the highest court of a State in which a decision could be had '' (through certiorari). In the last case, an appeal may be made to the Supreme Court from a lower state court if the state 's highest court declined to hear an appeal or lacks jurisdiction to hear an appeal. For example, a decision rendered by one of the Florida District Courts of Appeal can be appealed to the U.S. Supreme Court if (a) the Supreme Court of Florida declined to grant certiorari, e.g. Florida Star v. B.J.F., or (b) the district court of appeal issued a per curiam decision simply affirming the lower court 's decision without discussing the merits of the case, since the Supreme Court of Florida lacks jurisdiction to hear appeals of such decisions. The power of the Supreme Court to consider appeals from state courts, rather than just federal courts, was created by the Judiciary Act of 1789 and upheld early in the Court 's history, by its rulings in Martin v. Hunter 's Lessee (1816) and Cohens v. Virginia (1821). The Supreme Court is the only federal court that has jurisdiction over direct appeals from state court decisions, although there are several devices that permit so - called "collateral review '' of state cases. Notably, this "collateral review '' often applies only to individuals on death row and not through the regular judicial system. Since Article Three of the United States Constitution stipulates that federal courts may only entertain "cases '' or "controversies '', the Supreme Court cannot decide cases that are moot and it does not render advisory opinions, as the supreme courts of some states may do. For example, in DeFunis v. Odegaard, 416 U.S. 312 (1974), the Court dismissed a lawsuit challenging the constitutionality of a law school affirmative action policy because the plaintiff student had graduated since he began the lawsuit, and a decision from the Court on his claim would not be able to redress any injury he had suffered. However, the Court recognizes some circumstances where it is appropriate to hear a case that is seemingly moot. If an issue is "capable of repetition yet evading review '', the Court will address it even though the party before the Court would not himself be made whole by a favorable result. In Roe v. Wade, 410 U.S. 113 (1973), and other abortion cases, the Court addresses the merits of claims pressed by pregnant women seeking abortions even if they are no longer pregnant, because it takes longer than the typical human gestation period to appeal a case through the lower courts to the Supreme Court. Another mootness exception is voluntary cessation of unlawful conduct, in which the Court considers the probability of recurrence and the plaintiff 's need for relief. The United States is divided into thirteen circuit courts of appeals, each of which is assigned a "circuit justice '' from the Supreme Court. Although this concept has been in continuous existence throughout the history of the republic, its meaning has changed through time.
Under the Judiciary Act of 1789, each justice was required to "ride circuit '', or to travel within the assigned circuit and consider cases alongside local judges. This practice encountered opposition from many justices, who cited the difficulty of travel. Moreover, there was a potential for a conflict of interest on the Court if a justice had previously decided the same case while riding circuit. Circuit riding was abolished in 1891. Today, the circuit justice for each circuit is responsible for dealing with certain types of applications that, under the Court 's rules, may be addressed by a single justice. These include applications for emergency stays (including stays of execution in death - penalty cases) and injunctions pursuant to the All Writs Act arising from cases within that circuit, as well as routine requests such as requests for extensions of time. In the past, circuit justices also sometimes ruled on motions for bail in criminal cases, writs of habeas corpus, and applications for writs of error granting permission to appeal. Ordinarily, a justice will resolve such an application by simply endorsing it "granted '' or "denied '' or entering a standard form of order. However, the justice may elect to write an opinion -- referred to as an in - chambers opinion -- in such matters if he or she wishes. A circuit justice may sit as a judge on the Court of Appeals of that circuit, but over the past hundred years, this has rarely occurred. A circuit justice sitting with the Court of Appeals has seniority over the chief judge of the circuit. The chief justice has traditionally been assigned to the District of Columbia Circuit, the Fourth Circuit (which includes Maryland and Virginia, the states surrounding the District of Columbia), and, since it was established, the Federal Circuit. Each associate justice is assigned to one or two judicial circuits. As of the allotment of June 27, 2017, four of the current justices are assigned to circuits on which they previously sat as circuit judges: Chief Justice Roberts (D.C. Circuit), Justice Breyer (First Circuit), Justice Alito (Third Circuit), and Justice Kennedy (Ninth Circuit). A term of the Supreme Court commences on the first Monday of each October, and continues until June or early July of the following year. Each term consists of alternating periods of around two weeks known as "sittings '' and "recesses. '' Justices hear cases and deliver rulings during sittings; they discuss cases and write opinions during recesses. Nearly all cases come before the court by way of petitions for writs of certiorari, commonly referred to as "cert ''. The Court may review any case in the federal courts of appeals "by writ of certiorari granted upon the petition of any party to any civil or criminal case. '' The Court may only review "final judgments rendered by the highest court of a state in which a decision could be had '' if those judgments involve a question of federal statutory or constitutional law. The party that appealed to the Court is the petitioner and the non-mover is the respondent. All case names before the Court are styled petitioner v. respondent, regardless of which party initiated the lawsuit in the trial court. For example, criminal prosecutions are brought in the name of the state and against an individual, as in State of Arizona v. Ernesto Miranda.
If the defendant is convicted, and his conviction then is affirmed on appeal in the state supreme court, when he petitions for cert the name of the case becomes Miranda v. Arizona. There are situations where the Court has original jurisdiction, such as when two states have a dispute against each other, or when there is a dispute between the United States and a state. In such instances, a case is filed with the Supreme Court directly. Examples of such cases include United States v. Texas, a case to determine whether a parcel of land belonged to the United States or to Texas, and Virginia v. Tennessee, a case turning on whether an incorrectly drawn boundary between two states can be changed by a state court, and whether the setting of the correct boundary requires Congressional approval. Although it has not happened since 1794 in the case of Georgia v. Brailsford, parties in an action at law in which the Supreme Court has original jurisdiction may request that a jury determine issues of fact. Two other original jurisdiction cases involve colonial era borders and rights under navigable waters in New Jersey v. Delaware, and water rights between riparian states upstream of navigable waters in Kansas v. Colorado. A cert petition is voted on at a session of the court called a conference. A conference is a private meeting of the nine Justices by themselves; the public and the Justices ' clerks are excluded. The rule of four permits four of the nine justices to grant a writ of certiorari. If it is granted, the case proceeds to the briefing stage; otherwise, the case ends. Except in death penalty cases and other cases in which the Court orders briefing from the respondent, the respondent may, but is not required to, file a response to the cert petition. The court grants a petition for cert only for "compelling reasons '', spelled out in the court 's Rule 10. One such reason is a "circuit split '': when a conflict arises from differing interpretations of the same law or constitutional provision issued by different federal circuit courts of appeals. If the court votes to deny a cert petition, as it does in the vast majority of such petitions that come before it, it does so typically without comment. A denial of a cert petition is not a judgment on the merits of a case, and the decision of the lower court stands as the final ruling in the case. To manage the high volume of cert petitions received by the Court each year (of the more than 7,000 petitions the Court receives each year, it will usually request briefing and hear oral argument in 100 or fewer), the Court employs an internal case management tool known as the "cert pool. '' Currently, all justices except Justices Alito and Gorsuch participate in the cert pool. When the Court grants a cert petition, the case is set for oral argument. Both parties will file briefs on the merits of the case, as distinct from the reasons they may have argued for granting or denying the cert petition. With the consent of the parties or approval of the Court, amici curiae, or "friends of the court '', may also file briefs. The Court holds two - week oral argument sessions each month from October through April. Each side has thirty minutes to present its argument (the Court may choose to give more time, though this is rare), and during that time, the Justices may interrupt the advocate and ask questions.
The petitioner gives the first presentation, and may reserve some time to rebut the respondent 's arguments after the respondent has concluded. Amici curiae may also present oral argument on behalf of one party if that party agrees. The Court advises counsel to assume that the Justices are familiar with and have read the briefs filed in a case. In order to plead before the court, an attorney must first be admitted to the court 's bar. Approximately 4,000 lawyers join the bar each year, and the bar contains an estimated 230,000 members. In practice, however, pleading is limited to several hundred attorneys; the rest join for a one - time fee of $200, earning the court about $750,000 annually. Attorneys can be admitted as either individuals or as groups. The group admission is held before the current justices of the Supreme Court, wherein the Chief Justice approves a motion to admit the new attorneys. Lawyers commonly apply for the cosmetic value of a certificate to display in their office or on their resume. They also receive access to better seating if they wish to attend an oral argument. Members of the Supreme Court Bar are also granted access to the collections of the Supreme Court Library. At the conclusion of oral argument, the case is submitted for decision. Cases are decided by majority vote of the Justices. It is the Court 's practice to issue decisions in all cases argued in a particular Term by the end of that Term. Within that Term, however, the Court is under no obligation to release a decision within any set time after oral argument. At the conclusion of oral argument, the Justices retire to another conference at which the preliminary votes are tallied, and the most senior Justice in the majority assigns the initial draft of the Court 's opinion to a Justice on his or her side. Drafts of the Court 's opinion, as well as any concurring or dissenting opinions, circulate among the Justices until the Court is prepared to announce the judgment in a particular case. Since recording devices are banned inside the courtroom of the United States Supreme Court Building, the delivery of the decision to the media is done via paper copies and is known as the Running of the Interns. It is possible that, through recusals or vacancies, the Court divides evenly on a case. If that occurs, then the decision of the court below is affirmed, but does not establish binding precedent. In effect, it results in a return to the status quo ante. For a case to be heard, there must be a quorum of at least six justices. If a quorum is not available to hear a case and a majority of qualified justices believes that the case cannot be heard and determined in the next term, then the judgment of the court below is affirmed as if the Court had been evenly divided. For cases brought to the Supreme Court by direct appeal from a United States District Court, the Chief Justice may order the case remanded to the appropriate U.S. Court of Appeals for a final decision there. This has only occurred once in U.S. history, in the case of United States v. Alcoa (1945). The Court 's opinions are published in three stages. First, a slip opinion is made available on the Court 's web site and through other outlets. Next, several opinions and lists of the court 's orders are bound together in paperback form, called a preliminary print of United States Reports, the official series of books in which the final version of the Court 's opinions appears. About a year after the preliminary prints are issued, a final bound volume of U.S.
Reports is issued. The individual volumes of U.S. Reports are numbered so that users may cite this set of reports -- or a competing version published by another commercial legal publisher but containing parallel citations -- to allow those who read their pleadings and other briefs to find the cases quickly and easily. As of March 2012, the U.S. Reports have published a total of 30,161 Supreme Court opinions, covering the decisions handed down from February 1790 to March 2012. This figure does not reflect the number of cases the Court has taken up, as several cases can be addressed by a single opinion (see, for example, Parents v. Seattle, where Meredith v. Jefferson County Board of Education was also decided in the same opinion; by a similar logic, Miranda v. Arizona actually decided not only Miranda but also three other cases: Vignera v. New York, Westover v. United States, and California v. Stewart). A more unusual example is The Telephone Cases, which comprise a single set of interlinked opinions that take up the entire 126th volume of the U.S. Reports. Opinions are also collected and published in two unofficial, parallel reporters: Supreme Court Reporter, published by West (now a part of Thomson Reuters), and United States Supreme Court Reports, Lawyers ' Edition (simply known as Lawyers ' Edition), published by LexisNexis. In court documents, legal periodicals and other legal media, case citations generally contain cites from each of the three reporters; for example, citation to Citizens United v. Federal Election Commission is presented as Citizens United v. Federal Election Com'n, 558 U.S. 310, 130 S. Ct. 876, 175 L. Ed. 2d 753 (2010), with "S. Ct. '' representing the Supreme Court Reporter, and "L. Ed. '' representing the Lawyers ' Edition. Lawyers use an abbreviated format to cite cases, in the form "vol U.S. page, pin (year) '', where vol is the volume number, page is the page number on which the opinion begins, and year is the year in which the case was decided. Optionally, pin is used to "pinpoint '' to a specific page number within the opinion. For instance, the citation for Roe v. Wade is 410 U.S. 113 (1973), which means the case was decided in 1973 and appears on page 113 of volume 410 of U.S. Reports. For opinions or orders that have not yet been published in the preliminary print, the volume and page numbers may be replaced with "___ ''.
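This abbreviated format is mechanical enough to express in a few lines of code. The following Python sketch is purely illustrative; the function name format_citation is hypothetical, and only the "vol U.S. page, pin (year) '' pattern and the Roe v. Wade figures come from the text above.

```python
from typing import Optional

def format_citation(volume: int, page: int, year: int,
                    pin: Optional[int] = None) -> str:
    """Build a U.S. Reports citation in the "vol U.S. page, pin (year)" form.

    volume -- volume of U.S. Reports in which the opinion appears
    page   -- page on which the opinion begins
    year   -- year the case was decided
    pin    -- optional "pinpoint" page within the opinion
    """
    pin_part = f", {pin}" if pin is not None else ""
    return f"{volume} U.S. {page}{pin_part} ({year})"

# Roe v. Wade begins on page 113 of volume 410 and was decided in 1973:
print(format_citation(410, 113, 1973))       # 410 U.S. 113 (1973)
print(format_citation(410, 113, 1973, 120))  # 410 U.S. 113, 120 (1973)
```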
The Federal court system and the judicial authority to interpret the Constitution received little attention in the debates over the drafting and ratification of the Constitution. The power of judicial review, in fact, is nowhere mentioned in it. Over the ensuing years, the question of whether the power of judicial review was even intended by the drafters of the Constitution was quickly frustrated by the lack of evidence bearing on the question either way. Nevertheless, the power of the judiciary to overturn laws and executive actions it determines are unlawful or unconstitutional is a well - established precedent. Many of the Founding Fathers accepted the notion of judicial review; in Federalist No. 78, Alexander Hamilton wrote: "A Constitution is, in fact, and must be regarded by the judges, as a fundamental law. It therefore belongs to them to ascertain its meaning, as well as the meaning of any particular act proceeding from the legislative body. If there should happen to be an irreconcilable variance between the two, that which has the superior obligation and validity ought, of course, to be preferred; or, in other words, the Constitution ought to be preferred to the statute. '' The Supreme Court firmly established its power to declare laws unconstitutional in Marbury v. Madison (1803), consummating the American system of checks and balances. In explaining the power of judicial review, Chief Justice John Marshall stated that the authority to interpret the law was the particular province of the courts, part of the duty of the judicial department to say what the law is. His contention was not that the Court had privileged insight into constitutional requirements, but that it was the constitutional duty of the judiciary, as well as the other branches of government, to read and obey the dictates of the Constitution. Since the founding of the republic, there has been a tension between the practice of judicial review and the democratic ideals of egalitarianism, self - government, self - determination and freedom of conscience. At one pole are those who view the Federal Judiciary and especially the Supreme Court as being "the most separated and least checked of all branches of government. '' Indeed, federal judges and justices on the Supreme Court are not required to stand for election by virtue of their tenure "during good behavior '', and their pay may "not be diminished '' while they hold their position (Section 1 of Article Three). Though subject to the process of impeachment, only one Justice has ever been impeached and no Supreme Court Justice has been removed from office. At the other pole are those who view the judiciary as the least dangerous branch, with little ability to resist the exhortations of the other branches of government. The Supreme Court, it is noted, cannot directly enforce its rulings; instead, it relies on respect for the Constitution and for the law for adherence to its judgments. One notable instance of nonacquiescence came in 1832, when the state of Georgia ignored the Supreme Court 's decision in Worcester v. Georgia. President Andrew Jackson, who sided with the Georgia courts, is supposed to have remarked, "John Marshall has made his decision; now let him enforce it! ''; however, this alleged quotation has been disputed. Some state governments in the South also resisted the desegregation of public schools after the 1954 judgment Brown v. Board of Education. More recently, many feared that President Nixon would refuse to comply with the Court 's order in United States v. Nixon (1974) to surrender the Watergate tapes. Nixon, however, ultimately complied with the Supreme Court 's ruling. Supreme Court decisions can be (and have been) purposefully overturned by constitutional amendment, which has happened on five occasions. When the Court rules on matters involving the interpretation of laws rather than of the Constitution, simple legislative action can reverse the decisions (for example, in 2009 Congress passed the Lilly Ledbetter Fair Pay Act, superseding the limitations given in Ledbetter v. Goodyear Tire & Rubber Co. in 2007). Also, the Supreme Court is not immune from political and institutional consideration: lower federal courts and state courts sometimes resist doctrinal innovations, as do law enforcement officials. In addition, the other two branches can restrain the Court through other mechanisms.
Congress can increase the number of justices, giving the President power to influence future decisions by appointments (as in Roosevelt 's Court Packing Plan discussed above). Congress can pass legislation that restricts the jurisdiction of the Supreme Court and other federal courts over certain topics and cases: this is suggested by language in Section 2 of Article Three, where the appellate jurisdiction is granted "with such Exceptions, and under such Regulations as the Congress shall make. '' The Court sanctioned such congressional action in the Reconstruction case Ex parte McCardle (1869), though it rejected Congress ' power to dictate how particular cases must be decided in United States v. Klein (1871). On the other hand, through its power of judicial review, the Supreme Court has defined the scope and nature of the powers and separation between the legislative and executive branches of the federal government; for example, in United States v. Curtiss - Wright Export Corp. (1936), Dames & Moore v. Regan (1981), and notably in Goldwater v. Carter (1979) (where it effectively gave the Presidency the power to terminate ratified treaties without the consent of Congress or the Senate). The Court 's decisions can also impose limitations on the scope of Executive authority, as in Humphrey 's Executor v. United States (1935), the Steel Seizure Case (1952), and United States v. Nixon (1974). Each Supreme Court justice hires several law clerks to review petitions for writ of certiorari, research them, prepare bench memorandums, and draft opinions. Associate justices are allowed four clerks. The chief justice is allowed five clerks, but Chief Justice Rehnquist hired only three per year, and Chief Justice Roberts usually hires only four. Generally, law clerks serve a term of one to two years. The first law clerk was hired by Associate Justice Horace Gray in 1882. Oliver Wendell Holmes, Jr. and Louis Brandeis were the first Supreme Court justices to use recent law school graduates as clerks, rather than hiring a "stenographer - secretary ''. Most law clerks are recent law school graduates. The first female clerk was Lucile Lomen, hired in 1944 by Justice William O. Douglas. The first African - American, William T. Coleman, Jr., was hired in 1948 by Justice Felix Frankfurter. A disproportionately large number of law clerks have obtained law degrees from elite law schools, especially Harvard, Yale, the University of Chicago, Columbia, and Stanford. From 1882 to 1940, 62 % of law clerks were graduates of Harvard Law School. Those chosen to be Supreme Court law clerks usually have graduated in the top of their law school class and were often an editor of the law review or a member of the moot court board. By the mid-1970s, clerking previously for a judge in a federal court of appeals had also become a prerequisite to clerking for a Supreme Court justice. Seven Supreme Court justices previously clerked for other justices: Byron White for Frederick M. Vinson, John Paul Stevens for Wiley Rutledge, William Rehnquist for Robert H. Jackson, Stephen Breyer for Arthur Goldberg, John Roberts for William Rehnquist, Elena Kagan for Thurgood Marshall and Neil Gorsuch for both Byron White and Anthony Kennedy. Gorsuch is the first justice to serve alongside a justice for whom he clerked.
Several current Supreme Court justices have also clerked in the federal courts of appeals: John Roberts for Judge Henry Friendly of the United States Court of Appeals for the Second Circuit, Justice Samuel Alito for Judge Leonard I. Garth of the United States Court of Appeals for the Third Circuit, Elena Kagan for Judge Abner J. Mikva of the United States Court of Appeals for the District of Columbia Circuit, and Neil Gorsuch for Judge David B. Sentelle of the United States Court of Appeals for the District of Columbia Circuit. Clerks hired by each of the justices of the Supreme Court are often given considerable leeway in the opinions they draft. "Supreme Court clerkship appeared to be a nonpartisan institution from the 1940s into the 1980s '', according to a study published in 2009 by the law review of Vanderbilt University Law School. "As law has moved closer to mere politics, political affiliations have naturally and predictably become proxies for the different political agendas that have been pressed in and through the courts '', former federal court of appeals judge J. Michael Luttig said. David J. Garrow, professor of history at the University of Cambridge, stated that the Court had thus begun to mirror the political branches of government. "We are getting a composition of the clerk workforce that is getting to be like the House of Representatives '', Professor Garrow said. "Each side is putting forward only ideological purists. '' According to the Vanderbilt Law Review study, this politicized hiring trend reinforces the impression that the Supreme Court is "a superlegislature responding to ideological arguments rather than a legal institution responding to concerns grounded in the rule of law. '' A poll conducted in June 2012 by The New York Times and CBS News showed just 44 % of Americans approve of the job the Supreme Court is doing. Three - quarters said justices ' decisions are sometimes influenced by their political or personal views. The court has been the object of criticism on a range of issues. Among them, the Supreme Court has been criticized for not keeping within Constitutional bounds by engaging in judicial activism, rather than merely interpreting law and exercising judicial restraint. Claims of judicial activism are not confined to any particular ideology. An often cited example of conservative judicial activism is the 1905 decision in Lochner v. New York, which has been criticized by many prominent thinkers, including Robert Bork, Justice Antonin Scalia, and Chief Justice John Roberts, and which was reversed in the 1930s. An often cited example of liberal judicial activism is Roe v. Wade (1973), which legalized abortion in part on the basis of the "right to privacy '' inferred from the Fourteenth Amendment, a reasoning that some critics argued was circuitous. Legal scholars, justices, and presidential candidates have criticized the Roe decision. The progressive Brown v. Board of Education decision has been criticized by conservatives such as Patrick Buchanan and former presidential contender Barry Goldwater. More recently, Citizens United v. Federal Election Commission was criticized for expanding upon the precedent in First National Bank of Boston v. Bellotti (1978) that the First Amendment applies to corporations. Lincoln warned, referring to the Dred Scott decision, that if government policy became "irrevocably fixed by decisions of the Supreme Court... the people will have ceased to be their own rulers. ''
Former justice Thurgood Marshall justified judicial activism with these words: "You do what you think is right and let the law catch up. '' During different historical periods, the Court has leaned in different directions. Critics from both sides complain that activist judges abandon the Constitution and substitute their own views instead. Critics include writers such as Andrew Napolitano, Phyllis Schlafly, Mark R. Levin, Mark I. Sutherland, and James MacGregor Burns. Past presidents from both parties have attacked judicial activism, including Franklin D. Roosevelt, Richard Nixon, and Ronald Reagan. Failed Supreme Court nominee Robert Bork wrote: "What judges have wrought is a coup d'état -- slow - moving and genteel, but a coup d'état nonetheless. '' Senator Al Franken quipped that when politicians talk about judicial activism, "their definition of an activist judge is one who votes differently than they would like. '' One law professor claimed in a 1978 article that the Supreme Court is in some respects "certainly a legislative body. '' Court decisions have been criticized for failing to protect individual rights: the Dred Scott (1857) decision upheld slavery; Plessy v. Ferguson (1896) upheld segregation under the doctrine of separate but equal; Kelo v. City of New London (2005) was criticized by prominent politicians, including New Jersey governor Jon Corzine, as undermining property rights. Some critics suggest the 2009 bench with a conservative majority has "become increasingly hostile to voters '' by siding with Indiana 's voter identification laws which tend to "disenfranchise large numbers of people without driver 's licenses, especially poor and minority voters '', according to one report. Senator Al Franken criticized the Court for "eroding individual rights. '' However, others argue that the Court is too protective of some individual rights, particularly those of people accused of crimes or in detention. For example, Chief Justice Warren Burger was an outspoken critic of the exclusionary rule, and Justice Scalia criticized the Court 's decision in Boumediene v. Bush for being too protective of the rights of Guantanamo detainees, on the grounds that habeas corpus was "limited '' to sovereign territory. This criticism is related to complaints about judicial activism. George Will wrote that the Court has an "increasingly central role in American governance. '' It was criticized for intervening in bankruptcy proceedings regarding ailing carmaker Chrysler Corporation in 2009. A reporter wrote that "Justice Ruth Bader Ginsburg 's intervention in the Chrysler bankruptcy '' left open the "possibility of further judicial review '' but argued overall that the intervention was a proper use of Supreme Court power to check the executive branch. Warren E. Burger, before becoming Chief Justice, argued that since the Supreme Court has such "unreviewable power '' it is likely to "self - indulge itself '' and unlikely to "engage in dispassionate analysis ''. Larry Sabato wrote "excessive authority has accrued to the federal courts, especially the Supreme Court. '' British constitutional scholar Adam Tomkins sees flaws in the American system of having courts (and specifically the Supreme Court) act as checks on the Executive and Legislative branches; he argues that because the courts must wait, sometimes for years, for cases to navigate their way through the system, their ability to restrain other branches is severely weakened.
In contrast, the Federal Constitutional Court of Germany, for example, can directly declare a law unconstitutional upon request. There has been debate throughout American history about the boundary between federal and state power. While Framers such as James Madison and Alexander Hamilton argued in The Federalist Papers that their then - proposed Constitution would not infringe on the power of state governments, others argue that expansive federal power is good and consistent with the Framers ' wishes. The Tenth Amendment to the United States Constitution explicitly provides that "powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people. '' The Supreme Court has been criticized for giving the federal government too much power to interfere with state authority. One criticism is that it has allowed the federal government to misuse the Commerce Clause by upholding regulations and legislation which have little to do with interstate commerce, but that were enacted under the guise of regulating interstate commerce; and by voiding state legislation for allegedly interfering with interstate commerce. For example, the Commerce Clause was used by the Fifth Circuit Court of Appeals to uphold the Endangered Species Act, thus protecting six endemic species of insect near Austin, Texas, despite the fact that the insects had no commercial value and did not travel across state lines; the Supreme Court let that ruling stand without comment in 2005. Chief Justice John Marshall asserted Congress 's power over interstate commerce was "complete in itself, may be exercised to its utmost extent, and acknowledges no limitations, other than are prescribed in the Constitution. '' Justice Alito said congressional authority under the Commerce Clause is "quite broad. '' Modern day theorist Robert B. Reich suggests debate over the Commerce Clause continues today. Advocates of states ' rights such as constitutional scholar Kevin Gutzman have also criticized the Court, saying it has misused the Fourteenth Amendment to undermine state authority. Justice Brandeis, in arguing for allowing the states to operate without federal interference, suggested that states should be laboratories of democracy. One critic wrote "the great majority of Supreme Court rulings of unconstitutionality involve state, not federal, law. '' However, others see the Fourteenth Amendment as a positive force that extends "protection of those rights and guarantees to the state level. '' The Court has been criticized for keeping its deliberations hidden from public view. According to a review of Jeffrey Toobin 's exposé The Nine: Inside the Secret World of the Supreme Court, the Court 's inner workings are difficult for reporters to cover, like a closed "cartel '', only revealing itself through "public events and printed releases, with nothing about its inner workings ''. The reviewer writes: "few (reporters) dig deeply into court affairs. It all works very neatly; the only ones hurt are the American people, who know little about nine individuals with enormous power over their lives. '' Larry Sabato complains about the Court 's "insularity. '' A Fairleigh Dickinson University poll conducted in 2010 found that 61 % of American voters agreed that televising Court hearings would "be good for democracy '', and 50 % of voters stated they would watch Court proceedings if they were televised.
In recent years, many justices have appeared on television, written books and made public statements to journalists. In a 2009 interview on C - SPAN, journalists Joan Biskupic (of USA Today) and Lyle Denniston (of SCOTUSblog) argued that the Court is a "very open '' institution with only the justices ' private conferences inaccessible to others. In October 2010, the Court began the practice of posting on its website recordings and transcripts of oral arguments on the Friday after they occur. Some Court decisions have been criticized for injecting the Court into the political arena, and deciding questions that are the purview of the other two branches of government. The Bush v. Gore decision, in which the Supreme Court intervened in the 2000 presidential election and effectively chose George W. Bush over Al Gore, has been criticized extensively, particularly by liberals. Other examples are Court decisions on apportionment and re-districting: in Baker v. Carr, the court decided it could rule on apportionment questions; Justice Frankfurter in a "scathing dissent '' argued against the court wading into so - called political questions. Senator Arlen Specter said the Court should "decide more cases ''. On the other hand, although Justice Scalia acknowledged in a 2009 interview that the number of cases that the Court hears now is smaller today than when he first joined the Supreme Court, he also stated that he has not changed his standards for deciding whether to review a case, nor does he believe his colleagues have changed their standards. He attributed the high volume of cases in the late 1980s, at least in part, to an earlier flurry of new federal legislation that was making its way through the courts. Critic Larry Sabato wrote: "The insularity of lifetime tenure, combined with the appointments of relatively young attorneys who give long service on the bench, produces senior judges representing the views of past generations better than views of the current day. '' Sanford Levinson has been critical of justices who, relying on life tenure, stayed in office despite medical deterioration. James MacGregor Burns stated lifelong tenure has "produced a critical time lag, with the Supreme Court institutionally almost always behind the times. '' Proposals to solve these problems include term limits for justices, as proposed by Levinson and Sabato, as well as a mandatory retirement age proposed by Richard Epstein, among others. However, others suggest lifetime tenure brings substantial benefits, such as impartiality and freedom from political pressure. Alexander Hamilton in Federalist 78 wrote "nothing can contribute so much to its firmness and independence as permanency in office. '' The 21st century has seen increased scrutiny of justices accepting expensive gifts and travel. All of the members of the Roberts Court have accepted travel or gifts. In 2012, Justice Sonia Sotomayor received $1.9 million in advances from her publisher Knopf Doubleday. Justice Scalia and others took dozens of expensive trips to exotic locations paid for by private donors. Private events sponsored by partisan groups that are attended by both the justices and those who have an interest in their decisions have raised concerns about access and inappropriate communications. Stephen Spaulding, the legal director at Common Cause, said: "There are fair questions raised by some of these trips about their commitment to being impartial. ''
when was the consumer financial protection bureau established
Consumer Financial Protection Bureau - Wikipedia The Consumer Financial Protection Bureau (CFPB) is an agency of the United States government responsible for consumer protection in the financial sector. CFPB jurisdiction includes banks, credit unions, securities firms, payday lenders, mortgage - servicing operations, foreclosure relief services, debt collectors and other financial companies operating in the United States. The CFPB 's creation was authorized by the Dodd -- Frank Wall Street Reform and Consumer Protection Act, whose passage in 2010 was a legislative response to the financial crisis of 2007 -- 08 and the subsequent Great Recession. The CFPB was established as an independent agency, but this status is being reviewed by the US Court of Appeals. According to Director Richard Cordray, the Bureau 's priorities are mortgages, credit cards and student loans. It was designed to consolidate employees and responsibilities from a number of other federal regulatory bodies, including the Federal Reserve, the Federal Trade Commission, the Federal Deposit Insurance Corporation, the National Credit Union Administration and even the Department of Housing and Urban Development. The bureau is an independent unit located inside and funded by the United States Federal Reserve, with interim affiliation with the U.S. Treasury Department. It writes and enforces rules for financial institutions, examines both bank and non-bank financial institutions, monitors and reports on markets, as well as collects and tracks consumer complaints. Furthermore, as required under Dodd -- Frank and outlined in the 2013 CFPB -- State Supervisory Coordination Framework, the CFPB works closely with state regulators in coordinating supervision and enforcement activities. The CFPB opened its website in early February 2011 to accept suggestions from consumers via YouTube, Twitter, and its own website interface. According to the United States Treasury Department, the bureau is tasked with the responsibility to "promote fairness and transparency for mortgages, credit cards, and other consumer financial products and services ''. According to its web site, the CFPB 's "central mission... is to make markets for consumer financial products and services work for Americans -- whether they are applying for a mortgage, choosing among credit cards, or using any number of other consumer financial products ''. In 2016 alone, most of the hundreds of thousands of consumer complaints about financial services providers -- including banks and credit card issuers -- were received and compiled by the CFPB and are publicly available on a federal government database. The Trump administration wants more control over the CFPB and is considering prohibiting the publication of consumer complaints. Since the CFPB database was established in 2011, more than 730,000 complaints have been published. CFPB supporters such as the Consumers Union claim that it is a "vital tool that can help consumers make informed decisions ''. CFPB detractors argue that the CFPB database is a "gotcha game '' and that there is already a database maintained by the Federal Trade Commission, although that information is not available to the public. In July 2010, the 111th United States Congress passed the Dodd -- Frank Wall Street Reform and Consumer Protection Act in response to the late - 2000s recession and financial crisis. The agency was originally proposed in 2007 by Harvard Law School professor Elizabeth Warren.
The proposed CFPB was actively supported by Americans for Financial Reform, a newly created umbrella organization of some 250 consumer, labor, civil rights and other activist organizations. On September 17, 2010, Obama announced the appointment of Warren as Assistant to the President and Special Advisor to the Secretary of the Treasury on the Consumer Financial Protection Bureau to set up the new agency. The CFPB formally began operation on July 21, 2011, shortly after Obama announced that Warren would be passed over as Director in favor of Richard Cordray, who prior to the nomination had been hired as chief of enforcement for the agency. The Financial CHOICE Act, proposed by House Financial Services Committee Chairman Jeb Hensarling to repeal the Dodd -- Frank Wall Street Reform and Consumer Protection Act, passed the House on June 8, 2017. In June 2017, the Senate was crafting its own reform bill. Elizabeth Warren, who proposed and established the CFPB, was removed from consideration as the bureau 's first formal director after Obama administration officials became convinced Warren "could not overcome strong Republican opposition ''. On July 17, 2011, President Obama nominated former Ohio Attorney General and Ohio State Treasurer Richard Cordray to be the first formal director of the CFPB. However, his nomination was immediately in jeopardy due to 44 Senate Republicans vowing to derail any nominee in order to encourage a decentralized structure of the organization. Senate Republicans had also shown a pattern of refusing to consider regulatory agency nominees, purportedly as a method of budget cutting. Due to the way the legislation creating the bureau was written, until the first Director was in place, the agency was not able to write new rules or supervise financial institutions other than banks. On July 21, Senator Richard Shelby wrote an op - ed for the Wall Street Journal affirming his continued opposition to a centralized structure, noting that both the Securities and Exchange Commission and Federal Deposit Insurance Corporation had executive boards and that the CFPB should be no different. He noted lessons learned from experiences with Fannie Mae and Freddie Mac as support for his argument. Politico interpreted Shelby 's statements as saying that Cordray 's nomination was "dead on arrival ''. Republican threats of a filibuster to block the nomination in December 2011 led to Senate inaction. On January 4, 2012, Barack Obama issued a recess appointment to install Cordray as director through the end of 2013. This was a highly controversial move as the Senate was still holding pro forma sessions, and the possibility existed that the appointment could be challenged in court. The constitutionality of Cordray 's recess appointment came into question after a January 2013 ruling by the United States Court of Appeals for the District of Columbia Circuit that President Obama 's appointment of three members to the NLRB (at the same time as Cordray) violated the Constitution. On July 16, 2013, the Senate confirmed Cordray as director in a 66 -- 34 vote. On July 11, 2013, the CFPB Rural Designation Petition and Correction Act (H.R. 2672; 113th Congress) was introduced into the House of Representatives. The bill would amend the Dodd -- Frank Wall Street Reform and Consumer Protection Act to direct the CFPB to establish an application process that would allow a person to have their county designated as "rural '' for purposes of federal consumer financial law.
One practical effect of having a county designated rural is that people can qualify for some types of mortgages by getting them exempted from the CFPB 's qualified mortgage rule. On September 26, 2013, the Consumer Financial Protection Safety and Soundness Improvement Act of 2013 (H.R. 3193; 113th Congress) was introduced into the United States House of Representatives. If adopted, the bill would have modified the CFPB by transforming it into a five - person commission and removing it from the Federal Reserve System. The CFPB would have been renamed the "Financial Product Safety Commission ''. The bill was also intended to make it easier to override the CFPB decisions. It passed in the House of Representatives on February 27, 2014 and was received by the Senate on March 4. It was never considered in the Democratic - controlled Senate. Regulatory implementation regarding mortgages is covered on the bureau website. Topics provided for consumers include 2013 mortgage rule implementation, resources to help people comply, quick reference charts, supervision and examination materials, and a link for feedback. It also provides additional information that covers rural or under - served counties, HUD - approved housing counselors, and Appendix Q. Appendix Q relates to the debt - to - income ratio that borrowers must meet for "qualified mortgages '' and provides details about how to determine the factors for that calculation. The standard is set at no more than 43 percent.
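The 43 percent ceiling is simple to check once the two inputs are known. The following sketch is illustrative only; the function name and the sample dollar figures are hypothetical, and only the 43 percent threshold comes from the text above.

```python
def within_qm_dti_limit(monthly_debt_payments: float,
                        gross_monthly_income: float,
                        limit: float = 0.43) -> bool:
    """Check the debt-to-income (DTI) test: monthly debt payments divided
    by gross monthly income must not exceed the limit (43 % by default)."""
    dti = monthly_debt_payments / gross_monthly_income
    return dti <= limit

# $2,000 of monthly debt against $5,000 gross monthly income is a 40 % DTI:
print(within_qm_dti_limit(2000, 5000))  # True  (0.40 <= 0.43)
# $2,400 against the same income is a 48 % DTI, over the ceiling:
print(within_qm_dti_limit(2400, 5000))  # False (0.48 > 0.43)
```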
The CFPB is weighing whether it should take on a role in helping Americans manage retirement savings and regulate savings plans, particularly focusing on investment scams that target the retired and elderly. "That 's one of the things we 've been exploring and are interested in in terms of whether and what authority we have '', bureau director Richard Cordray said in a January 2013 interview. Some conservatives have been critical of this potential role, with William Tucker of the American Media Institute asserting that the agency intends to "control '' retirement savings and force people to buy federal debt. The AARP has encouraged the agency to take an active role, arguing that the bureau will help protect elderly Americans from affinity fraud that often targets senior citizens, ensuring that their investments are less likely to be stolen through securities fraud or malpractice. The main regulator of retirement and benefit plans established by employers and private industry is the U.S. Department of Labor, which enforces the main laws (ERISA, COBRA, and HIPAA), retirement plans (including 401 (k), SIMPLE, 403 (b), and traditional defined - benefit pensions) as well as many aspects of employer group - health plans. The Affordable Care Act, establishing marketplaces selling health plans directly to consumers, adopted the ERISA - style regulatory model, requiring all plans to have standardized documents such as a "summary plan document '' (SPD), but the marketplace was regulated by the individual insurance commissioners of every state, with some states having multiple regulators (California maintains both a Department of Insurance and a Department of Managed Care). IRAs, also directed to consumers, are regulated by type of custodian (the FDIC regulates bank custodians, the IRS regulates non-bank custodians). Annuities, life insurance, and disability insurance purchased directly by consumers are regulated by individual state insurance commissioners. Marketing to consumers is generally regulated by the FCC and various state laws. Because state commissioners are the main regulators, the CFPB is unlikely to assume a leadership role in retirement investment regulation without further legislation and possibly a US constitutional amendment transferring insurance lawmaking power to the federal government. In 2011, the National Association of Insurance Commissioners identified "the single most significant challenge for state insurance regulators is to be vigilant in the protection of consumers, especially in light of the changes taking place in the financial services marketplace '', and the CFPB may help fill this void. Many have argued the state - based system of insurance regulation (including insurance - like products such as annuities) is highly economically inefficient, results in uneven and non-portable insurance plans, artificially shrinks risk pools (especially in smaller states) resulting in higher premiums, limits mobility of brokers who have to re-certify in new states, complicates the needs of multi-state business, and should be replaced with a national insurance regulator or national insurance charter option as has taken place with the regulation of securities and banking. The CFPB has created a number of personal finance tools for consumers, including Ask CFPB, which compiles plain - language answers to personal finance questions, and Paying for College, which estimates the cost of attending specific universities based on the financial aid offers a student has received. The CFPB has also attempted to help consumers understand virtual currencies such as Bitcoin. Two lawsuits were filed in the early years of the CFPB; they were both dismissed by federal courts, but one was appealed and is still ongoing. The first one, filed on June 21, 2012, by a Texas bank along with the Competitive Enterprise Institute, challenged the constitutionality of provisions of the CFPB. One year later, in August 2013, a federal judge dismissed the lawsuit because the plaintiffs had failed to show that they had suffered harm. In July 2015, the Court of Appeals for the District of Columbia Circuit affirmed in part and reversed in part, holding that the bank, but not the states that later joined the lawsuit, had standing to challenge the law, and returned the case to the district court for further proceedings. A lawsuit filed July 22, 2013, by Morgan Drexen Integrated Systems, a provider of outsourced administrative support services to attorneys, and Connecticut attorney Kimberly A. Pisinski, challenged the constitutionality of the CFPB. The complaint, filed in the U.S. District Court for the District of Columbia, alleged that the "CFPB 's structure insulates it from political accountability and internal checks and balances in violation of the United States Constitution. Unbridled from constitutionally - required accountability, CFPB has engaged in ultra vires and abusive practices, including attempts to regulate the practice of law (a function reserved for state bars), attempts to collect attorney - client protected material, and overreaching demands for, and mining of, personal financial information of American citizens, which has prompted a Government Accountability Office ("GAO '') investigation, commenced on July 12, 2013. '' That October, this case was dismissed by a D.C. Federal Court.
On August 22, 2013, one month after Morgan Drexen 's lawsuit, the CFPB filed its own lawsuit against Morgan Drexen in the United States District Court for the Central District of California alleging that Morgan Drexen charged advance fees for debt relief services in violation of the Telemarketing Sales Rule and engaged in deceptive acts and practices in violation of the Consumer Financial Protection Act (CFPA). The CFPB won this lawsuit and Morgan Drexen was ordered to pay $132,882,488 in restitution and a $40 million civil penalty. In October 2016, the United States Court of Appeals for the District of Columbia Circuit ruled that it was unconstitutional for the CFPB Director to be removable by the President of the United States only for cause, such as "inefficiency, neglect of duty or malfeasance. '' Circuit Judge Brett Kavanaugh, joined by Senior Circuit Judge A. Raymond Randolph, wrote that the law was "a threat to individual liberty '' and instead found that the President could remove the CFPB Director at will. The ruling essentially makes the CFPB part of the United States federal executive departments. Circuit Judge Karen L. Henderson agreed that the CFPB Director had been wrong in adopting a new interpretation of the Real Estate Settlement Procedures Act, finding the statute of limitations did not apply to the CFPB, and fining the petitioning mortgage company PHH Corporation $109 million, but she dissented from giving the President a new power to remove the Director, citing constitutional avoidance. The U.S. Court of Appeals for the District of Columbia Circuit has ordered en banc review of this decision to be heard on May 24, 2017. A 2013 press release from the United States House Financial Services Committee criticized the CFPB for what was described as a "radical structure '' that "is controlled by a single individual who can not be fired for poor performance and who exercises sole control over the agency, its hiring and its budget. '' Moreover, the committee alleged a lack of financial transparency and a lack of accountability to Congress or the President. Committee Vice Chairman Patrick McHenry expressed particular concern about travel costs and a $55 million renovation of CFPB headquarters, stating "$55 million is more than the entire annual construction and acquisition budget for GSA for the totality of federal buildings. '' In 2012, the majority of GSA 's Federal Buildings Fund went to rental costs, totaling $5.2 billion. $50 million was budgeted for construction and acquisition of facilities. In 2014, some employees and former employees of the CFPB testified before Congress about an alleged culture of racism and sexism at the agency. Former employees testified they were retaliated against for bringing problems to the attention of superiors. The CFPB has been heavily criticized for the methodology it uses to identify instances of racial discrimination among auto lenders. Because of legal constraints, the agency used a system to "guess '' the race of auto loan applicants based on their last name and zip code. Based on that information, the agency charged several lenders were discriminating against minority applicants and levied large fines and settlements against those companies. Ally Financial paid $98 million in fines and settlement fees in 2013.
As the agency 's methodology means it can only guess who may be victims of discrimination entitled to settlement funds, as of late 2015 the CFPB had yet to compensate any individuals who were victims of Ally 's allegedly discriminatory practices. In private documents, the CFPB acknowledged that its methodology overestimated the number of minority applicants, but it continued using the flawed methodology anyway to levy penalties against financial companies.
when is house of cards season 6 released
House of Cards (season 6) - wikipedia The sixth and final season of the American political drama web television series House of Cards was confirmed by Netflix on December 4, 2017, and is scheduled to be released on November 2, 2018. Unlike previous seasons that consisted of thirteen episodes each, the sixth season will consist of only eight. The season will not include former lead actor Kevin Spacey, who was fired from the show due to sexual misconduct allegations. On October 11, 2017, The Baltimore Sun reported that House of Cards had been renewed for a sixth season and that filming would begin by the end of October 2017. On October 18, 2017, production of the sixth season of House of Cards appeared to be already in progress, without an official renewal announcement by Netflix, when a gunman opened fire near a House of Cards set outside Baltimore. Production company Media Rights Capital and Netflix stated that production on the show was not impacted by the shooting. Production on the series was shut down on October 30, 2017, following the sexual assault allegations against Kevin Spacey by actor Anthony Rapp, who publicly stated that Spacey had made a sexual advance on him in 1986, when he was 14 years old. Furthermore, Netflix announced its decision to end the series after the upcoming season, though multiple sources stated that the decision to end the series was made prior to Rapp 's accusation. The following day, Netflix and MRC announced that production on the season would be suspended indefinitely, in order to review the current situation and to address any concerns of the cast and crew. On November 3, 2017, Netflix announced that it would no longer be associated with Spacey in any capacity whatsoever. On December 4, 2017, Ted Sarandos, Netflix 's chief content officer, announced that production would restart in 2018 with Robin Wright in the lead, without Spacey 's involvement, and revealed that the sixth and final season of the show would consist of eight episodes. The latest developments significantly affected the production process, as the cast and crew were forced to scrap plans and start working on a new script for the season in a limited amount of time, due to other contractual obligations. On January 31, 2018, House of Cards resumed production, with new cast members including Diane Lane and Greg Kinnear, who were later joined by Australian actor Cody Fern in a regular role. Filming for the sixth season of House of Cards was completed on May 25, 2018. On January 31, 2018, it was announced that Diane Lane and Greg Kinnear would play siblings in the sixth season of the series. Australian actor Cody Fern was added a few days later, in a regular role.
why did many cities disappear after the fall of rome
Fall of the Western Roman Empire - wikipedia The Fall of the Western Roman Empire (also called Fall of the Roman Empire or Fall of Rome) was the process of decline in the Western Roman Empire in which it failed to enforce its rule, and its vast territory was divided into several successor polities. The Roman Empire lost the strengths that had allowed it to exercise effective control over the West; modern historians mention factors including the effectiveness and numbers of the army, the health and numbers of the Roman population, the strength of the economy, the competence of the Emperors, the internal struggles for power, the religious changes of the period, and the efficiency of the civil administration. Increasing pressure from barbarians outside Roman culture also contributed greatly to the collapse. The reasons for the collapse are major subjects of the historiography of the ancient world and they inform much modern discourse on state failure. Relevant dates include 117 CE, when the Empire was at its greatest territorial extent, and the accession of Diocletian in 284. Irreversible major territorial loss, however, began in 376 with a large - scale irruption of Goths and others. In 395, after winning two destructive civil wars, Theodosius I died, leaving a collapsing field army and the Empire, still plagued by Goths, divided between his two incapable sons. By 476, when Odoacer deposed the Emperor Romulus, the Western Roman Emperor wielded negligible military, political, or financial power and had no effective control over the scattered Western domains that could still be described as Roman. Invading barbarians had established their own power in most of the area of the Western Empire. While its legitimacy lasted for centuries longer and its cultural influence remains today, the Western Empire never had the strength to rise again. The Fall is not the only unifying concept for these events; the period described as Late Antiquity emphasizes the cultural continuities throughout and beyond the political collapse. Since 1776, when Edward Gibbon published the first volume of his The History of the Decline and Fall of the Roman Empire, Decline and Fall has been the theme around which much of the history of the Roman Empire has been structured. "From the eighteenth century onward, '' historian Glen Bowersock wrote, "we have been obsessed with the fall: it has been valued as an archetype for every perceived decline, and, hence, as a symbol for our own fears. '' The loss of centralized political control over the West, and the lessened power of the East, are universally agreed, but the theme of decline has been taken to cover a much wider time span than the hundred years from 376. For Cassius Dio, the accession of the emperor Commodus in 180 CE marked the descent "from a kingdom of gold to one of rust and iron ''. Gibbon started his story in 98 and Theodor Mommsen regarded the whole of the imperial period as unworthy of inclusion in his Nobel Prize - winning History of Rome. Arnold J. Toynbee and James Burke argue that the entire Imperial era was one of steady decay of institutions founded in republican times.
As one convenient marker for the end, 476 has been used since Gibbon, but other markers include the Crisis of the Third Century, the Crossing of the Rhine in 406 (or 405), the sack of Rome in 410, the death of Julius Nepos in 480, all the way to the Fall of New Rome in 1453. Gibbon gave a classic formulation of reasons why the Fall happened. He began an ongoing controversy about the role of Christianity, but he gave great weight to other causes of internal decline and to attacks from outside the Empire. The story of its ruin is simple and obvious; and, instead of inquiring why the Roman empire was destroyed, we should rather be surprised that it had subsisted so long. The victorious legions, who, in distant wars, acquired the vices of strangers and mercenaries, first oppressed the freedom of the republic, and afterwards violated the majesty of the purple. The emperors, anxious for their personal safety and the public peace, were reduced to the base expedient of corrupting the discipline which rendered them alike formidable to their sovereign and to the enemy; the vigour of the military government was relaxed, and finally dissolved, by the partial institutions of Constantine; and the Roman world was overwhelmed by a deluge of Barbarians. Alexander Demandt enumerated 210 different theories on why Rome fell, and new ideas have emerged since. Historians still try to analyze the reasons for loss of political control over a vast territory (and, as a subsidiary theme, the reasons for the survival of the Eastern Roman Empire). Comparison has also been made with China after the end of the Han dynasty, which re-established unity under the Sui dynasty while the Mediterranean world remained politically disunited. Low taxes helped the Roman aristocracy increase their wealth, which equalled or exceeded the revenues of the central government. An emperor sometimes replenished his treasury by confiscating the estates of the "super-rich '', but in the later period, the resistance of the wealthy to paying taxes was one of the factors contributing to the collapse of the Empire. From at least the time of Henri Pirenne, scholars have described a continuity of Roman culture and political legitimacy long after 476. Pirenne postponed the demise of classical civilization to the 8th century. He challenged the notion that Germanic barbarians had caused the Western Roman Empire to end, and he refused to equate the end of the Western Roman Empire with the end of the office of emperor in Italy. He pointed out the essential continuity of the economy of the Roman Mediterranean even after the barbarian invasions, and suggested that only the Muslim conquests represented a decisive break with antiquity. The more recent formulation of a historical period characterized as "Late Antiquity '' emphasizes the transformations of ancient to medieval worlds within a cultural continuity. In recent decades archaeologically - based argument even extends the continuity in material culture and in patterns of settlement as late as the eleventh century. Observing the political reality of lost control, but also the cultural and archaeological continuities, historians have described the process as a complex cultural transformation, rather than a fall. The Roman Empire reached its greatest geographical extent under Trajan (emperor 98 -- 117), who ruled a prosperous state that stretched from Armenia to the Atlantic.
The Empire had large numbers of trained, supplied, and disciplined soldiers, as well as a comprehensive civil administration based in thriving cities with effective control over public finances. Among its literate elite it had ideological legitimacy as the only worthwhile form of civilization and a cultural unity based on comprehensive familiarity with Greek and Roman literature and rhetoric. The Empire 's power allowed it to maintain extreme differences of wealth and status (including slavery on a large scale), and its wide - ranging trade networks permitted even modest households to use goods made by professionals far away. Its financial system allowed it to raise significant taxes which, despite endemic corruption, supported a large regular army with logistics and training. The cursus honorum, a standardized series of military and civil posts organised for ambitious aristocratic men, ensured that powerful noblemen became familiar with military and civil command and administration. At a lower level within the army, connecting the aristocrats at the top with the private soldiers, a large number of centurions were well - rewarded, literate, and responsible for training, discipline, administration, and leadership in battle. City governments with their own properties and revenues functioned effectively at a local level; membership of city councils involved lucrative opportunities for independent decision - making, and, despite its obligations, became seen as a privilege. Under a series of emperors who each adopted a mature and capable successor, the Empire did not require civil wars to regulate the imperial succession. Requests could be submitted directly to the better emperors, and the answers had the force of law, putting the imperial power directly in touch with even humble subjects. The cults of polytheist religion were hugely varied, but none claimed that theirs was the only truth, and their followers displayed mutual tolerance, producing a polyphonous religious harmony. Religious strife was rare after the suppression of the Bar Kokhba revolt in 136 (after which the devastated Judaea ceased to be a major centre for Jewish unrest). Heavy mortality in 165 -- 180 from the Antonine Plague seriously impaired attempts to repel Germanic invaders, but the legions generally held or at least speedily re-instated the borders of the Empire. The Empire suffered from multiple, serious crises during the third century, including the rise of the Sassanid Empire, which inflicted three crushing defeats on Roman field armies and remained a potent threat for centuries. Other disasters included repeated civil wars, barbarian invasions, and more mass mortality in the Plague of Cyprian (from 250 onwards). Rome abandoned the province of Dacia on the north of the Danube (271), and for a short period the Empire split into a Gallic Empire in the West (260 -- 274), a Palmyrene Empire in the East (260 -- 273), and a central Roman rump state. The Rhine / Danube frontier also came under more effective threat from larger barbarian groupings, which had developed better agriculture and larger populations. The Empire survived the Crisis of the Third Century, directing its economy successfully towards defence, but survival came at the price of a more centralized and bureaucratic state. Under Gallienus the senatorial aristocracy ceased joining the ranks of the senior military commanders, its typical members lacking interest in military service and showing incompetence at command. 
Aurelian reunited the Empire in 274, and from 284 Diocletian and his successors reorganized it with more emphasis on the military. John the Lydian, writing over two centuries later, reported that Diocletian 's army at one point totaled 389,704 men, plus 45,562 in the fleets, and numbers may have increased later. With the limited communications of the time, both the European and the Eastern frontiers needed the attention of their own supreme commanders. Diocletian tried to solve this problem by re-establishing an adoptive succession with a senior (Augustus) and junior (Caesar) emperor in each half of the Empire, but this system of tetrarchy broke down within one generation; the hereditary principle re-established itself with generally unfortunate results, and thereafter civil war again became the main method of establishing new imperial regimes. Although Constantine the Great (in office 306 to 337) again reunited the Empire, towards the end of the fourth century the need for division was generally accepted. From then on, the Empire existed in constant tension between the need for two emperors and their mutual mistrust. Until late in the fourth century the united Empire retained sufficient power to launch attacks against its enemies in Germania and in the Sassanid Empire. Receptio of barbarians became widely practiced: imperial authorities admitted potentially hostile groups into the Empire, split them up, and allotted to them lands, status, and duties within the imperial system. In this way many groups provided unfree workers (coloni) for Roman landowners, and recruits (laeti) for the Roman army. Sometimes their leaders became officers. Normally the Romans managed the process carefully, with sufficient military force on hand to ensure compliance, and cultural assimilation followed over the next generation or two.

The new supreme rulers disposed of the legal fiction of the early Empire, which had presented the emperor as but the first among equals; emperors from Aurelian (reigned 270 -- 275) onwards openly styled themselves as dominus et deus, "lord and god '', titles appropriate for a master - slave relationship. An elaborate court ceremonial developed, and obsequious flattery became the order of the day. Under Diocletian, the flow of direct requests to the emperor rapidly reduced and soon ceased altogether. No other form of direct access replaced them, and the emperor received only information filtered through his courtiers. Official cruelty, supporting extortion and corruption, may also have become more commonplace. While the scale, complexity, and violence of government were unmatched, the emperors lost control over their whole realm insofar as that control came increasingly to be wielded by anyone who paid for it. Meanwhile, the richest senatorial families, immune from most taxation, engrossed more and more of the available wealth and income, while also becoming divorced from any tradition of military excellence. One scholar identifies a great increase in the purchasing power of gold -- two and a half times between 274 and the later fourth century -- which may be an index of growing economic inequality between a gold - rich elite and a cash - poor peasantry. Within the late Roman military, many recruits and even officers had barbarian origins, and soldiers are recorded as using possibly - barbarian rituals such as elevating a claimant on shields.
Some scholars have seen this as an indication of weakness; others disagree, seeing neither barbarian recruits nor new rituals as causing any problem with the effectiveness or loyalty of the army.

In 313 Constantine I declared official toleration of Christianity, followed over the ensuing decades by establishment of Christian orthodoxy and by official and private action against pagans and non-orthodox Christians. His successors generally continued this process, and Christianity became the religion of any ambitious civil official. Under Constantine the cities lost their revenue from local taxes, and under Constantius II (r. 337 -- 361) their endowments of property. This worsened the existing difficulty in keeping the city councils up to strength, and the services provided by the cities were scamped or abandoned. Public building projects became fewer, more often repairs than new construction, and were now provided at state expense rather than by local grandees wishing to consolidate long - term local influence. A further financial abuse was Constantius 's increased habit of granting to his immediate entourage the estates of persons condemned for treason and other capital charges; this reduced future though not immediate income, and those close to the emperor gained a strong incentive to stimulate his suspicion of plots. Constantine settled Franks on the lower left bank of the Rhine; their settlements required a line of fortifications to keep them in check, indicating that Rome had lost almost all local control. Under Constantius, bandits came to dominate areas such as Isauria well within the Empire. The tribes of Germany also became more populous and more threatening. In Gaul, which did not really recover from the invasions of the third century, there was widespread insecurity and economic decline in the 300s, perhaps worst in Armorica. By 350, after decades of pirate attacks, virtually all villas in Armorica were deserted, and local use of money ceased about 360. Repeated attempts to economize on military expenditure included billeting troops in cities, where they could less easily be kept under military discipline and could more easily extort from civilians. Except in the rare case of a determined and incorruptible general, these troops proved ineffective in action and dangerous to civilians. Frontier troops were often given land rather than pay; as they farmed for themselves, their direct costs diminished, but so did their effectiveness, and there was much less economic stimulus to the frontier economy. However, except for the provinces along the lower Rhine, the agricultural economy was generally doing well. The average nutritional state of the population in the West suffered a serious decline in the late second century; the population of North - Western Europe did not recover, though the Mediterranean regions did. The numbers and effectiveness of the regular soldiers may have declined during the fourth century: payrolls were inflated so that pay could be diverted and exemptions from duty sold; their opportunities for personal extortion were multiplied by residence in cities; and their effectiveness was reduced by concentration on extortion instead of drill. However, extortion, gross corruption, and occasional ineffectiveness were not new to the Roman army; there is no consensus whether its effectiveness significantly declined before 376.
Ammianus Marcellinus, himself a professional soldier, repeats longstanding observations about the superiority of contemporary Roman armies being due to training and discipline, not to physical size or strength. Despite a possible decrease in its ability to assemble and supply large armies, Rome maintained an aggressive and potent stance against perceived threats almost to the end of the fourth century.

Julian (r. 360 -- 363) launched a drive against official corruption which allowed the tax demands in Gaul to be reduced to one - third of their previous amount, while all government requirements were still met. In civil legislation Julian was notable for his pro-pagan policies. All Christian sects were officially tolerated by Julian, persecution of heretics was forbidden, and non-Christian religions were encouraged. Some Christians continued to destroy temples, disrupt rituals, and break sacred images, seeking martyrdom and at times achieving it at the hands of non-Christian mobs or secular authorities; some pagans attacked the Christians who had previously been involved with the destruction of temples. Julian won victories against Germans who had invaded Gaul. He launched an expensive campaign against the Persians, which ended in defeat and his own death. He succeeded in marching to the Sassanid capital of Ctesiphon, but lacked adequate supplies for an assault. He burned his boats and supplies to show resolve in continuing operations, but the Sassanids began a war of attrition by burning crops. Finding himself cut off in enemy territory, he began a land retreat during which he was mortally wounded. His successor Jovian, acclaimed by a demoralized army, began his brief reign (363 -- 364) trapped in Mesopotamia without supplies. To purchase safe passage home, he had to concede areas of northern Mesopotamia and Kurdistan, including the strategically important fortress of Nisibis, which had been Roman since before the Peace of Nisibis in 299.

The brothers Valens (r. 364 -- 378) and Valentinian I (r. 364 -- 375) energetically tackled the threats of barbarian attacks on all frontiers and tried to alleviate the burdens of taxation, which had risen continuously over the previous forty years; Valens in the East reduced the tax demand by half in his fourth year. Both were Christians and confiscated the temple lands that Julian had restored, but were generally tolerant of other beliefs. Valentinian in the West refused to intervene in religious controversy; in the East, Valens had to deal with Christians who did not conform to his ideas of orthodoxy, and persecution formed part of his response. The wealth of the church increased dramatically, immense resources both public and private being used for ecclesiastical construction and support of the religious life. Bishops in wealthy cities were thus able to offer vast patronage; Ammianus described some as "enriched from the offerings of matrons '', who "ride seated in carriages, wearing clothing chosen with care, and serve banquets so lavish that their entertainments outdo the tables of kings ''. Edward Gibbon remarked that "the soldiers ' pay was lavished on the useless multitudes of both sexes who could only plead the merits of abstinence and chastity '', though there are no figures for the monks and nuns nor for their maintenance costs. Pagan rituals and buildings had not been cheap either; the move to Christianity may not have had significant effects on the public finances.
Some public disorder also followed competition for prestigious posts; Pope Damasus I was installed in 366 after an election whose casualties included a hundred and thirty - seven corpses in the basilica of Sicininus. Valentinian died of apoplexy while personally shouting at envoys of Germanic leaders. His successors in the West were children, his sons Gratian (r. 375 -- 383) and Valentinian II (r. 375 -- 392). Gratian, "alien from the art of government both by temperament and by training '', removed the Altar of Victory from the Senate House, and he rejected the pagan title of Pontifex Maximus.

In 376 the East faced an enormous barbarian influx across the Danube, mostly Goths who were refugees from the Huns. They were exploited by corrupt officials rather than effectively resettled, and they took up arms, joined by more Goths and by some Alans and Huns. Valens was in Asia with his main field army, preparing for an assault on the Persians, and redirecting the army and its logistic support would have required time. Gratian 's armies were distracted by Germanic invasions across the Rhine. In 378 Valens attacked the invaders with the Eastern field army, perhaps some 20,000 men -- possibly only 10 % of the soldiers nominally available in the Danube provinces -- and in the Battle of Adrianople, 9 August 378, he lost much of that army and his own life. All of the Balkan provinces were thus exposed to raiding, without effective response from the remaining garrisons, who were "more easily slaughtered than sheep ''. Cities were able to hold their own walls against barbarians who had no siege equipment, and they generally remained intact although the countryside suffered.

Gratian appointed a new Augustus, a proven general from Hispania called Theodosius. During the next four years, he partially re-established the Roman position in the East. These campaigns depended on effective imperial coordination and mutual trust -- between 379 and 380 Theodosius controlled not only the Eastern empire, but also, by agreement, the diocese of Illyricum. Theodosius was unable to recruit enough Roman troops, relying on barbarian warbands without Roman military discipline or loyalty. In contrast, during the Cimbrian War, the Roman Republic, controlling a smaller area than the western Empire, had been able to reconstitute large regular armies of citizens after greater defeats than Adrianople, and it ended that war with the near - extermination of the invading barbarian supergroups, each recorded as having more than 100,000 warriors (with allowances for the usual exaggeration of numbers by ancient authors). Theodosius 's partial failure may have stimulated Vegetius to offer advice on re-forming an effective army (the advice may date from the 390s or from the 430s):

"From the foundation of the city till the reign of the Emperor Gratian, the foot wore cuirasses and helmets. But negligence and sloth having by degrees introduced a total relaxation of discipline, the soldiers began to think their armor too heavy, as they seldom put it on. They first requested leave from the Emperor to lay aside the cuirass and afterwards the helmet. In consequence of this, our troops in their engagements with the Goths were often overwhelmed with their showers of arrows. Nor was the necessity of obliging the infantry to resume their cuirasses and helmets discovered, notwithstanding such repeated defeats, which brought on the destruction of so many great cities.
"Troops, defenseless and exposed to all the weapons of the enemy, are more disposed to fly than fight. What can be expected from a foot - archer without cuirass or helmet, who can not hold at once his bow and shield; or from the ensigns whose bodies are naked, and who can not at the same time carry a shield and the colors? The foot soldier finds the weight of a cuirass and even of a helmet intolerable. This is because he is so seldom exercised and rarely puts them on. ''

The final Gothic settlement was acclaimed with relief, even the official panegyrist admitting that these Goths could not be expelled or exterminated, nor reduced to unfree status. Instead they were either recruited into the imperial forces, or settled in the devastated provinces along the south bank of the Danube, where the regular garrisons were never fully re-established. In some later accounts, and widely in recent work, this is regarded as a treaty settlement, the first time that barbarians were given a home within the Empire in which they retained their political and military cohesion. No formal treaty is recorded, nor details of whatever agreement was actually made, and when "the Goths '' re-emerge in our records they have different leaders and are soldiers of a sort.

In 391 Alaric, a Gothic leader, rebelled against Roman control. Goths attacked the emperor himself, but within a year Alaric was accepted as a leader of Theodosius 's Gothic troops and this rebellion was over. Theodosius 's financial position must have been difficult, since he had to pay for expensive campaigning from a reduced tax base. The business of subduing barbarian warbands also demanded substantial gifts of precious metal. Nevertheless, he is represented as financially lavish, though personally frugal when on campaign. At least one extra levy provoked desperation and rioting in which the emperor 's statues were destroyed. He was pious, a Nicene Christian heavily influenced by Ambrose, and implacable against heretics. In 392 he forbade even private honor to the gods, and pagan rituals such as the Olympic Games. He either ordered or connived at the widespread destruction of sacred buildings.

Theodosius had to face a powerful usurper in the West; Magnus Maximus declared himself Emperor in 383, stripped troops from the outlying regions of Britannia (probably replacing some with federate chieftains and their war - bands) and invaded Gaul. His troops killed Gratian and he was accepted as Augustus in the Gallic provinces, where he was responsible for the first official executions of Christian heretics. To compensate the Western court for the loss of Gaul, Hispania, and Britannia, Theodosius ceded the diocese of Dacia and the diocese of Macedonia to their control. In 387 Maximus invaded Italy, forcing Valentinian II to flee to the East, where he accepted Nicene Christianity. Maximus boasted to Ambrose of the numbers of barbarians in his forces, and hordes of Goths, Huns, and Alans followed Theodosius. Maximus negotiated with Theodosius for acceptance as Augustus of the West, but Theodosius refused, gathered his armies, and counterattacked, winning the civil war in 388. There were heavy troop losses on both sides of the conflict. Later Welsh legend has Maximus 's defeated troops resettled in Armorica, instead of returning to Britannia, and by 400, Armorica was controlled by Bagaudae rather than by imperial authority. Theodosius restored Valentinian II, still a very young man, as Augustus in the West.
He also appointed Arbogast, a pagan general of Frankish origin, as Valentinian 's commander - in - chief and guardian. Valentinian quarreled in public with Arbogast, failed to assert any authority, and died, either by suicide or by murder, at the age of 21. Arbogast and Theodosius failed to come to terms, and Arbogast nominated an imperial official, Eugenius (r. 392 -- 394), as emperor in the West. Eugenius made some modest attempts to win pagan support, and with Arbogast led a large army to fight another destructive civil war. They were defeated and killed at the Battle of the Frigidus, which was attended by further heavy losses especially among the Gothic federates of Theodosius. The north - eastern approaches to Italy were never effectively garrisoned again.

Theodosius died a few months later in early 395, leaving his young sons Honorius (r. 395 -- 423) and Arcadius (r. 395 -- 408) as emperors. In the immediate aftermath of Theodosius 's death, the magister militum Stilicho, married to Theodosius 's niece, asserted himself in the West as the guardian of Honorius and commander of the remains of the defeated Western army. He also claimed control over Arcadius in Constantinople, but Rufinus, magister officiorum on the spot, had already established his own power there. Henceforward the Empire was not under the control of one man until much of the West had been permanently lost. Neither Honorius nor Arcadius ever displayed any ability either as rulers or as generals, and both lived as the puppets of their courts. Stilicho tried to reunite the Eastern and Western courts under his personal control, but in doing so achieved only the continued hostility of all of Arcadius 's successive supreme ministers.

The ineffectiveness of Roman military responses from Stilicho onwards has been described as "shocking '', with little evidence of indigenous field forces or of adequate training, discipline, pay, or supply for the barbarians who formed most of the available troops. Local defence was occasionally effective, but was often associated with withdrawal from central control and taxes; in many areas, barbarians under Roman authority attacked culturally - Roman "Bagaudae ''. Corruption, in this context the diversion of public finance from the needs of the army, may have contributed greatly to the Fall. The rich senatorial aristocrats in Rome itself became increasingly influential during the fifth century; they supported armed strength in theory, but did not wish to pay for it or to offer their own workers as army recruits. They did, however, pass large amounts of money to the Christian Church. At a local level, from the early fourth century, the town councils lost their property and their power, which often became concentrated in the hands of a few local despots beyond the reach of the law. The fifth - century Western emperors, with brief exceptions, were individuals incapable of ruling effectively or even of controlling their own courts. Those exceptions were responsible for brief but remarkable resurgences of Roman power.

Without an authoritative ruler, the Balkan provinces fell rapidly into disorder. Alaric was disappointed in his hopes for promotion to magister militum after the Battle of the Frigidus. He again led Gothic tribesmen in arms and established himself as an independent power, burning the countryside as far as the walls of Constantinople.
Alaric 's ambitions for long - term Roman office were never quite acceptable to the Roman imperial courts, and his men could never settle long enough to farm in any one area. They showed no inclination to leave the Empire and face the Huns from whom they had fled in 376; indeed the Huns were still stirring up further migrations, which often ended by attacking Rome in turn. Alaric 's group was never destroyed nor expelled from the Empire, nor acculturated under effective Roman domination.

Stilicho moved with his remaining mobile forces into Greece, a clear threat to Rufinus 's control of the Eastern empire. The bulk of Rufinus 's forces were occupied with Hunnic incursions in Asia Minor and Syria, leaving Thrace undefended. He opted to enlist Alaric and his men, and sent them to Thessaly to stave off Stilicho 's threat, which they did. No battle took place. Stilicho was forced to send some of his Eastern forces home. They went to Constantinople under the command of one Gainas, a Goth with a large Gothic following. On arrival, Gainas murdered Rufinus, and was appointed magister militum for Thrace by Eutropius, the new supreme minister and the only eunuch consul of Rome, who controlled Arcadius "as if he were a sheep ''. Stilicho obtained a few more troops from the German frontier and continued to campaign ineffectively against the Eastern empire; again he was successfully opposed by Alaric and his men. During the next year, 397, Eutropius personally led his troops to victory over some Huns who were marauding in Asia Minor. With his position thus strengthened, he declared Stilicho a public enemy, and he established Alaric as magister militum per Illyricum. A poem by Synesius advises the emperor to display manliness and remove a "skin - clad savage '' (probably Alaric) from the councils of power and his barbarians from the Roman army. We do not know if Arcadius ever became aware of the existence of this advice, but it had no recorded effect. Synesius, from a province suffering the widespread ravages of a few poor but greedy barbarians, also complained of "the peacetime war, one almost worse than the barbarian war and arising from military indiscipline and the officer 's greed. ''

The magister militum in the Diocese of Africa declared for the East and stopped the supply of grain to Rome. Italy had not fed itself for centuries and could not do so now. In 398, Stilicho sent his last reserves, a few thousand men, to re-take the Diocese of Africa, and he strengthened his position further when he married his daughter Maria to Honorius. Throughout this period Stilicho, and all other generals, were desperately short of recruits and supplies for them. In 400, Stilicho was charged to press into service any "laetus, Alamannus, Sarmatian, vagrant, son of a veteran '' or any other person liable to serve. He had reached the bottom of his recruitment pool. Though personally not corrupt, he was very active in confiscating assets; the financial and administrative machine was not producing enough support for the army. In 399, Tribigild 's rebellion in Asia Minor allowed Gainas to accumulate a significant army (mostly Goths), become supreme in the Eastern court, and execute Eutropius. He now felt that he could dispense with Alaric 's services and he nominally transferred Alaric 's province to the West. This administrative change removed Alaric 's Roman rank and his entitlement to legal provisioning for his men, leaving his army -- the only significant force in the ravaged Balkans -- as a problem for Stilicho.
In 400, the citizens of Constantinople revolted against Gainas and massacred as many of his people, soldiers and their families, as they could catch. Some Goths at least built rafts and tried to cross the strip of sea that separates Asia from Europe; the Roman navy slaughtered them. By the beginning of 401, Gainas 's head rode a pike through Constantinople while another Gothic general became consul. Meanwhile, groups of Huns started a series of attacks across the Danube, and the Isaurians marauded far and wide in Anatolia.

In 401 Stilicho travelled over the Alps to Raetia, to scrape up further troops. He left the Rhine defended only by the "dread '' of Roman retaliation, rather than by adequate forces able to take the field. Early in spring, Alaric, probably desperate, invaded Italy, and he drove Honorius westward from Mediolanum, besieging him in Hasta Pompeia in Liguria. Stilicho returned as soon as the passes had cleared, meeting Alaric in two battles (near Pollentia and Verona) without decisive results. The Goths, weakened, were allowed to retreat to Illyricum, where the Western court again gave Alaric office, though only as comes and only over Dalmatia and Pannonia Secunda rather than the whole of Illyricum. Stilicho probably supposed that this pact would allow him to put Italian government into order and recruit fresh troops. He may also have planned with Alaric 's help to relaunch his attempts to gain control over the Eastern court. However, in 405, Stilicho was distracted by a fresh invasion of Northern Italy. Another group of Goths fleeing the Huns, led by one Radagaisus, devastated the north of Italy for six months before Stilicho could muster enough forces to take the field against them. Stilicho recalled troops from Britannia, and the depth of the crisis was shown when he urged all Roman soldiers to allow their personal slaves to fight beside them. His forces, including Hun and Alan auxiliaries, may in the end have totalled rather less than 15,000 men. Radagaisus was defeated and executed, and some 12,000 prisoners from the defeated horde were drafted into Stilicho 's service. Stilicho continued negotiations with Alaric; Flavius Aetius, son of one of Stilicho 's major supporters, was sent as a hostage to Alaric in 405. In 406 Stilicho, hearing of new invaders and rebels who had appeared in the northern provinces, insisted on making peace with Alaric, probably on the basis that Alaric would prepare to move either against the Eastern court or against the rebels in Gaul. The Senate deeply resented peace with Alaric; in 407, when Alaric marched into Noricum and demanded a large payment for his expensive efforts in Stilicho 's interests, the Senate, "inspired by the courage, rather than the wisdom, of their predecessors, '' preferred war. One senator famously declaimed Non est ista pax, sed pactio servitutis ("This is not peace, but a pact of servitude ''). Stilicho paid Alaric four thousand pounds of gold nevertheless. Stilicho sent Sarus, a Gothic general, over the Alps to face the usurper Constantine III, but Sarus lost and barely escaped, having to leave his baggage to the bandits who now infested the Alpine passes. The empress Maria, daughter of Stilicho, died in 407 or early 408, and her sister Aemilia Materna Thermantia married Honorius. In the East, Arcadius died on 1 May 408 and was replaced by his son Theodosius II; Stilicho seems to have planned to march to Constantinople, and to install there a regime loyal to himself.
He may also have intended to give Alaric a senior official position and send him against the rebels in Gaul. Before he could do so, while he was away at Ticinum at the head of a small detachment, a bloody coup against his supporters took place at Honorius 's court. It was led by Stilicho 's own creature, one Olympius. Stilicho received news of the coup at Bononia (where he was probably waiting for Alaric). His small escort of barbarians was led by Sarus, who rebelled: Sarus 's Gothic troops massacred the Hun contingent in their sleep and then withdrew towards the cities in which their families were billeted. Stilicho ordered that these troops should not be admitted, but, now without an army, he was forced to flee for sanctuary; though promised his life, he was killed. Alaric was again declared an enemy of the Emperor. The conspiracy then massacred the families of the federate troops (as presumed supporters of Stilicho, although they had probably rebelled against him), and the troops defected en masse to Alaric. The conspirators seem to have let their main army disintegrate, and had no policy except hunting down supporters of Stilicho. Italy was left without effective indigenous defence forces thereafter. Heraclianus, a co-conspirator of Olympius, became governor of the Diocese of Africa, where he controlled the source of most of Italy 's grain, and he supplied food only in the interests of Honorius 's regime.

As a declared "enemy of the Emperor '', Alaric was denied the legitimacy that he needed to collect taxes and hold cities without large garrisons, which he could not afford to detach. He again offered to move his men, this time to Pannonia, in exchange for a modest sum of money and the modest title of Comes, but he was refused as a supporter of Stilicho. He moved into Italy, probably using the route and supplies arranged for him by Stilicho, bypassing the imperial court in Ravenna, which was protected by widespread marshland and had a port, and he menaced the city of Rome itself. There was no equivalent of the determined response to the catastrophic Battle of Cannae in 216 BCE, when the entire Roman population, even slaves, had been mobilized to resist the enemy. Alaric 's military operations centred on the port of Rome, through which Rome 's grain supply had to pass.

Alaric 's first siege of Rome in 408 caused dreadful famine within the walls. It was ended by a payment that, though large, was less than one of the richest senators could have produced. The super-rich aristocrats made little contribution; pagan temples were stripped of ornaments to make up the total. With promises of freedom, Alaric also recruited many of the slaves in Rome. Alaric withdrew to Tuscany and recruited more slaves. Ataulf, a Goth nominally in Roman service and brother - in - law to Alaric, marched through Italy to join Alaric despite taking casualties from a small force of Hunnic mercenaries led by Olympius. Sarus was an enemy of Ataulf, and on Ataulf 's arrival went back into imperial service. In 409 Olympius fell to further intrigue, having his ears cut off before he was beaten to death. Alaric tried again to negotiate with Honorius, but his demands (now even more moderate, only frontier land and food) were inflated by the messenger, and Honorius responded with insults, which were reported verbatim to Alaric. He broke off negotiations and the standoff continued.
Honorius 's court made overtures to the usurper Constantine III in Gaul and arranged to bring Hunnic forces into Italy; Alaric ravaged Italy outside the fortified cities (which he could not garrison), and the Romans refused open battle (for which they had inadequate forces). Late in the year Alaric sent bishops to express his readiness to leave Italy if Honorius would only grant his people a supply of grain. Honorius, sensing weakness, flatly refused. Alaric moved to Rome and captured Galla Placidia, sister of Honorius. The Senate in Rome, despite its loathing for Alaric, was now desperate enough to give him almost anything he wanted. They had no food to offer, but they tried to give him imperial legitimacy; with the Senate 's acquiescence, he elevated Priscus Attalus as his puppet emperor, and he marched on Ravenna. Honorius was planning to flee to Constantinople when a reinforcing army of 4,000 soldiers from the East disembarked in Ravenna. These troops garrisoned the walls, and Honorius held on. He had Constantine 's principal court supporter executed, and Constantine abandoned plans to march to Honorius 's defence. Attalus failed to establish his control over the Diocese of Africa, and no grain arrived in Rome, where the famine became even more frightful. Jerome reports cannibalism within the walls. Attalus brought Alaric no real advantage, failing also to come to any useful agreement with Honorius (who was offered mutilation, humiliation, and exile). Indeed, Attalus 's claim served mainly as a threat to Honorius, and Alaric dethroned him after a few months.

In 410 Alaric took Rome by starvation, sacked it for three days (there was relatively little destruction, and in some Christian holy places Alaric 's men even refrained from wanton wrecking and rape), and invited its remaining barbarian slaves to join him, which many did. The city of Rome was the seat of the richest senatorial noble families and the centre of their cultural patronage; to pagans it was the sacred origin of the empire, and to Christians the seat of the heir of Saint Peter, Pope Innocent I, the most authoritative bishop of the West. Rome had not fallen to an enemy since the Battle of the Allia over eight centuries before. Refugees spread the news and their stories throughout the Empire, and the meaning of the fall was debated with religious fervour. Both Christians and pagans wrote embittered tracts, blaming paganism or Christianity respectively for the loss of Rome 's supernatural protection, and blaming Stilicho 's earthly failures in either case. Some Christian responses anticipated the imminence of Judgement Day. Augustine in his book "City of God '' ultimately rejected the pagan and Christian idea that religion should have worldly benefits; he developed the doctrine that the City of God in heaven, undamaged by mundane disasters, was the true objective of Christians. More practically, Honorius was briefly persuaded to set aside the laws forbidding pagans to be military officers, so that one Generidus could re-establish Roman control in Dalmatia. Generidus did this with unusual effectiveness; his techniques were remarkable for this period, in that they included training his troops, disciplining them, and giving them appropriate supplies even if he had to use his own money. The penal laws were reinstated no later than 25 August 410, and the overall trend of repression of paganism continued.
Procopius mentions a story in which Honorius, on hearing the news that Rome had "perished '', was shocked, thinking the news referred to his favorite chicken, which he had named "Roma ''. On hearing that it was the city itself that had fallen, he breathed a sigh of relief:

At that time they say that the Emperor Honorius in Ravenna received the message from one of the eunuchs, evidently a keeper of the poultry, that Roma had perished. And he cried out and said, "And yet it has just eaten from my hands! '' For he had a very large cockerel, Roma by name; and the eunuch comprehending his words said that it was the city of Roma which had perished at the hands of Alaric, and the emperor with a sigh of relief answered quickly: "But I thought that my fowl Roma had perished. '' So great, they say, was the folly with which this emperor was possessed.

Alaric then moved south, intending to sail to Africa, but his ships were wrecked in a storm and he died of fever shortly afterwards. His successor Ataulf, still regarded as a usurper and given only occasional and short - term grants of supplies, moved north into the turmoil of Gaul, where there was some prospect of food. His supergroup of barbarians is called the Visigoths in modern works: they may now have been developing their own sense of identity.

The Crossing of the Rhine in 405 / 6 brought unmanageable numbers of German and Alan barbarians (perhaps some 30,000 warriors, 100,000 people) into Gaul. They may have been trying to get away from the Huns, who about this time advanced to occupy the Great Hungarian Plain. For the next few years these barbarian tribes wandered in search of food and employment, while Roman forces fought each other in the name of Honorius and a number of competing claimants to the imperial throne. The remaining troops in Britannia elevated a succession of imperial usurpers. The last, Constantine III, raised an army from the remaining troops in Britannia, invaded Gaul and defeated forces loyal to Honorius led by Sarus. Constantine 's power reached its peak in 409: he controlled Gaul and beyond, he was joint consul with Honorius, and his magister militum Gerontius defeated the last Roman force to try to hold the borders of Hispania. It was led by relatives of Honorius; Constantine executed them. Gerontius went to Hispania, where he may have settled the Sueves and the Asding Vandals. Gerontius then fell out with his master and elevated one Maximus as his own puppet emperor. He defeated Constantine and was besieging him in Arelate when Honorius 's general Constantius arrived from Italy with an army (possibly mainly of Hun mercenaries). Gerontius 's troops deserted him and he committed suicide. Constantius continued the siege, defeating a relieving army. Constantine surrendered in 411 on a promise that his life would be spared, but he was executed.

In 410, the Roman civitates of Britannia rebelled against Constantine and evicted his officials. They asked for help from Honorius, who replied that they should look to their own defence. While the British may have regarded themselves as Roman for several generations, and British armies may at times have fought in Gaul, no central Roman government is known to have appointed officials in Britannia thereafter.

In 411, Jovinus rebelled and took over Constantine 's remaining troops on the Rhine. He relied on the support of Burgundians and Alans to whom he offered supplies and land. In 413 Jovinus also recruited Sarus; Ataulf destroyed their regime in the name of Honorius, and both Jovinus and Sarus were executed.
The Burgundians were settled on the left bank of the Rhine. Ataulf then operated in the south of Gaul, sometimes with short - term supplies from the Romans. All usurpers had been defeated, but large barbarian groups remained un-subdued in both Gaul and Hispania. The imperial government was quick to restore the Rhine frontier. The invading tribes of 407 had passed into Hispania at the end of 409, but the Visigoths had exited Italy at the beginning of 412 and settled themselves in Narbo.

Heraclianus was still in command in the Diocese of Africa, the last of the clique that overthrew Stilicho to retain power. In 413 he led an invasion of Italy, lost to a subordinate of Constantius, and fled back to Africa, where he was murdered by Constantius 's agents. In January 414 Roman naval forces blockaded Ataulf in Narbo, where he married Galla Placidia. The choir at the wedding included Attalus, a puppet emperor without revenues or soldiers. Ataulf famously declared that he had abandoned his intention to set up a Gothic empire because of the irredeemable barbarity of his followers, and that instead he sought to restore the Roman Empire. He handed Attalus over to Honorius 's regime for mutilation, humiliation, and exile, and abandoned Attalus 's supporters. (One of them, Paulinus Pellaeus, recorded that the Goths considered themselves merciful for allowing him and his household to leave destitute, but alive, without being raped.) Ataulf moved out of Gaul, to Barcelona. There his infant son by Galla Placidia was buried, and there Ataulf was assassinated by one of his household retainers, possibly a former follower of Sarus. His ultimate successor Wallia had no agreement with the Romans; his people had to plunder in Hispania for food.

In 416 Wallia reached agreement with Constantius; he sent Galla Placidia back to Honorius and received provisions, six hundred thousand modii of wheat. From 416 to 418, Wallia 's Goths campaigned in Hispania on Constantius 's behalf, exterminating the Siling Vandals in Baetica and reducing the Alans to the point where the survivors sought the protection of the king of the Asding Vandals. (After retrenchment they formed another barbarian supergroup, but for the moment they were reduced in numbers and effectively cowed.) In 418, by agreement with Constantius, Wallia 's Goths accepted land to farm in Aquitania. Constantius also reinstituted an annual council of the southern Gallic provinces, to meet at Arelate. Although Constantius rebuilt the western field army to some extent -- the Notitia Dignitatum gives a list of the units of the western field army at this time -- he did so only by replacing half of its units (vanished in the wars since 395) with re-graded barbarians, and with garrison troops removed from the frontier. Constantius had married the princess Galla Placidia (despite her protests) in 417. The couple soon had two children, Honoria and Valentinian III, and Constantius was elevated to the position of Augustus in 420. This earned him the hostility of the Eastern court, which had not agreed to his elevation. Nevertheless, Constantius had achieved an unassailable position at the Western court, in the imperial family, and as the able commander - in - chief of a partially restored army. This settlement represented a real success for the Empire -- a poem by Rutilius Namatianus celebrates his voyage back to Gaul in 417 and his confidence in a restoration of prosperity.
But it marked huge losses of territory and of revenue; Rutilius travelled by ship past the ruined bridges and countryside of Tuscany, and in the west the River Loire had become the effective northern boundary of Roman Gaul. In the east of Gaul the Franks controlled large areas; the effective line of Roman control until 455 ran from north of Cologne (lost to the Ripuarian Franks in 459) to Boulogne. The Italian areas which had been compelled to support the Goths had most of their taxes remitted for several years. Even in southern Gaul and Hispania large barbarian groups remained, with thousands of warriors, in their own non-Roman military and social systems. Some occasionally acknowledged a degree of Roman political control, but without the local application of Roman leadership and military power they and their individual subgroups pursued their own interests.

Constantius died in 421, after only seven months as Augustus. He had been careful to make sure that there was no successor in waiting, and his own children were far too young to take his place. Honorius was unable to control his own court, and the death of Constantius initiated more than ten years of instability. Initially Galla Placidia sought Honorius 's favour in the hope that her son might ultimately inherit. Other court interests managed to defeat her, and she fled with her children to the Eastern court in 422. Honorius himself died, shortly before his thirty - ninth birthday, in 423. After some months of intrigue, the patrician Castinus installed Joannes as Western Emperor, but the Eastern Roman government proclaimed the child Valentinian III instead, with his mother Galla Placidia acting as regent during his minority. Joannes had few troops of his own. He sent Aetius to raise help from the Huns. An Eastern army landed in Italy, captured Joannes, cut his hand off, abused him in public, and killed him with most of his senior officials. Aetius returned, three days after Joannes 's death, at the head of a substantial Hunnic army which made him the most powerful general in Italy. After some fighting, Placidia and Aetius came to an agreement; the Huns were paid off and sent home, while Aetius received the position of magister militum.

Galla Placidia, as Augusta, mother of the Emperor, and his guardian until 437, could maintain a dominant position in court, but women in Ancient Rome did not exercise military power and she could not herself become a general. She tried for some years to avoid reliance on a single dominant military figure, maintaining a balance of power between her three senior officers: Aetius (magister militum in Gaul), Count Boniface (governor in the Diocese of Africa), and Flavius Felix (magister militum praesentalis in Italy). Meanwhile, the Empire deteriorated seriously. Apart from the losses in the Diocese of Africa, Hispania was slipping out of central control and into the hands of local rulers and Suevic bandits. In Gaul the Rhine frontier had collapsed, the Visigoths in Aquitaine may have launched further attacks on Narbo and Arelate, and the Franks, increasingly powerful although disunited, were the major power in the north - east. Armorica was controlled by Bagaudae, local leaders not under the authority of the Empire. Aetius at least campaigned vigorously and mostly victoriously, defeating aggressive Visigoths, Franks, fresh Germanic invaders, Bagaudae in Armorica, and a rebellion in Noricum. Not for the first time in Rome 's history, a triumvirate of mutually distrustful rulers proved unstable.
In 427 Felix tried to recall Boniface from Africa; he refused, and overcame Felix 's invading force. Boniface probably recruited some Vandal troops among others. In 428 the Vandals and Alans were united under the able, ferocious, and long - lived king Genseric; he moved his entire people to Tarifa near Gibraltar, divided them into 80 groups nominally of 1,000 people (perhaps 20,000 warriors in total), and crossed from Hispania to Mauretania without opposition. (The Straits of Gibraltar were not an important thoroughfare at the time, and there were no significant fortifications or military presence at this end of the Mediterranean.) They spent a year moving slowly to Numidia, defeating Boniface. Boniface returned to Italy, where Aetius had recently had Felix executed. Boniface was promoted to magister militum and earned the enmity of Aetius, who may have been absent in Gaul at the time. In 432 the two met at the Battle of Ravenna, which left Aetius 's forces defeated and Boniface mortally wounded. Aetius temporarily retired to his estates, but after an attempt to murder him he raised another Hunnic army (probably by conceding parts of Pannonia to them) and in 433 he returned to Italy, overcoming all rivals. He never threatened to become an Augustus himself and thus maintained the support of the Eastern court, where Valentinian 's cousin Theodosius II reigned until 450.

Aetius campaigned vigorously, somewhat stabilizing the situation in Gaul and in Hispania. He relied heavily on his forces of Huns. With a ferocity celebrated centuries later in the Nibelungenlied, the Huns slaughtered many Burgundians on the middle Rhine, re-establishing the survivors as Roman allies, the first Kingdom of the Burgundians. This may have returned some sort of Roman authority to Trier. Eastern troops reinforced Carthage, temporarily halting the Vandals, who in 435 agreed to limit themselves to Numidia and leave the most fertile parts of North Africa in peace. Aetius concentrated his limited military resources to defeat the Visigoths again, and his diplomacy restored a degree of order to Hispania. However, his general Litorius was badly defeated by the Visigoths at Toulouse, and a new Suevic king, Rechiar, began vigorous assaults on what remained of Roman Hispania. At one point Rechiar even allied with Bagaudae. These were Romans not under imperial control; some of their reasons for rebellion may be indicated by the remarks of a Roman captive under Attila who was happy in his lot, giving, in Gibbon 's words, "a lively account of the vices of a declining empire, of which he had so long been the victim; the cruel absurdity of the Roman princes, unable to protect their subjects against the public enemy, unwilling to trust them with arms for their own defence; the intolerable weight of taxes, rendered still more oppressive by the intricate or arbitrary modes of collection; the obscurity of numerous and contradictory laws; the tedious and expensive forms of judicial proceedings; the partial administration of justice; and the universal corruption, which increased the influence of the rich, and aggravated the misfortunes of the poor ''.

A religious polemic of about this time complains bitterly of the oppression and extortion suffered by all but the richest Romans. Many wished to flee to the Bagaudae or even to foul - smelling barbarians:
"Although these men differ in customs and language from those with whom they have taken refuge, and are unaccustomed too, if I may say so, to the nauseous odor of the bodies and clothing of the barbarians, yet they prefer the strange life they find there to the injustice rife among the Romans. So you find men passing over everywhere, now to the Goths, now to the Bagaudae, or whatever other barbarians have established their power anywhere... We call those men rebels and utterly abandoned, whom we ourselves have forced into crime. For by what other causes were they made Bagaudae save by our unjust acts, the wicked decisions of the magistrates, the proscription and extortion of those who have turned the public exactions to the increase of their private fortunes and made the tax indictions their opportunity for plunder? ''

From Britannia comes an indication of the prosperity which freedom from taxes could bring: "No sooner were the ravages of the enemy checked, than the island was deluged with a most extraordinary plenty of all things, greater than was before known, and with it grew up every kind of luxury and licentiousness. '' Nevertheless, effective imperial protection from barbarian ravages was eagerly sought. About this time authorities in Britannia asked Aetius for help: "To Aetius, now consul for the third time: the groans of the Britons. '' And again a little further: "The barbarians drive us to the sea; the sea throws us back on the barbarians: thus two modes of death await us, we are either slain or drowned. '' The Romans, however, could not assist them...

The Visigoths passed another waymark on their journey to full independence; they made their own foreign policy, sending princesses to make (rather unsuccessful) marriage alliances with Rechiar of the Sueves and with Huneric, son of the Vandal king Genseric.

In 439 the Vandals moved eastward (temporarily abandoning Numidia) and captured Carthage, where they established an independent state with a powerful navy. This brought immediate financial crisis to the Western Empire; the Diocese of Africa was prosperous, normally required few troops to keep it secure, contributed large tax revenues, and exported wheat to feed Rome and many other areas. Roman troops assembled in Sicily, but the planned counter-attack never happened. Huns attacked the Eastern empire, and, in Gibbon 's words, "the troops, which had been sent against Genseric, were hastily recalled from Sicily; the garrisons, on the side of Persia, were exhausted; and a military force was collected in Europe, formidable by their arms and numbers, if the generals had understood the science of command, and the soldiers the duty of obedience. The armies of the Eastern empire were vanquished in three successive engagements... From the Hellespont to Thermopylae, and the suburbs of Constantinople, (Attila) ravaged, without resistance, and without mercy, the provinces of Thrace and Macedonia. '' Attila 's invasions of the East were stopped by the walls of Constantinople, and at this heavily fortified Eastern end of the Mediterranean there were no significant barbarian invasions across the sea into the rich southerly areas of Anatolia, the Levant, and Egypt. Despite internal and external threats, and more religious discord than the West, these provinces remained prosperous contributors to tax revenue; despite the ravages of Attila 's armies and the extortions of his peace treaties, tax revenue generally continued to be adequate for the essential state functions of the Eastern empire.
Genseric settled his Vandals as landowners and in 442 was able to negotiate very favourable peace terms with the Western court. He kept his latest gains, and his eldest son Huneric was honoured by betrothal to Princess Eudocia, who carried the legitimacy of the Theodosian dynasty. Huneric 's Gothic wife was suspected of trying to poison her father - in - law Genseric; he sent her home without her nose or ears, and his Gothic alliance came to an early end. The Romans regained Numidia, and Rome again received a grain supply from Africa. The losses of income from the Diocese of Africa were equivalent to the costs of nearly 40,000 infantry or over 20,000 cavalry. The imperial regime had to increase taxes. Despite admitting that the peasantry could pay no more, and that a sufficient army could not be raised, the imperial regime protected the interests of landowners displaced from Africa and allowed wealthy individuals to avoid taxes.

In 444, the Huns were united under Attila. His subjects included Huns, outnumbered several times over by other groups, predominantly Germanic. His power rested partly on his continued ability to reward his favoured followers with precious metals, and he continued to attack the Eastern Empire until 450, by which time he had extracted vast sums of money and many other concessions. Attila may not have needed any excuse to turn West, but he received one in the form of a plea for help from Honoria, the Emperor 's sister, who was being forced into a marriage which she resented. Attila claimed Honoria as his wife and half of the Western Empire 's territory as his dowry. Faced with refusal, he invaded Gaul in 451 with a huge army. In the bloody Battle of the Catalaunian Plains the invasion was stopped by the combined forces of the barbarians within the Western empire, coordinated by Aetius and supported by what troops he could muster. The next year, Attila invaded Italy and proceeded to march upon Rome, but an outbreak of disease in his army, lack of supplies, reports that Eastern Roman troops were attacking his noncombatant population in Pannonia, and, possibly, Pope Leo 's plea for peace induced him to halt this campaign. Attila unexpectedly died a year later (453), and his empire crumbled as his followers fought for power.

The life of Severinus of Noricum gives glimpses of the general insecurity, and of the ultimate retreat of the Romans on the Upper Danube, in the aftermath of Attila 's death. The Romans were without adequate forces; the barbarians inflicted haphazard extortion, murder, kidnap, and plunder on the Romans and on each other:

"So long as the Roman dominion lasted, soldiers were maintained in many towns at the public expense to guard the boundary wall. When this custom ceased, the squadrons of soldiers and the boundary wall were blotted out together. The troop at Batavis, however, held out. Some soldiers of this troop had gone to Italy to fetch the final pay to their comrades, and no one knew that the barbarians had slain them on the way. ''

In 454 Aetius was personally stabbed to death by Valentinian, who was himself murdered by the dead general 's supporters a year later. In Gibbon 's words, Valentinian "thought he had slain his master; he found that he had slain his protector: and he fell a helpless victim to the first conspiracy which was hatched against his throne ''. A rich senatorial aristocrat, Petronius Maximus, who had encouraged both murders, then seized the throne.
He broke the engagement between Huneric, prince of the Vandals, and Princess Eudocia, and had time to send Avitus to ask for the help of the Visigoths in Gaul before the Vandals sailed to Italy. Petronius was unable to muster any effective response and was killed by a mob as he tried to flee the city. The Vandals entered Rome and plundered it for two weeks. Despite the shortage of money for the defence of the state, considerable private wealth had accumulated since the previous sack in 410. The Vandals sailed away with large amounts of treasure and also with the Princess Eudocia, who became the wife of one Vandal king and the mother of another. The Vandals conquered Sicily, and their fleet became a constant danger to Roman sea trade and to the coasts and islands of the western Mediterranean.

Avitus, at the Visigothic court in Burdigala, declared himself Emperor. He moved on Rome with Visigothic support, which gained him acceptance by Majorian and Ricimer, commanders of the remaining army of Italy. This was the first time that a barbarian kingdom had played a key role in the imperial succession. Avitus 's son - in - law Sidonius wrote propaganda to present the Visigothic king Theoderic II as a reasonable man with whom a Roman regime could do business. Theoderic 's payoff included precious metal from stripping the remaining public ornaments of Italy, and an unsupervised campaign in Hispania. There he not only defeated the Sueves, executing his brother - in - law Rechiar, but he also plundered Roman cities. The Burgundians expanded their kingdom in the Rhone valley, and the Vandals took the remains of the Diocese of Africa. In 456 the Visigothic army was too heavily engaged in Hispania to be an effective threat to Italy, and Ricimer had just destroyed a pirate fleet of sixty Vandal ships; Majorian and Ricimer marched against Avitus and defeated him near Placentia. He was forced to become Bishop of Placentia, and died (possibly murdered) a few weeks later.

Majorian and Ricimer were now in control of Italy. Ricimer was the son of a Suevic king, and his mother was the daughter of a Gothic one, so he could not aspire to an imperial throne. After some months, which allowed for negotiation with the new emperor of Constantinople and for the defeat of 900 Alamannic invaders of Italy by one of his subordinates, Majorian was acclaimed as Augustus. Majorian is described by Gibbon as "a great and heroic character ''. He rebuilt the army and navy of Italy with vigour and set about recovering the remaining Gallic provinces, which had not recognized his elevation. He defeated the Visigoths at the Battle of Arelate, reducing them to federate status and obliging them to give up their claims in Hispania; he moved on to subdue the Burgundians, the Gallo - Romans around Lugdunum (who were granted tax concessions and whose senior officials were appointed from their own ranks), and the Suevi and Bagaudae in Hispania. Marcellinus, magister militum in Dalmatia and the pagan general of a well - equipped army, acknowledged him as emperor and recovered Sicily from the Vandals. Aegidius also acknowledged Majorian and took effective charge of northern Gaul. (Aegidius may also have used the title "King of the Franks ''.) Abuses in tax collection were reformed and the city councils were strengthened, both actions necessary to rebuild the strength of the Empire but disadvantageous to the richest aristocrats. Majorian prepared a fleet at Carthago Nova for the essential reconquest of the Diocese of Africa.
The fleet was burned by traitors, and Majorian made peace with the Vandals and returned to Italy. Here Ricimer met him, arrested him, and executed him five days later. Marcellinus in Dalmatia, and Aegidius around Soissons in northern Gaul, rejected both Ricimer and his puppets and maintained some version of Roman rule in their areas. Ricimer later ceded Narbo and its hinterland to the Visigoths for their help against Aegidius; this made it impossible for Roman armies to march from Italy to Hispania. Ricimer was then the effective ruler of Italy (but little else) for several years. From 461 to 465 the pious Italian aristocrat Libius Severus reigned. There is no record of anything significant that he even tried to achieve; he was never acknowledged by the East, whose help Ricimer needed; and he died, conveniently, in 465. After two years without a Western Emperor, the Eastern court nominated Anthemius, a successful general who had a strong claim on the Eastern throne. He arrived in Italy with an army, supported by Marcellinus and his fleet; he married his daughter to Ricimer, and he was proclaimed Augustus in 467. In 468, at vast expense, the Eastern empire assembled an enormous force to help the West retake the Diocese of Africa. Marcellinus rapidly drove the Vandals from Sardinia and Sicily, and a land invasion evicted them from Tripolitania. The commander in chief with the main force defeated a Vandal fleet near Sicily and landed at Cape Bon. Here Genseric offered to surrender, if he could have a five - day truce to prepare the process of surrender. He used the respite to prepare a full - scale attack preceded by fireships, which destroyed most of the Roman fleet and killed many of its soldiers. The Vandals were confirmed in their possession of the Diocese of Africa and they retook Sardinia and Sicily. Marcellinus was murdered, possibly on orders from Ricimer. The Praetorian prefect of Gaul, Arvandus, tried to persuade the new king of the Visigoths to rebel, on the grounds that Roman power in Gaul was finished anyway, but he refused. Anthemius was still in command of an army in Italy. Additionally, in northern Gaul, a British army led by one Riothamus operated in imperial interests. Anthemius sent his son over the Alps, with an army, to request that the Visigoths return southern Gaul to Roman control. This would have allowed the Empire land access to Hispania again. The Visigoths refused, defeated the forces of both Riothamus and Anthemius, and with the Burgundians took over almost all of the remaining imperial territory in southern Gaul. Ricimer then quarreled with Anthemius, and besieged him in Rome, which surrendered in July 472 after months of starvation. Anthemius was captured and executed (on Ricimer 's orders) by the Burgundian prince Gundobad. In August Ricimer died of a pulmonary haemorrhage. Olybrius, his newly installed emperor, named Gundobad as his patrician, then died himself shortly thereafter. After the death of Olybrius there was a further interregnum until March 473, when Gundobad proclaimed Glycerius emperor. He may have made some attempt to intervene in Gaul; if so, it was unsuccessful. In 474 Julius Nepos, nephew and successor of the general Marcellinus, arrived in Rome with soldiers and authority from the eastern emperor Leo I. Gundobad had already left to contest the Burgundian throne in Gaul and Glycerius gave up without a fight, retiring to become bishop of Salona in Dalmatia.
In 475, Orestes, a former secretary of Attila, drove Julius Nepos out of Ravenna and proclaimed his own son Flavius Momyllus Romulus Augustus (Romulus Augustulus) to be Emperor, on October 31. His surname ' Augustus ' was given the diminutive form ' Augustulus ' by rivals because he was still a minor, and he was never recognized outside of Italy as a legitimate ruler. In 476, Orestes refused to grant Odoacer and the Heruli federated status, prompting an invasion. Orestes fled to the city of Pavia on August 23, 476, where the city 's bishop gave him sanctuary. Orestes was soon forced to flee Pavia when Odoacer 's army broke through the city walls, and his army ravaged the city. Odoacer 's army chased Orestes to Piacenza, where they captured and executed him on August 28, 476. On September 4, 476, Odoacer forced then 16 - year - old Romulus Augustulus, whom his father Orestes had proclaimed to be Rome 's Emperor, to abdicate. After deposing Romulus, Odoacer did not execute him. The Anonymus Valesianus wrote that Odoacer, "taking pity on his youth '', spared Romulus ' life and granted him an annual pension of 6,000 solidi before sending him to live with relatives in Campania. Odoacer then installed himself as ruler over Italy, and sent the Imperial insignia to Constantinople. By convention, the Western Roman Empire is deemed to have ended on 4 September 476, when Odoacer deposed Romulus Augustulus and proclaimed himself ruler of Italy, but this convention is subject to many qualifications. In Roman constitutional theory, the Empire was still simply united under one emperor, implying no abandonment of territorial claims. In areas where the convulsions of the dying Empire had made organized self - defence legitimate, rump states continued under some form of Roman rule after 476. Julius Nepos still claimed to be Emperor of the West and controlled Dalmatia until his murder in 480. Syagrius son of Aegidius ruled the Domain of Soissons until his murder in 487. The indigenous inhabitants of Mauretania developed kingdoms of their own, independent of the Vandals, with strong Roman traits. They again sought Imperial recognition with the reconquests of Justinian I, and they put up effective resistance to the Muslim conquest of the Maghreb. While the civitates of Britannia sank into a level of material development inferior even to their pre-Roman Iron Age ancestors, they maintained identifiably Roman traits for some time, and they continued to look to their own defence as Honorius had authorized. Odoacer began to negotiate with the East Roman (Byzantine) Emperor Zeno, who was busy dealing with unrest in the East. Zeno eventually granted Odoacer the status of patrician and accepted him as his own viceroy of Italy. Zeno, however, insisted that Odoacer had to pay homage to Julius Nepos as the Emperor of the Western Empire. Odoacer never returned any territory or real power, but he did issue coins in the name of Julius Nepos throughout Italy. The murder of Julius Nepos in 480 (Glycerius may have been among the conspirators) prompted Odoacer to invade Dalmatia, annexing it to his Kingdom of Italy. In 488 the Eastern emperor authorized a troublesome Goth, Theoderic (later known as "the Great '') to take Italy. After several indecisive campaigns, in 493 Theoderic and Odoacer agreed to rule jointly. They celebrated their agreement with a banquet of reconciliation, at which Theoderic 's men murdered Odoacer 's, and Theoderic personally cut Odoacer in half. 
The Roman Empire was not only a political unity enforced by violence. It was also the combined and elaborated civilization of the Mediterranean basin and beyond. It included manufacture, trade, and architecture, widespread secular literacy, written law, and an international language of science and literature. The Western barbarians lost much of these higher cultural practices, but their redevelopment in the Middle Ages by polities aware of the Roman achievement formed the basis for the later development of Europe. Observing the cultural and archaeological continuities through and beyond the period of lost political control, the process has been described as a complex cultural transformation, rather than a fall.
the spirit catches you when you fall down sparknotes
The Spirit Catches You and You Fall Down - wikipedia The Spirit Catches You and You Fall Down: A Hmong Child, Her American Doctors, and the Collision of Two Cultures is a 1997 book by Anne Fadiman that chronicles the struggles of a Hmong refugee family from Houaysouy, Sainyabuli Province, Laos, the Lees, and their interactions with the health care system in Merced, California. In 2005 Robert Entenmann, Ph. D., of St. Olaf College wrote that the book is "certainly the most widely read book on the Hmong experience in America. '' On the most basic level, the book tells the story of the family 's second youngest and favored daughter, Lia Lee, who was diagnosed with a severe form of epilepsy known as Lennox - Gastaut syndrome, and of the culture conflict that obstructs her treatment. Through miscommunication about medical dosages, parental refusal to give certain medicines due to mistrust, misunderstandings, and behavioral side effects, and the doctors ' inability to develop more empathy with the traditional Hmong lifestyle or to learn more about Hmong culture, Lia 's condition worsens. The dichotomy between the Hmong 's perceived spiritual factors and the Americans ' perceived scientific factors comprises the overall theme of the book. The book is written in a unique style, with every other chapter returning to Lia 's story and the chapters in - between discussing broader themes of Hmong culture, customs, and history; American involvement in and responsibility for the war in Laos; and the many problems of immigration, especially assimilation and discrimination. While particularly sympathetic to the Hmong, Fadiman presents the situation from the perspectives of both the doctors and the family. An example of medical anthropology, the book has been cited by medical journals and lecturers as an argument for greater cultural competence, and is often assigned to medical, pharmaceutic, and anthropological students in the US. In 1997, it won the National Book Critics Circle Award for General Nonfiction. Lia Lee (Romanized Popular Alphabet: Liab Lis; July 19, 1982 -- August 31, 2012): She was born in Merced, CA, and was a Hmong child who had seizures due to epilepsy. Anne Fadiman: She is the author and narrator of ' The Spirit Catches You and You Fall Down '. She wrote about her experience with Lia and her family. Through this experience, she learned the importance of understanding the diversity of culture between doctor, patient, and family. Neil Ernst and Peggy Philp: They are Lia 's doctors at MCMC. There is conflict between them and Lia 's parents because of Hmong shamanism culture versus western medicine. This leads to great misunderstandings on both sides. Foua Yang and Nao Kao Lee: They are Lia 's parents, and they love Lia very much. They only believe in their traditional approach to medical treatment, with a strong influence from shamanism. Jeanine Hilt: A social worker who makes Lia her personal cause. She fights against the medical establishment on Lia 's behalf and values the Hmong as a significant culture. Lia experienced her first seizure at three months of age, but a resident at Merced Community Medical Center misdiagnosed her condition, and the hospital was unable to communicate with her parents since it had no Hmong interpreters. Anne Fadiman wrote that Lia 's parents did not give her medication as it was prescribed because they believed that Lia Lee 's state showed a sense of spiritual giftedness, and they did not want to take that away.
The American doctors did not understand the Hmong traditional remedies that the Lee family used. The doctors treating Lia Lee, Neil Ernst and Peggy Philp, had her removed from her home when she was almost three years of age, and placed into foster care for one year, causing friction with her parents. By age 4 1⁄2, Lia Lee had been admitted to hospital care 17 times and had made over 100 outpatient visits. Lia 's worst seizure put her on the verge of death. She went to the emergency room, but Dr. Neil Ernst could not do anything. He talked to Lia 's parents about transferring her to Fresno, California, because Lia would need further treatment that Dr. Ernst could not provide. Lia 's parents "... believed their daughter was transferred not because of her critical condition but because of the Ernst 's vacation plans ''. Lia Lee slipped into a coma after suffering from a grand mal seizure in 1986, when she was four years of age. Lia Lee 's doctors believed she would die, but she remained alive, albeit with no higher brain functions. Fadiman 's sources for information about the history of the Hmong include Hmong: History of a People by Keith Quincy. She stated "Were I citing the source of each detail, Quincy 's name would attach itself to nearly every sentence in the pages on the Hmong in China. '' Fadiman 's book cited the Quincy theory that the Hmong people originated from Siberia. Entenmann wrote that because of the reliance on Quincy 's book, Fadiman 's book propagates the idea that Sonom was a Hmong king, a concept that Entenmann says is inaccurate. Marilyn Mochel, a nurse and clinical educator at Sutter Merced Medical Center (now Mercy Medical Center Merced), who heads the hospital 's cross-cultural program, said in 1999 that "The book has allowed more dialogue. There 's certainly more awareness and dialogue than before. Both sides are teachers and learners. '' Lia Lee lived in a persistent vegetative state for 26 years. She died in Sacramento, California, on August 31, 2012 at the age of 30. At the time of her death she weighed 47 pounds (21 kg) and was 4 feet 7 inches (1.40 m) tall; many children with severe brain damage have limited growth as they age. Outside of the State of California, Lia Lee 's death was not widely reported. Fadiman said that pneumonia was the immediate cause of death. Margalit Fox of The New York Times said "(b)ut Lia 's underlying medical issues were more complex still '' because she had lived in a persistent vegetative state for such a long period of time. As of 2012, most individuals who go into that state die three to five years afterwards. Ralph Jennings of The Modesto Bee said "Hmong, including some among the 2,000 in Modesto, say the book typified conflicts between their culture and American institutions. But some say it did n't capture the complexity of the Hmong culture. '' Cheng Lee, a brother of Lia Lee, said that his father and mother liked Fadiman 's book. "Compellingly written, from the heart and from the trenches. I could n't wait to finish it, then reread it and ponder it again. It is a powerful case study of a medical tragedy. '' - David H. Mark, Journal of the American Medical Association Anne Fadiman 's essay "Hmong Odyssey, '' adapted from the book, was published in the March -- April 1998 Via. The Hmong community leaders in Fresno, California praised the essay, saying that it was thoughtful and accurate.
most nba free throws made in a row
NBA Regular season records - wikipedia This article lists all - time records achieved in the NBA regular season in major statistical categories recognized by the league, including those set by teams and individuals in a game, season, and career. The NBA also recognizes records from its original incarnation, the Basketball Association of America (BAA). In 2006, the NBA introduced age requirement restrictions: prospective high school players must wait a year before entering the NBA, making age - related records harder to break. Note: Other than the longest game and disqualifications in a game, all records in this section date from the 1954 -- 55 season onward, when the 24 - second shot clock was instituted.
where did dance gavin dance get their name from
Dance Gavin Dance - wikipedia Dance Gavin Dance is an American post-hardcore band from Sacramento, California, formed in 2005. The band currently consists of Tilian Pearson (clean vocals), Jon Mess (unclean vocals), Will Swan (lead guitar), Tim Feerick (bass guitar), and Matthew Mingus (drums, percussion). The band formerly included lead vocalists Jonny Craig and Kurt Travis. Swan and Mingus are the only band members who have appeared on every studio album, while Mess has performed on each album but Happiness. Following their formation, Dance Gavin Dance released their debut EP, Whatever I Say Is Royal Ocean, in 2006 and signed to Rise Records thereafter. The band released their full - length debut studio album, Downtown Battle Mountain, in May 2007, which spawned the singles "And I Told Them I Invented Times New Roman '' and "Lemon Meringue Tie ''. Throughout 2008, the band 's vocalist Jonny Craig and guitarist Sean O'Sullivan left the band and were replaced by vocalist Kurt Travis (later of A Lot Like Birds) and guitarist Zac Garren. The band released their second full - length album, Dance Gavin Dance, in August. The album peaked at No. 172 on the U.S. Billboard 200 chart and No. 26 on the Top Independent Albums chart. Vocalist Jon Mess and bass guitarist Eric Lodge left the band before the album 's release; bassist Jason Ellis replaced Lodge. Happiness was released as the band 's third studio album in June 2009, peaking at No. 145 on the Billboard 200 Albums chart and No. 30 on the Top Independent Albums chart; it is the group 's first release on which guitarist Will Swan performed screaming vocals in addition to his guitar parts. In 2010, original vocalists Jonny Craig and Jon Mess, and bass guitarist Eric Lodge returned to the band and recorded their fourth studio album, Downtown Battle Mountain II, released in March 2011, charting at No. 82 on the Billboard 200 Albums chart, No. 13 on the Independent Albums chart, and No. 2 on the Hot Rock Albums chart, becoming their highest - selling album up to that point. Craig and Lodge left the band throughout 2012, leading to vocalist Tilian Pearson, bass guitarist Tim Feerick, and guitarist Josh Benton joining the band. The band released their fifth studio album, Acceptance Speech, in October 2013, achieving commercial success, charting at No. 42 on the Billboard 200, No. 13 on the Alternative Albums chart, No. 7 on the Independent Albums chart, and No. 6 on the Top Hard Rock Albums chart. Benton left the band shortly after the album was released. Dance Gavin Dance released their sixth studio album, Instant Gratification, in April 2015, charting at No. 32 on the Billboard 200 and becoming their best - selling album to date. In late 2015, the band embarked on their 10 Year Anniversary Tour with supporting acts A Lot Like Birds, Slaves, Strawberry Girls, and Dayshell. All of these groups, with the exception of Dayshell, feature former members of Dance Gavin Dance. They released their seventh studio album, Mothership, on October 7, 2016. Dance Gavin Dance was formed shortly after the dissolution of guitarist Will Swan, drummer Matt Mingus and bassist Eric Lodge 's previous band, Farewell Unknown. After the band recruited screamer Jon Mess, singer Jonny Craig joined after leaving Ghost Runner On Third due to conflicts within that group. They released a self - produced EP, Whatever I Say Is Royal Ocean, which was subsequently re-released on November 14, 2006, through Rise Records.
Their debut studio album Downtown Battle Mountain, produced by Kris Crummett, was released on May 15, 2007, on Rise Records. The band toured with Alesana, A Day To Remember, and Pierce the Veil in support of the album. In August 2007, guitarist Sean O'Sullivan quit the band and was replaced by Zachary Garren. In November 2007, singer Craig quit the band, and was not allowed to rejoin when he asked to, due to extreme tensions and personal conflicts with the other band members. Shortly after Craig 's departure, the band held auditions for a new singer; Nic Newsham of Gatsby 's American Dream chose not to join but guested on the song "Uneasy Hearts Weigh The Most '' on their self - titled 2008 album. Kurt Travis, formerly of Five Minute Ride, O! The Joy, and No Not Constant became the band 's new singer. On April 20, 2008, Dance Gavin Dance entered the studio to record Dance Gavin Dance with Kris Crummett helming the role of producer once more. It was released on August 19, 2008. Before the album was released, but after it had been recorded, two original members Jon Mess and Eric Lodge left Dance Gavin Dance. Following their departure, Will Swan took up screaming duties alongside playing guitar, and Jason Ellis (formerly of Five Minute Ride) replaced Eric on bass. The band filmed a music video for the song "Me and Zoloft Get Along Just Fine '' with director Robby Starbuck which was released November 18, 2008. Jon Mess ' voice was lip synced by Will Swan in the music video. In February 2009, the band went into the studio as a five - piece to record the follow - up to their self - titled album yet again with producer Kris Crummett. The resulting album, Happiness, was released on June 9, 2009. Bassist Jason Ellis appears on Happiness, but left the band before its release. He was replaced by Tim Feerick. AllMusic gave a mixed review to the album, writing, "Happiness bristles with the kind of overachiever eclecticism that 's as impressive as it is divisive, '' but that two songs were marred by the "nasally realm of high school emo, '' leaving the listener unsatisfied with the experience. A music video for the track "Tree Village '' was released shortly after the album. The band embarked on tours with Emarosa, Closure in Moscow, Scary Kids Scaring Kids, and others in support of the album. On February 10, 2010, guitarist Zachary Garren was kicked out of the band due to personal conflicts with Will Swan. Shortly after, Zac Garren met drummer Ben Rosett and formed Strawberry Girls, now signed to Tragic Hero Records. Dance Gavin Dance played the Soundwave festival as scheduled as a four - piece with Swan handling all guitar duties and Kurt Travis playing keyboards. Jon Mess and Eric Lodge officially rejoined the band in mid-2010. Josh Benton, former guitarist and bandmate of Kurt Travis in No Not Constant, took up guitar duties. In August Alternative Press said singer Kurt Travis and Dance Gavin Dance had officially parted ways in order for Jonny Craig to rejoin the band. Mess stated in an interview with noisey.vice.com that if Jonny was n't willing to rejoin, the band would 've broken up. Former guitarist Sean O'Sullivan rejoined the band for several home shows towards the end of 2010, returning the band to their Downtown Battle Mountain line - up. In March 2011, the band released Downtown Battle Mountain II. 
In March 2011 the band began their US tour with Iwrestledabearonce, In Fear And Faith, and Just Like Vinyl, followed by a small European tour, culminating in 2 shows in London playing the original Downtown Battle Mountain in full on the first night and Downtown Battle Mountain II in full on the second night. The band also played at the 2011 Vans Warped Tour. In an April, 2011 interview with Mind Equals Blown, Drummer Matt Mingus stated the band plans to release another album with the current, reunited lineup. On August 20, 2012, Jonny Craig announced his departure from the band. On August 21, 2012, Dance Gavin Dance announced Craig 's departure via Facebook saying "When Jonny rejoined DGD, we were about to call it quits. Tired of touring and being away from home for 8 months of the year, we wanted to shake things up. We thought it 'd be fun to do a dbm sequel with Jonny and go out on that. We recorded dbm2 and felt it would be a disservice to our fans not to let them hear the new record live. We did a tour and felt rejuvenated but Jonny was n't in a good place. After trying to convince him into getting help over warped tour, we canceled a headliner in November / December 2011 and told him we would n't tour until he sought out real help for his addiction. We got our good friend Matt Geise to fill in for a short run we did earlier this year while Jonny was in rehab. He was released and wanted to be out on the road immediately. We were n't sure how things would go so we only booked a small portion of the all stars tour to see if Jonny could still function as our lead singer. After a week and a half things were not going well. Everything came to a head when Jonny was publicly scolded by the owner of Sumerian Records for multiple offenses. It was then that we realized that our time with Jonny had reached its end. We wish Jonny the best In his future endeavors but he will no longer be a part of DGD. We however will continue playing and writing music because it 's what we love doing. '' Tilian Pearson, formerly of Tides of Man, was asked to become the new vocalist during the making of his solo album, Material Me. Pearson, along with guitarist Josh Benton and bassist Tim Feerick, were confirmed as official members by Jon Mess. The band 's fifth album, Acceptance Speech, was released in October 2013 with Rise Records. The album was produced by Matt Malpass. Shortly after the shooting of their music video for their single "Strawberry Swisher Pt. 3 '', Josh Benton parted ways with the band in order to focus on his career as an audio engineer and producer. Aric Garcia from Hail the Sun filled in for The Acceptance Speech Tour and The Rise Records tour. On September 17, Dance Gavin Dance released a b - side from Acceptance Speech entitled "Pussy Vultures ''. On October 29, 2014, producer Kris Crummett announced that the recording for the band 's sixth studio album was completed. In place of former member Josh Benton, Strawberry Girls and former Dance Gavin Dance guitarist Zachary Garren, Secret Band guitarist Martin Bianchini, and Hail the Sun guitarist and touring member Aric Garcia contributed their respective guitar parts on the album. On February 6, 2015, Rise Records released a teaser for the new album Instant Gratification, which was later released on April 14, 2015. On February 12, 2015, the band released the lead single, "On the Run ''. The second single, "We Own the Night '', was released on March 12, 2015. 
The music video for "We Own the Night '' was uploaded to the official Rise Records YouTube channel on May 7, 2015. On April 2, the band premiered the music video for the song "Stroke God, Millionaire ''. On February 19, 2015, the band 's guitarist Will Swan published a post on Facebook revealing that the guitar pedalboard he uses to perform live with Secret Band, Dance Gavin Dance, and Sianvar was stolen at a February 14 show on The Blue Swan Tour. He launched a GoFundMe account and asked fans to donate $2,500 to help purchase a replacement pedalboard. The fund reached its goal within three hours of its launch. Dance Gavin Dance toured as a supporting act on the Take Action! Tour with Memphis May Fire, Crown The Empire, and Palisades from March 10 to April 4, 2015. In support of Instant Gratification, the band announced the Instant Gratification Tour, which took place from April 14 to May 8, 2015, across North America with supporting acts Polyphia, Hail The Sun, and Stolas. The band embarked on their headlining Australia tour from May 14 to May 23, 2015, with opening act Arcasia. In celebration of the band 's 10th anniversary, Dance Gavin Dance embarked on the 10 Year Anniversary tour with supporting acts A Lot Like Birds, Slaves, Dayshell, and Strawberry Girls from November 14 to December 19, 2015, in North America. On December 23, 2015, Rise Records revealed that Dance Gavin Dance were to release their seventh studio album in the fall of 2016. The band performed at So What Music Festival in Grand Prairie, Texas on March 20, 2016. They also performed at the Extreme Thing Sports & Music Festival in Las Vegas, Nevada on April 2, 2016, alongside bands such as Saosin, the Story So Far, the Maine, and Mayday Parade, among several others. On March 2, 2016, the band announced their live studio album, Tree City Sessions, which was released on May 13, 2016. The album contains 12 songs recorded live at the Pus Cavern Studios in Sacramento, California. On May 10, 2016, the band announced the U.K. leg of their 10th anniversary tour that included vocalists Tilian Pearson, Kurt Travis, and Jonny Craig. On July 11, the group announced their U.S. fall tour, which took place from September 22 to October 27, 2016. On July 27, 2016, the band announced their seventh studio album, Mothership, set to be released on October 7, 2016. The lead single, "Chucky Vs. the Giant Tortoise '', was released on August 18, 2016. The music video for "Betrayed By The Game '' was released on September 16, 2016, and the music video for "Young Robot '' was released on September 27, 2016. In support of the album, the band embarked on The Mothership Tour with supporting acts The Contortionist, Hail the Sun, Good Tiger, and The White Noise, which took place from September 22 to October 27, 2016. Dance Gavin Dance embarked on their Europe 10th anniversary tour from November 3 to November 26, 2016. Through February and March 2017, the band embarked on a tour with the math rock band CHON; the tour was known as The Robot with Human Hair Vs. Chonzilla. On June 1, 2017, the band released a cover of the Bruno Mars single "That 's What I Like '' on YouTube. For the summer of 2017, the band headlined the Vans Warped Tour. On June 15, 2017, the band released a single titled "Summertime Gladness. '' On October 4, 2017, the band announced a US tour performing their 7th LP ' Mothership ' in its entirety through December of that year, with support from Polyphia, Icarus the Owl, and Wolf & Bear.
On October 17, 2017, the band announced that recording of their upcoming 8th LP had begun and that the album was expected to be released sometime in 2018. Dance Gavin Dance 's musical style has been described as post-hardcore, math rock, experimental rock, progressive rock, screamo, jazz fusion and emo. Critics have compared the band to fellow post-hardcore peers the Fall of Troy, Alexisonfire and Circa Survive. Their 2011 release Downtown Battle Mountain II is said to feature "the same structuring as The Mars Volta album The Bedlam In Goliath in that it never lets up ''. The band has personally cited Glassjaw, Earth Wind & Fire, Deftones, the Temptations, At the Drive - In, Cursive, Explosions in the Sky, MF Doom, and Radiohead as influences. Former vocalist Jonny Craig is currently the frontman of Slaves, and former vocalist Kurt Travis handled vocal duties for the band A Lot Like Birds, whereas former guitarist Zachary Garren has started his band Strawberry Girls, and lead guitarist Will Swan currently operates his own record label, Blue Swan Records, and plays guitar in the supergroup Sianvar and the Dance Gavin Dance side project Secret Band, which also features Jon Mess and Matt Mingus. Former guitarist Josh Benton also works as a record producer and has produced Dance Gavin Dance 's live album Tree City Sessions, as well as a majority of the releases on Blue Swan Records.
where do interstitial and appositional growth of cartilage occur
Endochondral ossification - wikipedia Endochondral ossification is one of the two essential processes during fetal development of the mammalian skeletal system by which bone tissue is created. Unlike intramembranous ossification, the other process by which bone tissue is created, endochondral ossification involves cartilage. Endochondral ossification is also an essential process during the rudimentary formation of long bones, the growth of the length of long bones, and the natural healing of bone fractures. The cartilage model grows in length by continuous cell division of chondrocytes, which is accompanied by further secretion of extracellular matrix. This is called interstitial growth. The process of appositional growth occurs when the cartilage model also grows in thickness due to the addition of more extracellular matrix on the peripheral cartilage surface, which is accompanied by new chondroblasts that develop from the perichondrium. The first site of ossification occurs in the primary center of ossification, which is in the middle of the diaphysis (shaft). About the time of birth in mammals, a secondary ossification center appears in each end (epiphysis) of long bones. Periosteal buds carry mesenchyme and blood vessels in, and the process is similar to that occurring in a primary ossification center. The cartilage between the primary and secondary ossification centers is called the epiphyseal plate, and it continues to form new cartilage, which is replaced by bone, a process that results in an increase in length of the bone. Growth continues until the individual is about 20 years old or until the cartilage in the plate is replaced by bone. The point of union of the primary and secondary ossification centers is called the epiphyseal line. The growth in diameter of bones around the diaphysis occurs by deposition of bone beneath the periosteum. Osteoclasts in the interior cavity continue to resorb bone until its ultimate thickness is achieved, at which point the rate of formation on the outside and degradation from the inside is constant. During endochondral ossification, five distinct zones can be seen at the light - microscope level. During fracture healing, cartilage is often formed and is called callus. This cartilage ultimately develops into new bone tissue through the process of endochondral ossification. (Figure captions: Masson Goldner trichrome stain of the growth plate in a rabbit tibia; section of fetal bone of cat, showing the irruption of the subperiosteal tissue, the fibrous layer of the periosteum, the layer of osteoblasts, and the subperiosteal bony deposit. From Quain 's "Anatomy, '' E.A. Schäfer.)
youtube i just can't get you outta my head
Ca n't Get You Out of My Head - Wikipedia "Ca n't Get You Out of My Head '' is a song recorded by Australian singer Kylie Minogue for her eighth studio album, titled Fever, which she released in 2001. The song was released in Australia by Parlophone as the lead single from the album on 8 September 2001. It was released on 17 September 2001 in the United Kingdom. In the United States, the single was released on 18 February 2002. Jointly written, composed, and produced by Cathy Dennis and Rob Davis, "Ca n't Get You Out of My Head '' is a midtempo dance - pop song which lyrically details its narrator 's obsession towards her lover. The song is famous for its "la la la '' hook. In addition to acclaim from music critics, "Ca n't Get You Out of My Head '' found commercial success on a large scale. It peaked at number one on the charts of Austria, Belgium, France, Germany, Italy, Poland, Switzerland, the United Kingdom, and every other European country excluding Finland. It also topped the charts of Minogue 's native Australia and of Canada and New Zealand. In the United States, the song peaked at number seven on the US Billboard Hot 100 chart, becoming Minogue 's biggest hit in the region since "The Loco - Motion ''. "Ca n't Get You Out of My Head '' reportedly reached number one in 40 countries across the globe. It was certified triple - platinum in Australia, double - platinum in the United Kingdom, and gold in the United States. It became Minogue 's first single to sell in excess of one million copies in the United Kingdom, where it also stands as the 28th best - selling single of the millennium. As of 2013, the song was Minogue 's highest selling single and one of the best - selling singles of all time, with worldwide sales exceeding five million. The accompanying music video for the song was directed by Dawn Shadforth, and featured Minogue performing various dance routines in different futuristic backdrops. It became notable for the revealing hooded white jumpsuit Minogue wore during one of the scenes. The song has been performed by Minogue during all of her concert tours as of 2017, with the exception of the Anti Tour. Following its release, "Ca n't Get You Out of My Head '' ranked on a number of decade - end lists compiled by magazines such as Rolling Stone, The Guardian, and NME. It is considered to be Minogue 's strongest commercial breakthrough in the United States and is said to have been the reason behind the success of its parent album Fever in the region. "Ca n't Get You Out of My Head '' is also recognized as Minogue 's signature song and was a defining point in her musical career. In 2012, the song was re-recorded for inclusion in Minogue 's orchestral compilation album, The Abbey Road Sessions. In 2000, Minogue signed to the Parlophone Records label and released her seventh studio album, Light Years. The disco and Europop inspired album was a critical and commercial success, and was later certified four times - platinum in Minogue 's native country Australia for shipment of 280,000 units, and platinum in the United Kingdom for shipment of 300,000 units. "Spinning Around '' was released as the lead single from the album, and it was a commercial success, attaining a platinum certification in Australia for shipment of 70,000 units, and a silver certification in the United Kingdom for shipment of 200,000 units. She promoted the album by embarking on the On a Night Like This tour. 
Minogue premiered "Ca n't Get You Out of My Head '' by performing it during the tour, and soon after, discussion regarding the song "quickly set online messageboards alight. '' "Ca n't Get You Out of My Head '' was chosen as the lead single from Minogue 's eighth studio album, Fever, and it was released on 8 September 2001 by Parlophone in Australia, while in the United Kingdom and other European countries it was released on 17 September. "Ca n't Get You Out of My Head '' was jointly written, composed, and produced by Cathy Dennis and Rob Davis. Dennis and Davis had been brought together by British artist manager Simon Fuller, who wanted the duo to come up with a song for British pop group S Club 7. The song was recorded using Cubase music software, which Davis ran on his Mac computer. Davis began playing an acoustic guitar and ran a 125 beats per minute drum loop, over which Dennis, herself a singer who had enjoyed chart success in the United States, began singing the line "I just ca n't get you out of my head '' in the key of D minor. After three and a half hours, the demo was recorded, and the vocals were laid down afterwards. Dennis called their recording setup for the song "the most primitive set - up you could imagine! Different producers work in different ways. But it 's good to be reminded you do n't have to be reliant on equipment. A song is about melody and lyrics and being able to take something away in your memory that is going to haunt you. '' She also regarded their production as a "very natural and fluid process, '' saying: "We know how hard we work sometimes to write songs and then spend months picking them to pieces, but this was the easiest process, the chemicals were all happy and working together. '' But after Fuller heard the demo, he felt it was wrong for S Club 7 and rejected it; English singer - songwriter Sophie Ellis - Bextor also turned down the offer to record it. Davis then met with Minogue 's A&R executive, Jamie Nelson. After hearing the demo cassette of the song, Nelson booked it for Minogue to record later that year. Nelson was impressed by the "vibe '' of the song and felt it would please the "danceheads. '' Although Davis was initially under the impression that the recording deal would be called off later, Minogue became enthusiastic to record the song after hearing 20 seconds of the demo. The whole song, including Minogue 's vocals, was recorded at Davis 's home studio in Surrey. The music, excluding the guitar part, was programmed using a Korg Triton workstation via MIDI. Dennis later remarked: "Even though Kylie was n't the first artist to be offered the song, I do n't believe it was meant to go to anyone other than Kylie, and I do n't believe anyone else would have done the incredible job she did with it, with the video, looking super-hot! '' "Ca n't Get You Out of My Head '' is a "robotic '' midtempo dance - pop song, with a tempo of 126 beats per minute. According to the sheet music published at Musicnotes.com by EMI Music Publishing, Minogue 's vocal range spans from C to D. Minogue chants a "la la la '' hook in the song, which is often heralded as its most appealing part. BBC Radio 2 noted that the composition of the song is "deceptively simple, but its veins run with the whole history of electronic music. '' They described the song 's bassline as "pulsing, '' and recognised influences of English rock band New Order and German electronic music band Kraftwerk.
The song does not follow the common verse - chorus structure and is instead composed of numerous "misplaced sections. '' Dennis reasoned that these sections "somehow work together '' as she and Davis "did n't try to force any structure after the event. The seeds were watered and they very quickly sprouted into something bigger than any of us. '' Likewise, Davis commented: "It breaks a few rules as it starts with a chorus and in comes the ' la 's ' -- that is what confused my publisher (Fuller) when he first heard it. '' Through the lyrics of the song, its singer expresses an obsession with an anonymous figure. Dorian Lynskey from The Guardian termed the song a "mystery '' as Minogue never reveals the identity of her object of infatuation. The critic suggested that the person Minogue is referring to is either "a partner, an evasive one - night stand or someone who does n't know she exists. '' Writing for the same newspaper, Everett True identified a "darker element '' in the simple lyrics and felt this sentiment was echoed in Minogue 's restrained vocals. Further, True emphasised that while Minogue 's 1987 single "I Should Be So Lucky '' had presented an optimistic romantic future, "Ca n't Get You Out of My Head '' focuses on an "unhealthy '' and potentially destructive obsession. He also noted that in the former song, Minogue played "the wide - eyed ingénue with alacrity, '' but in the latter song she is aware of the harmful nature of her infatuation, calling it a "desire that is wholly dependent on her own self - control. '' In late 2012, "Ca n't Get You Out of My Head '' was re-recorded by Minogue for inclusion in her orchestral compilation album, The Abbey Road Sessions. On the album, Minogue reworked 16 of her past songs with an orchestra, which, according to Nick Levine from BBC Music, "re-imagine them without the disco glitz and vocal effects. '' The Abbey Road Sessions - version of "Ca n't Get You Out of My Head '' features a "more dramatic, fully fleshed out '' musical arrangement, and follows a pizzicato playing technique, in which the strings of a string instrument are continuously plucked. The song received overwhelming critical acclaim, subsequently being cited as one of the greatest songs of the 2000s. Chris True from AllMusic picked the song as a highlight of Fever, commenting that it "pulses and grooves like no other she 's (Minogue 's) recorded. '' Jason Thompson from PopMatters described Minogue 's vocals as a "sexual come on '' and called the song "trim and funky, certainly something that could n't miss anywhere. '' Dominique Leone from Pitchfork Media praised the commercial prospect of the song, saying that it "exudes a catchiness that belies its inherent simplicity, so reassuring during an era when chart acts sound increasingly baroque and producers race to see who can ape electronic music trends first. '' Jim Farber from Entertainment Weekly said that the song "fully lives up to its title '' with "every sound a hook, '' and compared it to the works of Andrea True. Michael Hubbard from MusicOMH labelled the song "one of 2001 's best singles, '' saying that it "predictably beat off lesser ' competition. ' '' In 2012, The Guardian critic Everett True defined "Ca n't Get You Out of My Head '' as "one of those rare moments in pop: sleek and chic and stylish and damnably danceable, but with a darker element hidden in plain sight. 
'' Billboard 's Jason Lipshutz felt that "the metallic production would n't carry quite as much of its impact if Minogue had attempted to outshine it; wisely, her voice operates alongside it, finding renewed power in its drive '', concluding that "almost 15 years after its release, it still sounds fresh, forward - thinking and alive ''. The Abbey Road Sessions -- version of the song received generally positive reviews. Tim Sendra from AllMusic felt that "most interesting reboot '' in the album took place on "Ca n't Get You Out of My Head, '' saying that the "insistent strings push the song along with a tightly coiled electricity that is impossible to resist. '' He also picked the song as a highlight on the album. Sal Cinquemani from Slant Magazine chose the song as one of the "standouts '' on the album, saying that its arrangement "compensate for the lack of synthetic dance beats and vocal effects. '' Tania Zeine from ARIA Charts described the track as a "powerful violin ballad with the accompaniment of a large orchestra throughout the remainder. '' Simon Price from The Independent said that while the original version of "Ca n't Get You Out of My Head '' would be "impossible to improve on, '' the reworked version "turns it into a pizzicato thriller score. '' Jude Rogers from The Quietus, however, felt that the song does not "respond well to this (orchestral) treatment. '' Louis Vartel from the LGBT oriented website NewNownext, called it "a pulsing march through the most hypnotic beat of the past 20 years (...) Kylie 's American breakthrough still holds up as a cool, streamlined dance - floor magnet ''. He ranked the track on the second position of his list of the singer 's 48 greatest songs, in honor of her 48th birthday. John Russell, from LGBT magazine Queerty, concluded that "even if ' La - La - La ' did n't make Kylie a household name in the States, at the very least it introduced her to a whole new generation of adoring American gays ''. In Minogue 's native country, Australia, "Ca n't Get You Out of My Head '' entered at number one on the Australian Singles Chart of 23 September 2001, and remained at that position for four weeks. It spent a total of 12 weeks on the chart. In this region, it was certified triple - platinum by the Australian Recording Industry Association for shipments of 210,000 units. In both the Dutch - speaking Flanders and French - speaking Wallonia regions of Belgium, the song peaked at number one on the Ultratop chart, spending a total of 22 and 24 weeks on the charts, respectively. In Belgium, the song was certified double - platinum for sales of 100,000 units. In France, the song entered the French Singles Chart at number 14 and peaked at number one, spending a total of 41 weeks on the chart. In this region, it was certified platinum by the Syndicat National de l'Édition Phonographique for sales of 500,000 units. As of August 2014, the song was the 22nd best - selling single of the 21st century in France, with 542,000 units sold. In Germany, the song was number one for one week on the German Singles Chart. In this region, it was certified platinum by the Federal Association of Music Industry for shipments of 500,000 units. In Ireland, the song entered at number one on the Ireland Singles Chart, spending a total of consecutive 19 weeks on the chart. In the United Kingdom, the single faced competition in a hugely - hyped chart battle with Victoria Beckham 's single "Not Such an Innocent Girl. 
'' On the chart date of 29 September 2001, "Ca n't Get You Out of My Head '' debuted at number one on the UK Singles Chart with first week sales of 306,000 units, while "Not Such an Innocent Girl '' debuted at number six with first week sales of 35,000 units. It spent four weeks at number one, and a total of 25 weeks inside the top 40 of the chart. The song spent a record - breaking eight weeks at number one on the country 's airplay chart and became the first song to garner 3,000 radio plays in a single week. Subsequently, it became the most - played song of 2001 in the region. "Ca n't Get You Out of My Head '' was certified platinum by the British Phonographic Industry for shipments of 600,000 units in 2001. The certification was upgraded to double - platinum in 2015, denoting shipments of 1,200,000 units. In the United States, "Ca n't Get You Out of My Head '' peaked at number seven on the US Billboard Hot 100 chart, becoming Minogue 's best selling single in the region since her interpretation of "The Loco - Motion. '' Additionally, the song peaked at number one on the Hot Dance Club Songs chart, at number 23 on the Adult Top 40 chart, at number three on the Mainstream Top 40 chart, and at number eight on the Hot 100 Airplay chart. In this region, the song was certified gold by the Recording Industry Association of America for shipments of 500,000 units. The accompanying music video for "Ca n't Get You Out of My Head '' was directed by Dawn Shadforth and featured dance routines choreographed by Michael Rooney. Early in Minogue 's career, her youthful look, slim figure, and "proportionally '' large mouth attracted comments from various critics, with British red top newspaper News of the World speculating that the singer could possibly be an alien. Later, while discussing the video, Shadforth and music critic Paul Morley took this "bizarre suggestion '' into consideration to comment on Minogue as a "creative, experimental artist. '' Shadforth blocked some shots of the initial driving scene based upon similar shots of Shirley Manson piloting an airplane in her award - winning dogfight clip for Garbage 's "Special. '' The video was released on 11 August 2001. It begins with Minogue driving a De Tomaso Mangusta sports car on a futuristic bridge while singing the "la la la '' hook of the song. The next scene consists of a number of dance couples performing a dance routine dressed in black and white costumes; they are soon joined by Minogue, who is seen wearing a white tracksuit. The setting changes to a room where Minogue is seen striking various poses sporting bright crimson lipstick and a hooded white jumpsuit with a neckline plunging down to her navel. The outfit was designed by London - based fashion designer Fee Doran, under the label of Mrs Jones. Minogue then performs a synchronised dance routine with several backup dancers, who are wearing red and black suits. As the video ends, she performs a similar routine on the top of a building during the night, this time wearing a lavender halter neck dress with ribbon tile trim. Various scenes in the video show Minogue 's face "unusually '' close to the lens of the camera; thus it "subtly distorts, yet remains glamorous. '' Shadforth felt the shot gave a "sort of sense of intimacy and as you say a sort of strangeness, '' again drawing upon the suggestion of Minogue being an alien.
Similarly, Morley opined that it was "the side of Kylie that suddenly reveals itself as being experimental, she is prepared to push herself into positions and shapes that might not be conventionally attractive (...) She becomes alien Kylie as well. '' At the 2002 MTV Video Music Awards ceremony, the music video was nominated for Best Dance Video, while Michael Rooney won the award for Best Choreography. The hooded white jumpsuit Minogue wore in the music video is often considered to be one of her most iconic looks, particularly due to its deep plunging neckline. British fashion designer and Minogue 's stylist William Baker described the choice of the outfit, saying it was "pure but kind of slutty at the same time. '' The outfit was put on display at Kylie: The Exhibition, an exhibition that featured "costumes and memorabilia collected over Kylie 's career, '' held at the Victoria and Albert Museum in London, England, and at Kylie: an exhibition, a similar exhibition held at the Powerhouse Museum in Sydney, Australia. It was also included in Minogue 's official fashion photography book, Kylie / Fashion, which was released on 19 November 2012 by Thames and Hudson to celebrate Minogue 's completion of 25 years in music. The music video served as an inspiration for Morley while writing his book Words and Music: A History of Pop in the Shape of a City. In the book, Morley "turned the lonely drive she (Minogue) made in the song 's video towards a city (...) into a fictional history of music, '' referring to the opening sequence of the music video. The critic takes a ride with Minogue through a city and encounters various musicians and artists, such as the ghost of Elvis Presley, Madonna, Kraftwerk, and (Ludwig) Wittgenstein. Academics Diane Railton and Paul Weston, in their 2005 essay Naughty Girls and Red Blooded Women (Representations of Female Heterosexuality in Music Video), contrasted the music video of "Ca n't Get You Out of My Head '' with that of American singer Beyoncé 's 2003 single "Baby Boy. '' Railton and Weston concluded that while both videos focus on two singers performing seductive dance routines, Minogue is presented in a calculated manner and "is always provisional, restricted, and contingent, '' whereas Beyoncé displays a particular "primitive, feral, uncontrolled and uncontrollable '' sexuality embodied by the black female body. The two felt that the videos were representative of the racialized depictions of white and black women in colonial times and pop culture, respectively. Comedian John Bishop made a parody of the video which appeared in an episode of the ITV show The Nightly Show, which Bishop was guest presenting. The same video featured in the introduction video of Bishop 's 2017 tour "Winging It ''. On 2 August 2001, Minogue performed "Ca n't Get You Out of My Head '' at the BBC Radio 1 One Big Sunday show held in Leicester, in the United Kingdom, along with "Spinning Around; '' for the performance, she wore a black trilby hat, a sleeveless T - shirt upon which a picture of Marilyn Monroe had been printed, knee - length black boots, and trousers with open zips placed on both thighs. She performed "Ca n't Get You Out of My Head '' on 8 November 2001 at the MTV Europe Music Awards ceremony. At the 2002 Brit Awards held on 20 February 2002, Minogue performed a mash - up of "Ca n't Get You Out of My Head '' and British band New Order 's 1983 song "Blue Monday, '' conceived and produced by Stuart Crichton.
The mash - up was soon released as the B - side to "Love at First Sight, '' the third single from the album Fever. The live performance of the mash - up ranked at number 40 on The Guardian 's list of "50 Key Events in the History of Dance Music '' in 2011. The mash - up was dubbed "Ca n't Get Blue Monday Out of My Head '' upon its inclusion as the B - side to "Love at First Sight '' and as a remix on Minogue 's remix album Boombox. On 16 March 2002, Minogue performed "Ca n't Get You Out of My Head '' along with "In Your Eyes, '' the second single from Fever, on Saturday Night Live. On 4 June 2012, she sang "Ca n't Get You Out of My Head '' at the Diamond Jubilee Concert in front of Buckingham Palace, held in honour of Elizabeth II 's completion of 60 years as Queen. Minogue wore a pearl - studded black jacket and hat for the performance. The dance troupe Flawless, who had been finalists on the British television talent show Britain 's Got Talent, served as Minogue 's backup dancers. More recently, Minogue has performed a version of the song remixed by Steve Anderson. She performed the version during her "MasterCard Priceless Gig '' and other mini-concerts to promote her twelfth studio album Kiss Me Once. It was performed during her seven - song set at the 2014 Commonwealth Games closing ceremony, as the final song of her performance. She also performed the song on the French TV show Le Grand Journal along with "I Was Gonna Cancel. '' Following its release, "Ca n't Get You Out of My Head '' has been performed by Minogue during all of her concert tours as of 2013, with the exception of the Anti Tour in 2012. In 2001, the song was included in the setlist of Minogue 's On a Night Like This tour, which was launched to promote Light Years. According to Tim DiGravina from AllMusic, the performance was infused with an "almost tangible passion and fire. '' The song was included in the encore segment of the KylieFever2002 tour, which was launched to promote Fever. In 2003, Minogue performed the song at Money Ca n't Buy, a one - night - only concert held at the major entertainment venue Hammersmith Apollo in London and used to promote her ninth studio album, Body Language. During the performance, "a visual flurry of quasi-Japanese symbols '' was projected onto large digital screens set behind the stage, and dancers wearing bondage costumes carried out a "robotic '' dance routine. In 2005, she performed the song on her Showgirl: The Greatest Hits Tour, but Minogue was unable to complete the tour, as she was diagnosed with early - stage breast cancer and had to cancel the Australian leg. After undergoing treatment and recovery, Minogue resumed the concert tour in the form of Showgirl: The Homecoming Tour in 2007, and she included "Ca n't Get You Out of My Head '' on the setlist. In 2008, she performed the song on the KylieX2008 tour, which was launched to promote her tenth studio album, X. The show was split into five acts; "Ca n't Get You Out of My Head '' was featured in the first act, titled "Xlectro Static, '' in a mashup with the song "Boombox. '' In 2009, she performed the same version of the song on the For You, for Me tour, which was her first concert tour in North America. A more rock - oriented version of the song was performed during the Aphrodite: Les Folies Tour, which was launched to promote her eleventh studio album, Aphrodite. It was regarded as "seemingly inspired by the crunch of Janet Jackson 's "Black Cat.
During the performance, male back-up dancers in "S&M dog collars" danced with female back-up dancers dressed in red ballroom gowns. In 2012, Minogue promoted The Abbey Road Sessions by performing at the BBC Proms in the Park at Hyde Park, London, where she also sang the orchestral version of "Can't Get You Out of My Head." More recently, the Steve Anderson remix was performed on Minogue's Kiss Me Once Tour and Kylie Summer 2015 Tour, following a performance of "Sexercize" in the fourth section and an interlude dubbed the "Sex Segue." She also performed this version at her 2015 Royal Albert Hall show as part of her "Kylie Christmas" concert. Following its release, "Can't Get You Out of My Head" peaked at number one on the charts of every European country except Finland, as well as in Australia and New Zealand and on Canada's MuchMusic chart. The song reportedly reached number one in 40 countries worldwide and, as of 2013, has sold over seven million copies. In the United Kingdom, total sales of the song are estimated at around 1.16 million units, making it the 75th best-selling single in the United Kingdom of all time. It is Minogue's highest-selling single as of 2013 and one of the best-selling singles of all time. As of June 2015, it is the 28th best-selling single of the millennium in the United Kingdom. The song is notable for being Minogue's biggest and strongest commercial breakthrough in the United States, a market in which she had previously achieved little success. The commercial success of "Can't Get You Out of My Head" in the US is often credited with propelling its parent album Fever to similar success in the region. The album went on to peak at number three on the Billboard 200 chart and attain a platinum certification from the Recording Industry Association of America for shipments of 1,000,000 units. Fever reportedly sold over six million copies worldwide, becoming Minogue's highest-selling album as of 2013. In 2011, Rolling Stone magazine ranked "Can't Get You Out of My Head" at number 45 on its "100 Best Songs of the 2000s" list, noting that Minogue had "seduced the U.S. with this mirror-ball classic" and that "we've been hearing it at the gym ever since." NME ranked the song at number 74 on its "100 Best Tracks of the Noughties" list, saying it "encapsulated everything enviable in a well-crafted song" and heralding it as Minogue's best single. In 2012, Priya Elan of NME ranked the song at number four on his list of "The Greatest Pop Songs in History," saying: "It was unlike any song I remember hearing before." The same year, The Guardian included the song in its list of "The Best Number One Records," calling it "sleek, Arctic-blue minimalism, like an emotionally thwarted retelling of Donna Summer's 'I Feel Love.'" Also in 2012, PRS for Music, a UK copyright collection society and performance rights organisation that collects royalties on behalf of songwriters and composers, named "Can't Get You Out of My Head" the "Most Popular Song of the Decade," as it received the most airplay and live covers of the 2000s. In 2013, a survey of 700 people was conducted as part of the Manchester Science Festival to find out which song they considered the "catchiest," and "Can't Get You Out of My Head" topped the poll.
Lee Barron, in his essay The Seven Ages of Kylie Minogue: Postmodernism, Identity, and Performative Mimicry, noted that the song "further established Minogue's cultural and commercial relevance in the new millennium." He remarked that the song, "with its hypnotic 'la la la' refrain and the deceptively uncomplicated, catchily repetitive beats and synth-sound, marked yet another clearly defined image transformation from the camp-infused Light Years to an emphasis upon a cool, machine-like sexuality, a trait clearly identifiable within the promotional video for 'Can't Get You Out of My Head.'" Similarly, Everett True of The Guardian wrote that the song continued the "change in the marketing and public perception of Kylie" and her transition from the "homely girl-next-door" to "a much more flirtatious, sophisticated persona" that had started with the release of "Spinning Around" in 2000. True also felt that the success of "Can't Get You Out of My Head" was one of the motivating factors behind "manufactured" pop music gaining a "new postmodern respectability," marking a "clear shift in attitude towards pop music among the 'serious' rock critic fraternity: the idea that (manufactured, female) pop music might well be the equal of (organic, male) rock music after all, that each has their high points and their low." In 2011, Minogue's official website posted a special article marking the song's 10th anniversary on 8 September, the release date of "Can't Get You Out of My Head" in Australia. "Can't Get You Out of My Head" is recognised as Minogue's signature song. The song also garnered Minogue a number of awards. At the 2001 Top of the Pops Awards ceremony, it won the award for "Best Single." At the 2002 ARIA Music Awards ceremony, "Can't Get You Out of My Head" won the awards for "Single of the Year" and "Highest Selling Single," and Minogue won the "Outstanding Achievement Award." In 2002, it won a Dutch Edison Award for "Single of the Year." In the same year, Dennis and Davis won three awards at the 47th Ivor Novello Awards for their composition of the song: "The Ivors Dance Award," "Most Performed Work," and "International Hit of the Year." At the inaugural Premios Oye! in 2002, the song was nominated for "Song of the Year." In the 2007 episode of The Simpsons "Homerazzi," the track is heard playing in a celebrity-packed nightclub into which Homer storms. The track is also heard in the films Bridget Jones: The Edge of Reason, Hey Good Looking!, Holy Motors, and 20,000 Days on Earth. The "Can't Get Blue Monday Out of My Head" mash-up is heard in the film Layer Cake, and New Order themselves sampled the song during their 2005 Coachella set as they performed "Blue Monday". "Can't Get You Out of My Head" is also featured in the dancing game Just Dance for the Nintendo Wii, though it has yet to appear in Just Dance Unlimited or Just Dance Now.
what is the most recent nicholas sparks movie
Nicholas Sparks - wikipedia Nicholas Charles Sparks (born December 31, 1965) is an American romance novelist and screenwriter. He has published nineteen novels and two non-fiction books. Several of his novels have become international bestsellers, and eleven of his romantic-drama novels have been adapted to film, all with multimillion-dollar box office grosses. Sparks was born in Omaha, Nebraska, and wrote his first novel, The Passing, in 1985, while a student at the University of Notre Dame. His first published work came in 1990, when he co-wrote Wokini: A Lakota Journey to Happiness and Self-Understanding with Billy Mills; the book sold approximately 50,000 copies in its first year. In 1993, Sparks wrote his breakthrough novel The Notebook in his spare time while selling pharmaceuticals in Washington, D.C. Two years later, the novel was discovered by literary agent Theresa Park, who offered to represent him. It was published in October 1996 and made the New York Times best-seller list in its first week of release. Nicholas Sparks was born on December 31, 1965, in Omaha, Nebraska, to Patrick Michael Sparks, a future professor of business, and Jill Emma Marie Sparks (née Thoene), a homemaker and an optometrist's assistant. Nicholas was the second of three children, with an older brother, Michael Earl "Micah" Sparks (born 1964), and a younger sister, Danielle "Dana" Sparks (1966–2000), who died at the age of 33 from a brain tumor. Sparks has said that she was the inspiration for the main character in his novel A Walk to Remember. Sparks was raised in the Roman Catholic faith and is of German, Czech, English, and Irish ancestry. He and his ex-wife are Catholics and raised their children in the Catholic faith. His father pursued graduate studies at the University of Minnesota and the University of Southern California, one reason for the family's frequent moves. By the time Sparks was eight, he had lived in Watertown, Minnesota; Inglewood, California; Playa Del Rey, California; and his mother's hometown of Grand Island, Nebraska, where the family spent a year while his parents were separated. By 1974 his father had become a professor of business at California State University, Sacramento, and the family settled in Fair Oaks, California. The family remained there through Sparks's high school days, and in 1984 he graduated as the valedictorian of Bella Vista High School. After being offered a full track-and-field scholarship to the University of Notre Dame, Sparks accepted and enrolled, majoring in business finance. In 1988, while on spring break, he met his future wife, Cathy Cote of New Hampshire, and then concluded his early academic work by graduating from Notre Dame with honors. Sparks and Cote married on July 22, 1989, and moved to New Bern, North Carolina. Prior to those milestones, however, Sparks had begun writing in his early college years. He decided to start writing based on a simple remark his mother made when he was 19 years old, which introduced him to the possibility: "'Your problem is that you're bored. You need to find something to do.' Then she looked at me and said the words that would eventually change my life: 'Write a book.' Until that moment, I had never considered writing. Granted, I read all the time, but actually sitting down and coming up with a story on my own?... I was nineteen years old and had become an accidental author."
In 1985, while at home for the summer between his freshman and sophomore years at Notre Dame, Sparks penned his first, never-published, novel, entitled The Passing. He wrote another unpublished novel, The Royal Murders, in 1989. After college, Sparks sought work with publishers and applied to law school, but was rejected in both attempts. He then spent the next three years trying other careers, including real estate appraisal, waiting tables, selling dental products by phone, and starting his own manufacturing business. In 1990, Sparks co-wrote a book with Billy Mills entitled Wokini: A Lakota Journey to Happiness and Self-Understanding, a nonfiction book about a three-week trip Sparks and his brother took around the world, their personal growth experiences during that time, and the influence of Lakota spiritual beliefs and practices. The book was published by Feather Publishing, Random House, and Hay House, and sold approximately 50,000 copies in its first year after release. In 1992, Sparks began selling pharmaceuticals, and in 1993 he was transferred to Washington, D.C. It was there that he wrote another novel in his spare time, The Notebook. Two years later, he was discovered by literary agent Theresa Park, who picked The Notebook out of her agency's slush pile, liked it, and offered to represent him. In October 1995, Park secured a $1 million advance for The Notebook from Time Warner Book Group. The novel was published in October 1996 and made the New York Times best-seller list in its first week of release. With the success of his first novel, he moved to New Bern, North Carolina. He subsequently wrote several international bestsellers, and several of his novels have been adapted as films: Message in a Bottle (1999), A Walk to Remember (2002), The Notebook (2004), Nights in Rodanthe (2008), Dear John (2010), The Last Song (2010), The Lucky One (2012), Safe Haven (2013), The Best of Me (2014), The Longest Ride (2015), and The Choice (2016). He has also sold the screenplay adaptations of True Believer and At First Sight. His 2016 novel, Two by Two, sold about 98,000 copies during the first week after release. Eleven of Nicholas Sparks's novels have been #1 New York Times Best Sellers. Sparks and his then-wife Cathy lived together in New Bern, North Carolina, with their three sons and twin daughters until 2014. On January 6, 2015, Sparks announced that he and Cathy had amicably separated; they subsequently divorced. Sparks donated $9,000,000 for a new, all-weather tartan track at New Bern High School, along with his time as a volunteer head coach for the New Bern High School track team and a local club track team. Sparks contributes to other local and national charities as well, including the Creative Writing Program (MFA) at the University of Notre Dame, by funding scholarships, internships, and annual fellowships. In 2008, Entertainment Weekly reported that Sparks and his then-wife had donated "close to $10 million" to start a Christian, international, college-prep private school, The Epiphany School of Global Studies, which emphasizes travel and lifelong learning. He was later sued by the headmaster of this school, who accused Sparks of homophobia, racism, and anti-Semitism. In his spare time, Sparks volunteers at his local retirement home.
when is elena coming back in season 7
The Vampire Diaries (season 7) - Wikipedia The Vampire Diaries, a one-hour American supernatural drama, was renewed for a seventh season by The CW on January 11, 2015, and premiered on October 8, 2015. On March 11, 2016, The CW renewed The Vampire Diaries for an eighth and final season. This season represents a soft reboot for the series, with the exits of Nina Dobrev, Michael Trevino, and Steven R. McQueen. Damon, Bonnie, and Alaric have been spending the summer in Europe while Stefan, Caroline, and Matt make a deal with Lily and her Heretics after failing to defeat them. Stefan and Caroline evacuate the whole town to protect its residents from the Heretics, but agree that anyone who crosses the border will become the Heretics' blood meal. Alaric consults psychics about his dead wife. Damon rescues Bonnie from a car accident, but only at the last minute. When the trio get back in town, they discover the deal with the Heretics. Damon and Bonnie make their own plan and kill a Heretic. When Lily learns of the death, she takes revenge by kidnapping Caroline with the help of Enzo. In the present day, Caroline is held prisoner by vervain-soaked ropes but escapes from Enzo, only to be recaptured by the Heretic girls, who torture her. Valerie seems kind to her, putting a spell on her that keeps other vampires away. Damon and Stefan rescue Caroline from the house, entry to which is made possible by temporarily stopping the heart of Matt, who had been made the house's owner. At the last minute, Stefan is pulled out of the house, and Damon is told by Enzo that the Heretics have Elena. Lily and the Heretics perform a funeral ritual for their fellow dead Heretic and the love of Lily's life. Lily threatens Damon into admitting to the murder of the Heretic and tells him to keep away from Mystic Falls. Meanwhile, Alaric tests the Phoenix Stone on a corpse in the morgue. Damon sets off on a road trip with Bonnie and Alaric in search of leverage they can use against his mother. The trio search for and meet the last Heretic, Oscar, who defected from Lily's group. Damon asks for Oscar's help in siphoning magic from Bonnie, who has been getting horrible visions after touching the Phoenix Stone. Oscar refuses to help when he finds out about the stone, and tries to escape by knocking Alaric and Bonnie unconscious and breaking Damon's neck. The trio capture Oscar and make a deal with Lily in exchange for Elena's coffin. Meanwhile, Caroline, still being held hostage by the Heretics, learns some information about Valerie's past. Flashbacks to 1863 show Valerie meeting Stefan for the first time and how they fell in love. In fact, Valerie had been sent by Lily to watch over her son, but the two fell in love. When Valerie planned to run away while pregnant, Julian beat her and brought her back to the ship. There, Valerie killed herself, which turned her into the very first Heretic, as she had died with Lily's blood in her system. At the same time, Stefan learns a few unexpected details about his own past from Lily, including how she first met Valerie. Also, Alaric turns to Bonnie for help after coming clean that he has the Phoenix Stone, which can bring the dead back to life. Lily sets Caroline free out of admiration for Stefan's honesty with her. On Halloween, Stefan and Damon desperately hide Oscar's corpse, as his death derails their exchange plan with Lily, who has released Caroline, to Enzo's delighted relief.
For his resuscitation, they turn to Bonnie and Alaric, who admits to having kept the Phoenix Stone. However, the sorcery ritual proves hard to crack while the Heretic girls, Nora and Mary Louise, come searching for the corpse, which is hidden in the dorm. Lily instructs Nora and Mary Louise to kill a student at Whitmore College every hour unless Oscar is returned to her. So, Stefan and Caroline team up to lure Nora and Mary Louise to Whitmore's annual Heaven and Hell dance and keep an eye on the couple while Damon, Bonnie, and Alaric try to resurrect Oscar using the Phoenix Stone. Also, Valerie confides in Enzo about killing Oscar in order to prevent him from revealing the whereabouts of Lily's former lover, Julian, with whom Valerie has a past. Damon and Stefan team up with Valerie, who is searching for Julian's coffin in order to destroy it. Meanwhile, Alaric is helping Jo cope after she comes back to life. Bonnie goes to the Salvatore manor to ask Oscar, who is under Enzo's watch, about the Phoenix Stone, but is attacked by him. Enzo recovers the Phoenix Stone after Bonnie loses it while fleeing from Oscar, and then stabs Oscar when he goes berserk. When the Salvatore brothers and Valerie reach their location, she explains that the Phoenix Stone does not bring people back to life; rather, it transfers vampire souls imprisoned in the stone into a body. Just as Valerie tries to burn Julian's corpse, Lily and the Heretics arrive and rescue him. Valerie, shunned by the Heretics for her actions, tells Damon that it was Lily's plan to make Elena sleep as long as Bonnie is alive. Damon tries to murder Lily, but they escape by making the coffins explode, injuring Stefan, Damon, and Valerie. Back in the Salvatore house, Enzo, in possession of the Phoenix Stone, makes Lily choose between him and Julian; Lily chooses the latter, and a disappointed Enzo gives her the stone. With the stone, the Heretics revive Julian. Bonnie sadly informs Alaric about the stone and that Jo's body might not hold his wife's soul. Bonnie is joined by Damon, who vows to get revenge on Lily, who destroys everything that makes him happy. Lily and the Heretics throw a party at the Salvatore manor, which Stefan and Damon also attend. Bonnie arrives with Enzo, and she tries to make Lily jealous over Enzo. Meanwhile, Alaric deals with Jo and her suddenly declining health. Valerie and Caroline arrive, and Valerie explains that Jo was human and the vampire soul is not compatible with her body, meaning she is as good as dead. Back in the manor, Julian and the Salvatore brothers get into an argument that escalates into a fight until Lily intervenes, saying she will never let her children suffer. Alaric has to let Jo go, but Valerie finds out the twins are still alive. They then perform a locator spell, which seems to point nowhere, until Valerie deduces, shockingly, that the twins are inside Caroline's body. Lily, Damon, and Stefan plan to kill Julian, which requires unlinking him from Lily with the help of the Heretics. Caroline informs Stefan about her pregnancy, and he finally comes to terms with it. Enzo warns Lily of the impending threat to her life, and they share a moment. Later, Enzo is kidnapped by Matt and his troop. Valerie and the Salvatore brothers corner Julian but are interrupted by Mary Louise. Julian forces Lily to choose between killing Valerie or Damon. Unaware that the unlinking is already done, Lily stakes herself, thinking this will kill Julian.
Nora breaks up with Mary Louise because the latter sided with Julian instead of Lily. Everybody then says their goodbyes to Lily before her death, except Damon, who coldly dismisses her over Elena's sleep spell. Lily is put to rest. Stefan and Damon's search for Julian leads them to a bar outside Mystic Falls filled with dead Santas. As Julian arrives with his minions, the two sides clash, and the conflict later escalates back home. Nora and Bonnie become friends during a toy drive. Caroline is struggling with her pregnancy as her feeding urges keep her on edge; she visits her mom's grave for some peace. The fight between the Salvatore brothers and Julian finally leads to him staking Damon with the Phoenix sword, trapping his soul inside the Phoenix hell-stone. There, Damon finds himself back in his Civil War days, injured and scared. When Nora learns that all this time Bonnie was simply stalling her from saving Julian, she siphons out Bonnie's magic and stakes Stefan with the Phoenix sword as well. Damon saves Matt and Bonnie as Caroline stealthily injects him with vervain. Stefan has disturbing post-hell hallucinations of himself and Damon underwater, with the latter drowning. In his hallucinations, Damon even poses a threat to his and Caroline's relationship. (Damon kept Stefan from escaping the hell-world, so in order to escape, Stefan had to let go of him.) Soon, Henry's spirit starts haunting Damon. Tyler shows up for Caroline's baby shower. Under Julian's leadership, Mystic Falls is now a shoddy home to his vampire mates. It is revealed that Alaric will move to Dallas, and Caroline finds it hard to accept that she will eventually have to part with the twins. Matt is arrested for DUI by a police officer named Penny, who later questions his wooden weapons. Nora and Bonnie anxiously prepare to face a ruthless, legendary vampire huntress, who hunts down any vampire staked by the Phoenix sword, after Nora receives an X-marked envelope warning of her arrival. Damon meets up with an apprehensive Tyler, forcing him to take him to see Elena. As Damon opens Elena's coffin, he finds Henry. He charges at a prepared Tyler but eventually bashes him against the concrete. Henry then advises Damon to attain liberation by letting go of what holds back his true monstrous nature. An exasperated Damon attempts to finish off Henry by setting the coffin on fire; as he watches it burn, Henry reappears, telling him that he has burnt Elena for good, which leaves him devastated. Back home, Stefan assures Damon that he will always have his back; Damon is at a loss for words, unable to reveal that his reason to breathe and live is no more, because he killed Elena. Back in Whitmore, Caroline's supernatural pregnancy is taking its toll: the babies are siphoners who keep siphoning away the magic of her vampirism, causing her to desiccate from the inside out. Valerie tries swapping their magic source for a magic-filled talisman, but it is all in vain. Officer Penny learns from Matt that Mystic Falls is brimming with vampires. Bonnie, Mary Louise, and Nora search for Rayna Cruz, "The Huntress," finding her old and feeble in a psychiatric ward. Puzzled, the Heretics carry on their search when suddenly Rayna attacks Bonnie, who is saved just in time by Enzo, who kills Rayna. It is revealed that Enzo sent the "X" cards to track Rayna; he burns her to ashes, only for her to revive in her young, true self. Elsewhere, Damon returns to his road-kill antics, picks on Julian, and indulges in dangerous games with his mates.
The fight gets nastier as Julian steps up to face him. By this time, Stefan has reached the spot and begs Damon to stop, but Damon turns a deaf ear and continues to fight as if he has a death wish. After a while, Damon begs Julian for death. Stefan interferes and finally gets Damon to walk out of the ring. Eventually, Damon discloses that he burnt Elena alive to a stunned Stefan, who beats him up, abandons him, and has a breakdown. Stefan is now hell-bent on killing Julian. Damon looks on as he stands there, overtaken by remorse and bleeding profusely. Back in Mystic Falls, Valerie and Stefan finally take on Julian: Stefan catches him off guard and stakes him while Valerie keeps them cloaked. Back at the Salvatore boarding house, Damon gets an unexpected visitor, a brunette who had saved him during one of the ring fights; she comes in and kisses him. Wallowing in emptiness, Damon gazes into her eyes, accepting the bitter truth of fate, and, in a final jolt of mental agony, bends down to kiss her back. Rayna is a supernaturally gifted vampire huntress who was once compelled by Julian to kill her own father, a hunter belonging to the Five, using the Phoenix sword as her weapon. Back in 1863, hell-bent on revenge, she had impaled Beau and staked Julian into the Phoenix hell-stone. She has now freed herself and retrieved her sword from Bonnie and Damon. Caroline's rapid desiccation calls for an early Caesarean delivery, aided by all the Heretics and later Bonnie. Suddenly, Beau's wound reopens, and he is killed and burnt by Rayna. Nora and Mary Louise run for their lives. Meanwhile, Valerie and Stefan try to ease the delivery, as the twins are reluctant to come out and leave their constant source of magic. Bonnie, Alaric, and Valerie watch as Caroline successfully gives birth to the twins. Damon distracts and tries to fight Rayna but is nearly staked, just as Stefan comes to the rescue. In the process, Stefan is marked by the sword once more, forcing him to run from her. Enzo informs Damon that Elena is alive and well, protected by his men, and that Damon was hallucinating her death. Beyond relieved, Damon promises Stefan that he will make things right now that life has offered him a second chance. Alaric names the twins Josie, in honor of their mother, Josette Laughlin, and Elizabeth, in honor of Caroline's mom. Caroline moves to Dallas with Alaric and the twins. Valerie locates an anti-magic bar in New Orleans, "St. James Infirmary," where Stefan must hide, as Rayna's sword will not be able to trace him there. At the bar, Stefan meets Klaus, who eventually finds out that Stefan is there hiding from Rayna and is furious. Klaus asks Stefan to leave, and Stefan forgets his phone. Klaus gets to talk with Caroline and finally agrees to protect Stefan, and does so. While on the run, both Nora and Mary Louise are tranquilized and captured by men who work for "The Armory." Meanwhile, Bonnie and Damon meet Enzo at "The Armory," a supernatural museum-like organization desperate to lock up Rayna. Enzo works for them because they are giving him information about his family. He wants to know Stefan's location in order to lure Rayna in. Damon adamantly keeps his brother's location a secret after being warned about the organization's dubious past by Valerie. An equally adamant Enzo tranquilizes Damon and knocks out Bonnie (it is shown that, somehow, Enzo makes Bonnie immune to magic).
Damon finds himself locked in with a comatose Tyler, who will soon transition into a werewolf because it is a full moon. Bonnie takes desperate measures to save Damon from being bitten. Damon struggles with Tyler, who makes him realize that people around him are always dying in order to save him, and that it must stop. Taking this to heart, Damon advises Bonnie not to open the cell, as she could die. Undeterred, Bonnie manages to open the cell, only to have her skull bashed by Tyler, who runs away after a fight. Damon sees that she is not healing with his blood and gets her hospitalized. As he talks to an unconscious Bonnie, he says that he is deeply hurt to have put her and Stefan's lives in danger, as both of them tried to save him. This makes him think it would be best to take himself out of the equation so as to no longer burden his loved ones, and that it will all end tomorrow. This episode begins a crossover with The Originals that concludes on "A Streetcar Named Desire." Stefan and Valerie pursue a herb that can hide them from Rayna. With help from Matt, Damon finally captures Rayna and kills her multiple times so that she dies permanently. Bonnie and Enzo discover that once Rayna's lives are over, all of those she marked will die as well, and they inform Damon just in time, before she dies permanently. Damon escapes and notifies Stefan of his intention to desiccate for the next 60 years, to Stefan's disapproval. After advice from Penny, Matt confronts Stefan and tells him and all vampires to get out of Mystic Falls, otherwise he will expose the existence of vampires to the world using CCTV footage. Damon arrives at the storage unit where he plans to desiccate alongside Elena's coffin, but Bonnie shows up to let him know that she is extremely hurt by his decision. She leaves him there, and Damon finally lies down to slowly desiccate, near death, hoping he has saved his loved ones from the array of bad choices he makes that put their lives in jeopardy. Enzo, Damon, Stefan, and Caroline try to come up with plans to open the Armory while Matt and Bonnie chase after them. While doing so, they are in an accident that injures Matt badly. Alaric and Caroline decide to have the twins open the vault by siphoning off Bonnie's spell. Penny's spirit makes Matt realize that he deserves better, and he regains consciousness. The twins open the Armory, and Damon and Stefan go inside. Enzo lures Bonnie to the cabin, buying Damon and Stefan more time. Things in the tiny cabin turn nasty as Bonnie, unable to control her urges, holds a stake to Enzo's chest. While searching, Damon and Stefan finally reach the vault, which Damon decides to venture into alone. He assures Stefan that, in some way or another, everything will be okay. They share a handshake and a hug and part ways as Damon finally goes inside the vault. Enzo recalls the glorious three years as he struggles against the stake, while Damon, just in time, finds the Everlasting's body and sets it on fire, severing the link and lifting the curse off of Bonnie. Outside, Alaric tells Caroline to stay back with Stefan and parts ways on good terms, saying that no matter what, they will always be family. Stefan and Caroline get back together. An overjoyed Bonnie forgives Damon on the phone, and while he is getting out, he starts hearing Elena's voice. Enzo and Bonnie warn Damon that it is the vault playing tricks with his mind, but he keeps following the voice, and something terrifying overpowers him. Enzo rushes in to help, encountering a strange Damon.
The monster takes Enzo as well. The cast features Paul Wesley as Stefan Salvatore, Ian Somerhalder as Damon Salvatore, Kat Graham as Bonnie Bennett, Candice Accola as Caroline Forbes, and Zach Roerig as Matt Donovan. On April 6, 2015, it was announced that Nina Dobrev would be leaving after the sixth season; it was also announced that Michael Trevino would appear only as a guest in the new season. On April 11, 2015, it was announced that Steven R. McQueen would be departing the show. On July 15, 2015, it was announced that Scarlett Byrne and Teressa Liane had been hired to play Nora and Mary Louise, respectively. Nora and Mary Louise are described as extremely powerful and protective of each other, and they are the first same-sex couple in the series; it was additionally reported that Elizabeth Blackmore had been chosen to play Valerie. The trio is part of Lily Salvatore's family of Heretics. On November 5, it was announced that Leslie-Anne Huff had been hired to play Rayna, a slayer.
when was the last time the mona lisa was stolen
Vincenzo Peruggia - wikipedia Vincenzo Peruggia (October 8, 1881 – October 8, 1925) was an Italian thief, most famous for stealing the Mona Lisa on 21 August 1911. Born in Dumenza, Varese, Italy, he died in Saint-Maur-des-Fossés, France. In 1911, Peruggia perpetrated what has been described as the greatest art theft of the 20th century. Police theorized that the former Louvre worker hid inside the museum on Sunday, August 20, knowing the museum would be closed the following day. But according to Peruggia's interrogation in Florence after his arrest, he entered the museum on Monday, August 21, around 7 am, through the door where the other Louvre workers were entering. He said he wore one of the white smocks that museum employees customarily wore and was indistinguishable from the other workers. When the Salon Carré, where the Mona Lisa hung, was empty, he lifted the painting off the four iron pegs that secured it to the wall and took it to a nearby service staircase. There, he removed the protective case and frame. Some accounts report that he concealed the painting (which Leonardo painted on wood) under his smock. But Peruggia was only 5 ft 3 in (160 cm) tall, and the Mona Lisa measures approximately 21 in × 30 in (53 cm × 77 cm), so it would not have fit under a smock worn by someone his size. Instead, he said, he took off his smock, wrapped it around the painting, tucked it under his arm, and left the Louvre through the same door he had entered. Peruggia hid the painting in his apartment in Paris. Supposedly, when police arrived to search his apartment and question him, they accepted his alibi that he had been working at a different location on the day of the theft. After keeping the painting hidden in a trunk in his apartment for two years, Peruggia returned to Italy with it. He kept it in his apartment in Florence, Italy, but grew impatient, and he was finally caught when he contacted Alfredo Geri, the owner of an art gallery in Florence. Geri's story conflicts with Peruggia's, but it was clear that Peruggia expected a reward for returning the painting to what he regarded as its "homeland." Geri called in Giovanni Poggi, director of the Uffizi Gallery, who authenticated the painting. Poggi and Geri, after taking the painting for "safekeeping," informed the police, who arrested Peruggia at his hotel. After its recovery, the painting was exhibited all over Italy, with banner headlines rejoicing in its return, and was returned to the Louvre in 1913. While the painting was famous before the theft, the notoriety it received from the newspaper headlines and the large-scale police investigation helped the artwork become one of the best known in the world. Peruggia was released from jail after a short time and served in the Italian army during World War I. He later married, had one daughter, Celestina, returned to France, and continued to work as a painter-decorator, using his birth name, Pietro Peruggia. He died on October 8, 1925 (his 44th birthday), in the town of Saint-Maur-des-Fossés, France. His death was not widely reported by the media; obituaries appeared, mistakenly, only when another Vincenzo Peruggia died in Haute-Savoie in 1947. There are currently two predominant theories regarding the theft of the Mona Lisa. Peruggia said he did it for a patriotic reason: he wanted to bring the painting back for display in Italy "after it was stolen by Napoleon."
Although perhaps sincere in his motive, Peruggia may not have known that Leonardo da Vinci brought the painting with him as a gift for Francis I when he moved to France to become a painter in the king's court during the 16th century, 250 years before Napoleon's birth. Experts have questioned the patriotism motive on the grounds that, if patriotism were the true motive, Peruggia would have donated the painting to an Italian museum rather than attempting to profit from its sale. The money motive is also supported by letters that Peruggia sent to his father after the theft. On December 22, 1911, four months after the theft, he wrote that Paris was where "I will make my fortune," and that his fortune "will arrive in one shot." The following year (1912), he wrote: "I am making a vow for you to live long and enjoy the prize that your son is about to realize for you and for all our family." At trial, the court agreed, to some extent, that Peruggia had committed his crime for patriotic reasons and gave him a lenient sentence. He was sent to jail for one year and 15 days, but was hailed as a great patriot in Italy and served only seven months. Another theory emerged later: the theft may have been encouraged or masterminded by Eduardo de Valfierno, a con man who had commissioned the French art forger Yves Chaudron to make copies of the painting so he could sell them as the missing original. The copies would have gone up in value if the original were stolen. This theory is based entirely on a 1932 article by former Hearst journalist Karl Decker in The Saturday Evening Post. Decker claimed to have known "Valfierno" and heard the story from him in 1913, promising not to print it until he learned of Valfierno's death. There is no external confirmation for this tale.
who led the league in scoring in the nba this year
List of National Basketball Association annual scoring leaders - wikipedia In basketball, points are accumulated through free throws or field goals. The National Basketball Association's (NBA) scoring title is awarded to the player with the highest points-per-game average in a given season. The scoring title was originally determined by total points scored, through the 1968–69 season, after which points per game was used to determine the leader instead. Players who earned scoring titles before the 1979–80 season did not record any three-point field goals, because the three-point line was not implemented in the NBA until the start of that season. To qualify for the scoring title, a player must appear in at least 70 games (out of 82) or score at least 1,400 points; these have been the entry criteria since the 1974–75 season. Wilt Chamberlain holds the all-time records for total points scored (4,029) and points per game (50.4) in a season; both records were achieved in the 1961–62 season. He also holds the rookie record for points per game, having averaged 37.6 points in the 1959–60 season. Among active players, Kevin Durant has the highest point total (2,593) and the highest scoring average (32.0) in a season; both were achieved in the 2013–14 season. Michael Jordan has won the most scoring titles, with ten. Jordan and Chamberlain are the only players to have won seven consecutive scoring titles (this was also Chamberlain's career total). George Gervin, Allen Iverson, and Durant have each won four scoring titles, and George Mikan, Neil Johnston, and Bob McAdoo have each won three. Paul Arizin, Bob Pettit, Kareem Abdul-Jabbar, Shaquille O'Neal, Tracy McGrady, Kobe Bryant, and Russell Westbrook have each won the scoring title twice. Since the 1946–47 season, five players have won both the scoring title and the NBA championship in the same season: Joe Fulks in 1947 with the Philadelphia Warriors, Mikan from 1949 to 1950 with the Minneapolis Lakers, Abdul-Jabbar (then Lew Alcindor) in 1971 with the Milwaukee Bucks, Jordan from 1991 to 1993 and from 1996 to 1998 with the Chicago Bulls, and O'Neal in 2000 with the Los Angeles Lakers. Since the introduction of the three-point field goal, O'Neal is the only scoring leader to have made no three-pointers in his winning season. At 21 years and 197 days, Durant is the youngest scoring leader in NBA history, averaging 30.1 points in the 2009–10 season. Russell Westbrook led the league with an average of 31.6 points in the 2016–17 season, when he also became the second NBA player to average a triple-double over a season. The most recent champion is James Harden.
the first song written by george harrison on any of the beatles albums
List of songs recorded by George Harrison - wikipedia George Harrison (1943–2001) was an English musician who recorded many songs during his career. During his lifetime, he released eleven studio albums, two compilation albums, two live albums, and many singles as a solo artist. Since his death in 2001, a final studio album, two singles, two compilation albums, and four box sets have been released, as of 2017. Originally gaining fame as the lead guitarist of the Beatles, Harrison began incorporating Indian instrumentation into their music, which helped broaden the horizons of his bandmates and their English and American audiences. Before the break-up of the Beatles, Harrison released two solo albums: the mainly instrumental Wonderwall Music, the soundtrack to the 1968 film Wonderwall, and Electronic Sound in 1969, an experimental album featuring two lengthy pieces performed on a Moog synthesizer. After the Beatles' break-up, Harrison released the triple album All Things Must Pass in 1970. Backed by the commercially and critically successful single "My Sweet Lord", the album was a major commercial success and received almost universal acclaim from critics. According to Robert Rodriguez, the album introduced the world to the full range of Harrison's talents as a songwriter, hidden for years behind Lennon and McCartney. A year later, Harrison organised the Concert for Bangladesh, a charity event created to help raise money for refugees during the Bangladesh Liberation War; a live album and concert film of the event were released in 1972. Harrison's follow-up to All Things Must Pass, Living in the Material World, was released in 1973. Aided by the popular single "Give Me Love (Give Me Peace on Earth)", the album continued Harrison's run of commercial success. This success diminished, however, when his 1974 North American tour received unfavourable reviews in the music press, a critical backlash that continued with the release of his subsequent album, Dark Horse, and 1975's Extra Texture (Read All About It), his final album on the Apple record label. Harrison's first release on his Dark Horse label, Thirty Three & 1/3, was issued in 1976, along with the lead single "This Song". Widely considered a return to form, the album earned Harrison his strongest reviews since All Things Must Pass. The album was also a commercial success, outselling both Dark Horse and Extra Texture in America. The warmly received, self-titled George Harrison was released in 1979. The album featured "Here Comes the Moon", a sequel to "Here Comes the Sun", and "Not Guilty", originally recorded during sessions for the Beatles' White Album in 1968. Harrison's next album, Somewhere in England, was released in 1981 and featured "All Those Years Ago", a tribute to the recently deceased former Beatle John Lennon. A year later, Gone Troppo was released, featuring the single "Wake Up My Love" and "Circles", another unused song from the White Album era. Having become uninterested in contemporary music, Harrison refused to promote the release, and the album was not a commercial success. After Gone Troppo, Harrison took a five-year hiatus. In 1987, Harrison made his comeback with Cloud Nine. With singles including a rendition of James Ray's "Got My Mind Set on You" and the Beatles tribute "When We Was Fab", the album was a commercial success and received positive reviews from critics.
Harrison then formed the supergroup the Traveling Wilburys with Jeff Lynne, Roy Orbison, Tom Petty, and Bob Dylan. The group released two successful studio albums, in 1988 and 1990. In 1994, Harrison collaborated with former Beatles Paul McCartney and Ringo Starr for the Beatles Anthology project. Harrison died of lung cancer seven years later, in November 2001. His final album, Brainwashed, was posthumously released in 2002, having been completed by his son Dhani Harrison and Jeff Lynne.
who is the actor that plays frank gallagher
William H. Macy - wikipedia William Hall Macy Jr. (born March 13, 1950) is an American actor, screenwriter, teacher, and theatre director. His film career has been built mostly on his appearances in small, independent films, though he has also appeared in summer action films. Macy has described himself as "sort of a Middle American, WASPy, Lutheran kind of guy... Everyman". Macy has won two Emmy Awards and three Screen Actors Guild Awards, and has been nominated for an Academy Award for Best Supporting Actor. Since 2011, he has played Frank Gallagher, a main character in the Showtime adaptation of the British television series Shameless. Macy and actress Felicity Huffman have been married since 1997. Macy was born in Miami, Florida, and grew up in Georgia and Maryland. His father, William Hall Macy Sr. (1922–2007), was awarded the Distinguished Flying Cross and an Air Medal for flying a B-17 Flying Fortress bomber in World War II; he later ran a construction company in Atlanta, Georgia, and worked for Dun & Bradstreet before taking over a Cumberland, Maryland-based insurance agency when Macy was nine years old. His mother, Lois (née Overstreet; 1920–2001), was a war widow who met Macy's father after her first husband died in 1943; Macy has described her as a "Southern belle". Macy graduated from Allegany High School in Cumberland, Maryland, in 1968, going on to study veterinary medicine at Bethany College in West Virginia. By his own admission a "wretched student," he transferred to Goddard College and became involved in theatre, where he performed in ensemble productions of The Threepenny Opera, A Midsummer Night's Dream, and a wide variety of contemporary and improvisational pieces. At Goddard, he first met playwright David Mamet. After graduating from Goddard in 1971, Macy moved to Chicago, Illinois, working as a bartender to pay the rent. Within a year, he and David Mamet, among others, founded the St. Nicholas Theater Company, where Macy originated roles in a number of Mamet's plays, such as American Buffalo and The Water Engine. While in Chicago in his twenties, he did a TV commercial; he was required to join AFTRA in order to do it, and received his SAG card within a year, which for an elated Macy represented an important moment in his career. Macy spent time in Los Angeles before moving to New York City in 1980, where he had roles in over 50 off-Broadway and Broadway plays. One of his early on-screen roles was as a turtle named Socrates in the direct-to-video film The Boy Who Loved Trolls (1984), under the name W.H. Macy (so as not to be confused with the actor Bill Macy). He also had a minor role as a hospital orderly on the sitcom Kate & Allie in the fourth-season episode "General Hospital" (also as W.H. Macy). He has appeared in numerous films that Mamet wrote and/or directed, such as House of Games, Things Change, Homicide, Oleanna (reprising the role he originated in the play of the same name), Wag the Dog, State and Main, and Spartan. Macy may be best known for his lead role in Fargo, for which he was nominated for an Academy Award. The role helped boost his career and recognizability, though at the expense of nearly confining him to a narrow typecast of a worried man down on his luck. Other Macy roles of the 1990s and 2000s included Benny & Joon, Above Suspicion, Mr.
Holland's Opus, Ghosts of Mississippi, Air Force One, Boogie Nights, Pleasantville, Gus Van Sant's remake of Psycho, Happy, Texas, Mystery Men, Magnolia, Jurassic Park III, Focus, Panic, Welcome to Collinwood, Seabiscuit, The Cooler, and Sahara. Macy has also had a number of roles on television, including a guest appearance on The Unit as the President of the United States. In 2003, he won a Peabody Award and two Emmy Awards, one for starring in the lead role, and one as co-writer, of the made-for-TNT film Door to Door, a drama based on the true story of Bill Porter, a door-to-door salesman in Portland, Oregon, born with cerebral palsy. His work on ER and Sports Night has also been recognized with Emmy nominations. In a November 2003 interview with USA Today, Macy stated that he wanted to star in a big-budget action movie "for the money, for the security of a franchise like that. And I love big action-adventure movies. They're way cool." He serves as director-in-residence at the Atlantic Theater Company in New York, where he teaches a technique called Practical Aesthetics; a book describing the technique, A Practical Handbook for the Actor (ISBN 0-394-74412-8), is dedicated to Macy and Mamet. In 2007, Macy starred in Wild Hogs, a film about middle-aged men reliving their youthful days by riding their Harley-Davidson motorcycles from Cincinnati to the Pacific Coast. Despite being critically panned, with a 14% "rotten" rating on Rotten Tomatoes, it was a financial success, grossing over $168 million. In 2009, Macy completed filming on The Maiden Heist, a comedy co-starring Morgan Freeman and Christopher Walken. On June 23, 2008, the Hollywood Chamber of Commerce announced that Macy and his wife, Felicity Huffman, would each receive a star on the Hollywood Walk of Fame in the upcoming year. On January 13, 2009, Macy replaced Jeremy Piven in David Mamet's Speed-the-Plow on Broadway. Piven suddenly and unexpectedly dropped out of the play in December 2008 after experiencing health problems; Norbert Leo Butz covered the role from December 23, 2008, until Macy took over the part. Dirty Girl, which starred Macy along with Juno Temple, Milla Jovovich, Mary Steenburgen, and Tim McGraw, premiered September 12, 2010, at the Toronto International Film Festival. In summer 2010, Macy joined the Showtime pilot Shameless as the protagonist, Frank Gallagher. The project ultimately went to series, and its first season premiered on January 9, 2011. Macy has received high critical acclaim for his performance, eventually earning an Emmy nomination for Outstanding Lead Actor in a Comedy Series in 2014. In the 2012 film The Sessions, Macy played a priest who helps a man with a severe disability find personal fulfillment through a sex surrogate. He made his directorial debut with the independent drama Rudderless, which stars Billy Crudup, Felicity Huffman, Selena Gomez, and Laurence Fishburne. He is currently directing The Layover, a road-trip sex comedy starring Alexandra Daddario and Kate Upton, in which Macy will also appear. In 2015, he had a small role as Grandpa in the drama film Room, which was nominated for the Academy Award for Best Picture; the film reunited him with his Pleasantville wife, Joan Allen. Macy and actress Felicity Huffman dated on and off for 15 years before they married on September 6, 1997; they have two daughters, Sophia Grace (born December 1, 2000) and Georgia Grace (born March 14, 2002).
Macy and Huffman appeared at a rally for John Kerry in 2004. Macy also plays the ukulele and is an avid woodturner; he has appeared on the cover of the specialist magazine Fine Woodworking and was featured in an article in the April 2015 issue of American Woodturner (the publication of the American Association of Woodturners). He is a national ambassador for the United Cerebral Palsy Association. Since shooting the film Wild Hogs, Macy has picked up a strong interest in riding motorcycles.
who plays mike in yours mine and ours
Yours, Mine and Ours (1968 film) - wikipedia Yours, Mine and Ours is a 1968 film directed by Melville Shavelson and starring Lucille Ball, Henry Fonda, and Van Johnson. Before its release, it had three other working titles: The Beardsley Story, Full House, and His, Hers, and Theirs. It was based loosely on the story of Frank and Helen Beardsley, although Desilu Productions bought the rights to the story long before Helen's autobiographical book Who Gets the Drumstick? reached bookstores. Screenwriters Madelyn Pugh and Bob Carroll wrote several I Love Lucy-style stunts that in most cases had no basis in the actual lives of the Beardsley family, before Melville Shavelson and Mort Lachman took over primary writing duties. The film was commercially successful, and even the Beardsleys themselves appreciated it. Frank Beardsley is a Navy Chief Warrant Officer, recently detached from the aircraft carrier USS Enterprise and assigned as project officer for the Fresnel lens glide-slope indicator, or "meatball," that would eventually become standard equipment on all carriers. Helen North is a civilian nurse working in the dispensary at NAS Alameda, the California U.S. Navy base to which Frank is assigned. Frank meets Helen first by chance in the commissary on the base, and again when he brings his distraught teenage daughter for treatment at the dispensary, where Helen informs him that the young lady is simply growing up in a too-crowded house that lacks a mother's guidance. They immediately hit it off and go on a date, all the while shying away from admitting their respective secrets: Frank has ten children and Helen has eight, from previous marriages ended by their spouses' deaths. When each finally learns the other's secret, they initially resist their mutual attraction. But Chief Warrant Officer Darrell Harrison (Van Johnson) is determined to bring them together, so he "fixes up" each of them with a sure-to-be-incompatible blind date. Helen's date is an obstetrician (Sidney Miller) who stands a good head shorter than she does ("Darrell had a malicious sense of humor," Helen observes in voice-over); Frank's date is a "hip" girl (Louise Troy) who is not only young enough to be his daughter but also far too forward for his taste. As the final touch, Harrison makes sure that both dates take place in the same Japanese restaurant. As Harrison fully expects, Frank and Helen end up leaving the restaurant together in his car, with Frank's date sitting uncomfortably between them as they carry on about their children. Frank and Helen continue to date regularly, and eventually he invites her to dinner in his home. This nearly turns disastrous when Mike, Rusty, and Greg (Tim Matheson, Gil Rogers, and Gary Goetzman), Frank's three sons, mix hefty doses of gin, scotch, and vodka into Helen's drink. As a result, Helen's behavior turns wild and embarrassing, which Frank cannot comprehend until he catches his sons trying to conceal their laughter. "The court of inquiry is now in session!" he declares, and gets the three to own up and apologize. After this, he announces his intention to marry Helen, adding, "And nobody put anything into my drink." Most of the children fight the union at first, regarding each other and their respective stepparents with suspicion. Eventually, however, the 18 children bond into one large blended family, about to increase further: Helen becomes pregnant.
Further tension develops between young Philip North and his teacher at the parochial school he attends: his teacher insists that he use his "legal" name, which remains North even after his mother marries Beardsley. This prompts Helen and Frank to discuss cross-adopting each other's children, who (except for Philip) are aghast at the notion of "reburying" their deceased biological parents. The subsequent birth of Joseph John Beardsley finally unites the children, who agree unanimously to adoption under a common surname. The film ends with the eldest sibling, Mike Beardsley, going off to Camp Pendleton to begin his stint in the United States Marine Corps. This film departs in various ways from the actual lives of Frank and Helen Beardsley and their children. The names of Frank and Helen Beardsley and their children are real; the wedding invitation that appears midway through the film is the actual North-Beardsley wedding invitation. The career of Lieutenant Richard North, USN, is also described accurately, but briefly: specifically, he was a navigator on the crew of an A-3 Skywarrior that crashed in a routine training flight, killing all aboard, exactly as Helen describes in the film. Frank Beardsley is described correctly as a Navy warrant officer. The "loan-out" of the two youngest Beardsleys is also real, and indeed Michael, Charles ("Rusty"), and Gregory Beardsley were determined to see their father marry Helen North as a means of rectifying that situation. The movie correctly describes Frank Beardsley as applying his Navy mind-set to the daunting task of organizing such a large family (although the chart with the color-coded bathrooms and letter-coded bedrooms, "I'm Eleven Red A!", is likely a Hollywood exaggeration). Finally, Michael Beardsley did indeed serve a term in the Marines, as did Rusty. The film nevertheless differs in several respects from the account in Helen Beardsley's book Who Gets the Drumstick?, and it also takes dramatic liberties with its depiction of Navy life and flight operations aboard an aircraft carrier. As much as this film departed from the Beardsleys' actual life, the remake departed even more significantly. Henry Fonda and Lucille Ball take turns providing voice-over narration throughout, and in at least one scene Van Johnson talks directly to the camera, as does Fonda. That Lucille Ball would portray Helen Beardsley was never in doubt, but a long line of distinguished actors came under consideration, at one time or another, for the role of Frank Beardsley, including Desi Arnaz, James Stewart, Fred MacMurray, Jackie Gleason, Art Carney, and John Wayne. Henry Fonda finally accepted, and indeed asked for, the role in a telephone conversation with Robert F. Blumofe in 1967. Ball, who had worked with Fonda before in the 1942 release The Big Street, readily agreed to the casting. One account says that Ball recalled in 1961 that Desilu Productions had first bought the rights to the Beardsley-North story in 1959, even before Helen Beardsley published her biography, but this is highly unlikely because Frank and Helen Beardsley married on September 6, 1961. More likely is the story that Bob Carroll and his wife brought the story of the Beardsley family to Ball's attention after reading it in a local newspaper. However, Mr. Carroll is said to recall his wife mentioning the story in 1960, again a full year before the Beardsleys were married and probably when Dick North was still alive.
In any event, Desilu Productions did secure the rights early on, and Mr. Carroll and Madelyn Pugh immediately began writing a script. The project suffered multiple interruptions for several reasons. Work began in December 1962, after Ball's abortive attempt at a career on the Broadway stage. In 1963, work was halted after the box-office failure of her comedy effort Critic's Choice (with Bob Hope). Later, she had a falling-out with Madelyn Pugh (then known as Madelyn Pugh Martin) and Bob Carroll, precisely because their script too closely resembled an I Love Lucy television episode, and commissioned another writer, Leonard Spigelgass, to rewrite the script. Mr. Spigelgass does not seem to have succeeded in breaking free of Lucy's television work, so producer Robert Blumofe hired two more writers (Mickey Rudin and Bernie Weitzman) to make an attempt. When this failed, Blumofe hired Melville Shavelson, who eventually directed. All further rewrite efforts came to an abrupt end at the insistence of United Artists, the film's eventual distributor. It was at this point in the production cycle, in 1965, that Helen Beardsley's book Who Gets the Drumstick? was finally released. As with many film adaptations, exactly how much the book informed the final shooting script is impossible to determine.

Production began in 1967, with Henry Fonda signed to portray Frank. Mort Lachman, who had been one of Bob Hope's writers, joined the writing team at the recommendation of Shavelson. Leonard Spigelgass received no on-screen writing credit for his efforts on the film. Filming was done largely on location in Alameda and San Francisco, California, with Mike's high-school graduation filmed at Grant High School in southern California (Frank Beardsley's home, into which the blended family eventually moved, was in Carmel). The total budget is estimated at $2,500,000 ($17.5 million adjusted for inflation), including $1,700,000 for actual filming and post-production.

The film received lukewarm critical reviews, although Leonard Maltin looked favorably upon it as a "wholesome, 'family' picture" with an excellent script, and Roger Ebert gave the film 3.5 out of 4 stars and praised the performances of Ball and Fonda. It was a massive commercial success, earning nearly $26 million ($182 million adjusted for inflation) at the box office on a tight budget of $2.5 million, and bringing in over $11 million in rentals. It was thus the top-grossing film released by United Artists in 1968 and the 11th-highest-grossing film of the year. (Both figures are for the U.S. and Canada only.) This success probably came on the strength of Lucille Ball's name and of a performance that many of her fans regard as a classic, although some critics felt that Ball, then in her late 50s, was too old for the part of a middle-aged mother. Frank Beardsley commented that his family enjoyed the film as general entertainment, and acknowledged that perhaps the scriptwriters felt their screenplay was "a better story" than the truth. Not anticipating the movie's huge box-office returns, Lucille Ball failed to arrange an appropriate tax shelter and thus saw most of her share go to pay taxes. The film's success partly inspired network approval of the television series The Brady Bunch, although the original script for that series' pilot had been written well before this movie became a reality.
Among the child actors cast as the Beardsley and North children, several went on to greater success. Tim Matheson (billed here as Tim Matthieson) later played the character Otter in the more adult-oriented comedy Animal House; Morgan Brittany (billed here as Suzanne Cupito) appeared in many episodes of Dallas; Mitch Vogel appeared in The Reivers with Steve McQueen, for which he received a Golden Globe Best Supporting Actor nomination in 1970; and Tracy Nelson, daughter of actor and musician Ricky Nelson, eventually starred in the series Father Dowling Mysteries alongside Tom Bosley, who portrayed the doctor in this movie. Also, Matheson (Mike Beardsley) and future soap-opera actress Jennifer Leak (Colleen North) married in real life in 1968, although they divorced in 1971.

Yours, Mine and Ours was released on VHS by MGM/UA Home Video in 1989, 1994, and 1998. A LaserDisc version was released in 1994, featuring noise reduction applied to the film soundtrack. The film was released on DVD on March 6, 2001; while the original film was a widescreen release, the DVD was issued in full frame and therefore constitutes a pan-and-scan transfer. It was released on Blu-ray on September 13, 2016, with the original movie trailer as the sole special feature.