What is the origin of the Siberian Husky?
Siberian Husky - Wikipedia. The Siberian Husky (Russian: Сибирский хаски) is a medium-sized working dog breed that originated in Northeast Asia. The breed belongs to the Spitz genetic family. With proper training, Siberian Huskies make great home pets and sled dogs. The breed is recognizable by its thickly furred double coat, erect triangular ears, and distinctive markings. The original Siberian Huskies were bred by the Chukchi people, whose hunter-gatherer culture relied on their help. It is an active, energetic, resilient breed whose ancestors lived in the extremely cold and harsh environment of the Siberian Arctic. William Goosak, a Russian fur trader, introduced them to Nome, Alaska, during the Nome Gold Rush, initially as sled dogs. The people of Nome referred to Siberian Huskies as "Siberian Rats" due to their size of 40-50 lb (18-23 kg), versus the Malamute's size of 75-85 lb (34-39 kg). The first dogs arrived in the Americas 12,000 years ago; however, people and their dogs did not settle in the Arctic until the arrival of the Paleo-Eskimo people 8,500 years ago and then the Thule people 1,000 years ago, both originating from Siberia. The Siberian Husky was originally developed by the Chukchi people of the Chukchi Peninsula in eastern Siberia and was brought to Nome, Alaska, in 1908 for sled-dog racing. In 1989, a study was made of ancient canid remains, dated to the Late Pleistocene and early Holocene, that had been uncovered by miners decades earlier around Fairbanks, Alaska. These were identified as Canis lupus and described as "short-faced wolves". The collection was separated into specimens that looked more wolf-like (i.e., the Beringian wolf) and those that looked more dog-like; the latter were compared to the skulls of Eskimo dogs from both Greenland and Siberia, thought to be their forerunners.
In 2015, a study using a number of genetic markers indicated that the Siberian Husky, the Alaskan Malamute, and the Alaskan Husky share a close genetic relationship with one another and are related to Chukotka sled dogs from Siberia. They were separate from the two Inuit dogs, the Canadian Eskimo Dog and the Greenland Dog. In North America, the Siberian Husky and the Malamute had both maintained their Siberian lineage and had contributed significantly to the Alaskan Husky, which showed evidence of crossing with European breeds, consistent with that breed being created in post-colonial North America. Nearly all dog breeds' genetic closeness to the gray wolf is due to admixture. However, several Arctic dog breeds show a genetic closeness with the now-extinct Taymyr wolf of North Asia due to admixture: the Siberian Husky and the Greenland Dog, which are associated with high latitudes and with Arctic human populations, and to a lesser extent the Shar-Pei and the Finnish Spitz. An admixture graph of the Greenland Dog indicates a best fit of 3.5% shared material, although an ancestry proportion ranging between 1.4% and 27.3% is consistent with the data. This indicates admixture between the Taymyr wolf population and the ancestral dog population of these four high-latitude breeds. This introgression could have provided early dogs living in high latitudes with phenotypic variation beneficial for adaptation to a new and challenging environment. It also indicates that the ancestry of present-day dog breeds descends from more than one region. A Siberian Husky's coat is thicker than that of most other dog breeds, comprising two layers: a dense undercoat and a longer topcoat of short, straight guard hairs. The coat protects the dogs effectively against harsh Arctic winters and also reflects heat in the summer. It is able to withstand temperatures as low as -50 to -60 °C (-58 to -76 °F). The undercoat is often absent during shedding.
Their thick coats require weekly grooming. Siberian Huskies come in a variety of colors and patterns, usually with white paws and legs, facial markings, and tail tip. The most common coat is black and white, followed by the less common copper-red and white, grey and white, pure white, and the rare "agouti" coat, though many individuals have blondish or piebald spotting. Striking masks, spectacles, and other facial markings occur in wide variety. Merle coat patterns are not allowed. The American Kennel Club allows all coat colors from black to pure white. The American Kennel Club describes the Siberian Husky's eyes as "an almond shape, moderately spaced and set slightly obliquely." Under the AKC breed standard, the eyes may be brown, blue, or black; one of each (complete heterochromia) or parti-colored eyes are also acceptable. The parti-color does not affect the vision of the dog. Show-quality dogs are preferred to have neither pointed nor square noses. The nose is black in gray dogs, tan in black dogs, liver in copper-colored dogs, and may be light tan in white dogs. In some instances, Siberian Huskies can exhibit what is called "snow nose" or "winter nose," a condition known as hypopigmentation in animals. "Snow nose" is acceptable in the show ring. Siberian Husky tails are heavily furred; these dogs will often curl up with their tails over their faces and noses to provide additional warmth. When curled up to sleep, the Siberian Husky will cover its nose for warmth, a posture often referred to as the "Siberian Swirl". The tail should be expressive, held low when the dog is relaxed, and curved upward in a "sickle" shape when the dog is excited or interested in something. It should be symmetrical, not curved or deviated to the side; the tail can curl enough to touch the back.
The breed standard indicates that males are ideally between 21 and 24 inches (53 and 61 cm) tall at the withers and weigh between 45 and 60 pounds (20 and 27 kg). Females are smaller, growing to between 20 and 22 inches (51 and 56 cm) tall at the withers and weighing between 35 and 50 pounds (16 and 23 kg). The Husky howls rather than barks. Huskies have been described as escape artists, which can include digging under, chewing through, or even jumping over fences. Because the Siberian Husky had been raised in a family setting by the Chukchi and not left to fend for itself, the breed could be trusted with children. The ASPCA classifies the breed as good with children. It also states that they exhibit high energy indoors, have special exercise needs, and may be destructive "without proper care". Siberian Huskies have a high prey drive because the Chukchi allowed them to roam free in the summer. The dogs hunted in packs and preyed on wild cats, birds, and squirrels, but with training they can be trusted with other small animals. They would return to the Chukchi villages only when the snow returned and food became scarce. Their hunting instincts can still be found in the breed today. A 6 ft (1.83 m) fence is recommended for this breed as a pet, although some have been known to overcome fences as high as 8 ft (2.44 m). Electric pet fencing may not be effective. They need the frequent companionship of people and other dogs, and their need to feel part of a pack is very strong. A 1999 ASPCA publication gives the average life span of the Siberian Husky as 12 to 14 years. Health issues in the breed are mainly genetic, such as seizures, defects of the eye (juvenile cataracts, corneal dystrophy, canine glaucoma, and progressive retinal atrophy), and congenital laryngeal paralysis. Hip dysplasia is not often found in this breed; however, as with many medium or larger-sized canines, it can occur.
The Orthopedic Foundation for Animals currently ranks the Siberian Husky 155th out of 160 breeds at risk for hip dysplasia, with only two percent of tested Siberian Huskies showing dysplasia. Siberian Huskies used for sled racing may also be prone to other ailments, such as gastric disease, bronchitis or bronchopulmonary ailments ("ski asthma"), and gastric erosions or ulcerations. Modern Siberian Huskies registered in the US are largely the descendants of the 1930 Siberia imports and of Leonhard Seppala's dogs, particularly Togo. The limited number of registered foundational dogs has led to some discussion about their vulnerability to the founder effect. The Siberian Husky, Samoyed, and Alaskan Malamute are all breeds directly descended from the original sled dog. It is thought that the term "husky" is a corruption of the nickname "Esky", once applied to the Eskimo and subsequently to their dogs. Breeds descending from the Eskimo dog, or Qimmiq, were once found throughout the Northern Hemisphere, from Siberia to Canada, Alaska, Greenland, Labrador, and Baffin Island. With the help of Siberian Huskies, entire tribes of people were able not only to survive but to push forth into terra incognita. Admiral Robert Peary of the United States Navy was aided by this breed during his expeditions in search of the North Pole. Dogs from the Anadyr River and surrounding regions were imported into Alaska from 1908 (and for the next two decades) during the gold rush for use as sled dogs, especially in the "All-Alaska Sweepstakes," a 408-mile (657 km) dog sled race from Nome to Candle and back. Smaller, faster, and more enduring than the 100-to-120-pound (45-to-54-kg) freighting dogs then in general use, they immediately dominated the Nome Sweepstakes. Leonhard Seppala, the foremost breeder of Siberian Huskies of the time, participated in competitions from 1909 to the mid-1920s.
On February 3, 1925, Gunnar Kaasen drove the final leg of the 1925 serum run to Nome, which carried diphtheria serum over 600 miles from Nenana to Nome. The run was a group effort by several sled-dog teams and mushers, with the longest (91 miles or 146 km) and most dangerous segment covered by Leonhard Seppala. The Iditarod Trail Sled Dog Race commemorates this famous delivery. The event is also loosely depicted in the 1995 animated film Balto, named after Gunnar Kaasen's lead dog; unlike the real dog, the character Balto was portrayed as half wolf in the film. In honor of this lead dog, a bronze statue was erected in Central Park in New York City. The plaque upon it is inscribed: "Dedicated to the indomitable spirit of the sled dogs that relayed antitoxin six hundred miles over rough ice, across treacherous waters, through Arctic blizzards from Nenana to the relief of stricken Nome in the winter of 1925. Endurance. Fidelity. Intelligence." In 1930, exportation of the dogs from Siberia was halted. The same year saw recognition of the Siberian Husky by the American Kennel Club; nine years later, the breed was first registered in Canada. The United Kennel Club recognized the breed in 1938 as the "Arctic Husky," changing the name to Siberian Husky in 1991. Seppala owned a kennel in Nenana before moving to New England, where he became partners with Elizabeth Ricker. The two co-owned the Poland Spring kennel and began to race and exhibit their dogs all over the Northeast. As the breed was beginning to come to prominence, in 1933 Navy Rear Admiral Richard E. Byrd brought about 50 Siberian Huskies with him on an expedition in which he hoped to journey around the 16,000-mile coast of Antarctica. Many of the dogs were trained at Chinook Kennels in New Hampshire. This second Byrd Antarctic Expedition proved the worth of the Siberian Husky, thanks to its compact size and greater speed.
Siberian Huskies also served in the United States Army's Arctic Search and Rescue Unit of the Air Transport Command during World War II. Their popularity was sustained into the 21st century: they were ranked 16th among American Kennel Club registrants in 2012, rising to 14th place in 2013. The original sled dogs bred and kept by the Chukchi were thought to have gone extinct, but Benedict Allen, writing for Geographical magazine in 2006 after visiting the region, reported their survival. His description of the breeding practiced by the Chukchi mentions selection for obedience, endurance, amiable disposition, and sizing that enabled families to support them without undue difficulty. Siberians gained in popularity with the story of the "Great Race of Mercy," the 1925 serum run to Nome, featuring Balto and Togo. Although Balto is considered the more famous, being the dog that delivered the serum to Nome after running the final 53-mile leg, it was Togo who made the longest run of the relay, guiding his musher Leonhard Seppala on a 91-mile journey that included crossing the deadly Norton Sound to Golovin. The bronze statue of Balto displayed in New York City's Central Park since 1925 is one of the park's enduringly popular features. Several purebred Siberian Huskies portrayed Diefenbaker, the "half-wolf" companion to RCMP Constable Benton Fraser, in the CBS/Alliance Atlantis TV series Due South. In 1960, the US Army undertook a project to construct an under-the-ice facility for defense and space research, Camp Century; part of Project Iceworm, it involved a crew of more than 150, who brought with them an unofficial mascot, a Siberian Husky named Mukluk. Siberian Huskies are the mascots of the athletic teams of several schools and colleges, including St. Cloud State University (St. Cloud State Huskies, Blizzard), Northern Illinois University (Northern Illinois Huskies, Victor), the University of Connecticut (Connecticut Huskies, Jonathan), Northeastern University (Northeastern Huskies, Paws), Michigan Technological University (Michigan Tech Huskies, Blizzard), the University of Washington (Washington Huskies, Harry), and Houston Baptist University (Kiza the Husky).
When was the Emergency declared in India, and why?
The Emergency (India) - Wikipedia. In India, "the Emergency" refers to a 21-month period from 1975 to 1977 when Prime Minister Indira Gandhi had a state of emergency declared across the country. Officially issued by President Fakhruddin Ali Ahmed under Article 352 of the Constitution because of the prevailing "internal disturbance," the Emergency was in effect from 25 June 1975 until its withdrawal on 21 March 1977. The order bestowed upon the Prime Minister the authority to rule by decree, allowing elections to be suspended and civil liberties to be curbed. For much of the Emergency, most of Gandhi's political opponents were imprisoned and the press was censored. Several other human rights violations were reported from the time, including a forced mass-sterilization campaign spearheaded by Sanjay Gandhi, the Prime Minister's son. The Emergency is one of the most controversial periods of independent India's history. Documents that have surfaced in recent years indicate that Indira Gandhi had planned to impose the Emergency only temporarily, until the violence erupting in the country had subsided. Between 1967 and 1971, Prime Minister Indira Gandhi came to obtain near-absolute control over the government and the Indian National Congress party, as well as a huge majority in Parliament. The first was achieved by concentrating the central government's power within the Prime Minister's Secretariat rather than the Cabinet, whose elected members she saw as a threat and distrusted. For this she relied on her principal secretary, P.N. Haksar, a central figure in Indira's inner circle of advisors. Further, Haksar promoted the idea of a "committed bureaucracy" that required hitherto-impartial government officials to be "committed" to the ideology of the ruling party of the day.
Within the Congress, Indira ruthlessly outmanoeuvred her rivals, forcing the party to split in 1969 into the Congress (O) (comprising the old guard known as the "Syndicate") and her Congress (R). A majority of the All-India Congress Committee and of Congress MPs sided with the prime minister. Indira's party was of a different breed from the Congress of old, which had been a robust institution with traditions of internal democracy. In the Congress (R), on the other hand, members quickly realised that their progress within the ranks depended solely on their loyalty to Indira Gandhi and her family, and ostentatious displays of sycophancy became routine. In the coming years, Indira's influence was such that she could install hand-picked loyalists as chief ministers of states, rather than their being elected by the Congress legislative party. Indira's ascent was backed by her charismatic appeal among the masses, aided by her government's near-radical leftward turns. These included the July 1969 nationalisation of several major banks and the September 1970 abolition of the privy purse; these changes were often made suddenly, via ordinance, to the shock of her opponents. Subsequently, unlike the Syndicate and other opponents, Indira was seen as "standing for socialism in economics and secularism in matters of religion, as being pro-poor and for the development of the nation as a whole." The prime minister was especially adored by the disadvantaged sections: the poor, Dalits, women, and minorities. For them, she was their Indira Amma, a personification of Mother India. In the 1971 general elections, the people rallied behind Indira's populist slogan of Garibi Hatao! ("Get rid of poverty!") to award her a huge majority (352 seats out of 518). "By the margin of its victory," historian Ramachandra Guha later wrote, Congress (R) came to be known as the real Congress, "requiring no qualifying suffix." In December 1971, under her proactive war leadership, India routed arch-enemy Pakistan in a war that led to the independence of Bangladesh, formerly East Pakistan. Awarded the Bharat Ratna the next month, she was at her peak; for her biographer Inder Malhotra, "The Economist's description of her as the 'Empress of India' seemed apt." Even opposition leaders, who routinely accused her of being a dictator and of fostering a personality cult, referred to her as Durga, a Hindu goddess. In the Golaknath case, the Supreme Court held that the Constitution could not be amended by Parliament if the changes affected basic issues such as fundamental rights. To nullify this judgement, the Parliament dominated by Indira Gandhi's Congress passed the 24th Amendment in 1971. Similarly, after the government lost a Supreme Court case over withdrawing the privy purse given to erstwhile princes, Parliament passed the 26th Amendment, which gave constitutional validity to the government's abolition of the privy purse and nullified the Supreme Court's order. This judiciary-executive battle continued in the landmark Kesavananda Bharati case, where the 24th Amendment was called into question. With a wafer-thin majority of 7 to 6, the Supreme Court bench restricted Parliament's amendment power, stating that it could not be used to alter the "basic structure" of the Constitution. Subsequently, Prime Minister Gandhi made A.N. Ray, the senior-most judge among those in the minority in Kesavananda Bharati, Chief Justice of India. Ray superseded three judges more senior to him (J.M. Shelat, K.S. Hegde, and Grover), all members of the majority in Kesavananda Bharati. Indira Gandhi's tendency to control the judiciary met with severe criticism, both from the press and from political opponents such as Jayaprakash Narayan ("JP"). During 1973-75, political unrest against the Indira Gandhi government increased across the country.
(This led some Congress party leaders to demand a move towards a presidential system, with a more powerful, directly elected executive.) The most significant of the initial movements was the Nav Nirman movement in Gujarat, between December 1973 and March 1974. Student unrest against the state's education minister ultimately forced the central government to dissolve the state legislature, leading to the resignation of the chief minister, Chimanbhai Patel, and the imposition of President's rule. In the Gujarat re-elections of June 1975, Gandhi's party was defeated by the Janata alliance, formed by parties opposed to the ruling Congress. Meanwhile, there were assassination attempts on public leaders, as well as the assassination of the railway minister L.N. Mishra by a bomb. All of these indicated a growing law-and-order problem in the entire country, of which Mrs. Gandhi's advisors had warned her for months. In March-April 1974, a student agitation by the Bihar Chhatra Sangharsh Samiti against the Bihar government received the support of the Gandhian socialist Jayaprakash Narayan, referred to as JP. In April 1974, in Patna, JP called for "total revolution," asking students, peasants, and labour unions to non-violently transform Indian society. He also demanded the dissolution of the state government, but this was not accepted by the Centre. A month later, the railway employees' union, the largest union in the country, went on a nationwide railways strike. The strike was brutally suppressed by the Indira Gandhi government, which arrested thousands of employees and drove their families out of their quarters. Raj Narain, who had been defeated in the 1971 parliamentary election by Indira Gandhi, lodged cases of election fraud and use of state machinery for election purposes against her in the Allahabad High Court. Shanti Bhushan fought the case for Narain.
Indira Gandhi was also cross-examined in the High Court, the first such instance for an Indian Prime Minister. On 12 June 1975, Justice Jagmohanlal Sinha of the Allahabad High Court found the prime minister guilty of misuse of government machinery for her election campaign. The court declared her election null and void and unseated her from her seat in the Lok Sabha; it also banned her from contesting any election for an additional six years. Serious charges such as bribing voters and election malpractices were dropped, and she was held responsible for misusing government machinery: using the state police to build a dais, availing herself of the services of a government officer, Yashpal Kapoor, during the elections before he had resigned from his position, and using electricity from the state electricity department. Because the court unseated her on comparatively minor charges while acquitting her of the more serious ones, The Times described the verdict as "firing the Prime Minister for a traffic ticket". Her supporters organized mass pro-Indira demonstrations in the streets of Delhi, close to the Prime Minister's residence. Narain's persistence was praised worldwide, as it had taken over four years for Justice Sinha to pass judgement against the prime minister. Indira Gandhi challenged the High Court's decision in the Supreme Court. On 24 June 1975, Justice V.R. Krishna Iyer upheld the High Court judgement and ordered that all privileges Gandhi received as an MP be stopped and that she be debarred from voting. However, she was allowed to continue as Prime Minister pending the resolution of her appeal. JP Narayan and Morarji Desai called for daily anti-government protests.
The next day, JP organised a large rally in Delhi, where he said that a police officer must reject the orders of government if the order is immoral and unethical, as this had been Mahatma Gandhi's motto during the freedom struggle. Such a statement was taken as a sign of inciting rebellion in the country. Later that day, Indira Gandhi requested a compliant President Fakhruddin Ali Ahmed to issue a proclamation of a state of emergency. Within three hours, the electricity to all major newspapers was cut and the political opposition arrested. The proposal was sent without discussion with the Union Cabinet, which only learnt of it and ratified it the next morning. The Government cited threats to national security, as a war with Pakistan had recently concluded. Due to the war and the additional challenges of drought and the 1973 oil crisis, the economy was in poor condition. The Government claimed that the strikes and protests had paralysed the government and greatly hurt the economy of the country. In the face of massive political opposition, desertion, and disorder across the country and the party, Gandhi stuck to the advice of a few loyalists and of her younger son Sanjay Gandhi, whose own power had grown considerably over the previous few years, making him an "extra-constitutional authority". Siddhartha Shankar Ray, the Chief Minister of West Bengal, proposed to the prime minister that she impose an "internal emergency". He drafted a letter for the President to issue the proclamation on the basis of information Indira had received that "there is an imminent danger to the security of India being threatened by internal disturbances". He showed how democratic freedom could be suspended while remaining within the ambit of the Constitution. After a quick question regarding a procedural matter, President Fakhruddin Ali Ahmed declared a state of internal emergency upon the prime minister's advice on the night of 25 June 1975, just a few minutes before the clock struck midnight.
As the Constitution requires, Mrs. Gandhi advised, and President Ahmed approved, the continuation of the Emergency at every six-month interval until her decision to hold elections in 1977. Indira Gandhi devised a "20-point" economic programme to increase agricultural and industrial production, improve public services, and fight poverty and illiteracy, through "the discipline of the graveyard". In addition to the official twenty points, Sanjay Gandhi declared his own five-point programme promoting literacy, family planning, tree planting, the eradication of casteism, and the abolition of dowry. Later during the Emergency, the two projects merged into a twenty-five-point programme. Invoking Article 352 of the Indian Constitution, Gandhi granted herself extraordinary powers and launched a massive crackdown on civil liberties and political opposition. The Government used police forces across the country to place thousands of protestors and strike leaders under preventive detention. Vijayaraje Scindia, Jayaprakash Narayan, Raj Narain, Morarji Desai, Charan Singh, Jivatram Kripalani, Atal Bihari Vajpayee, Lal Krishna Advani, Arun Jaitley, Satyendra Narayan Sinha, Gayatri Devi (the dowager queen of Jaipur), and other protest leaders were immediately arrested. Organisations like the Rashtriya Swayamsevak Sangh (RSS) and Jamaat-e-Islami, along with some political parties, were banned. Numerous Communist leaders were arrested, along with many others involved with their parties. Congress leaders who dissented from the Emergency declaration and the amendment to the Constitution, such as Mohan Dharia and Chandra Shekhar, resigned their government and party positions and were arrested and placed under detention. In Tamil Nadu, the M. Karunanidhi government was dismissed and the leaders of the DMK were incarcerated. In particular, Karunanidhi's son, M.K. Stalin, was arrested under the Maintenance of Internal Security Act.
At least nine High Courts ruled that even after the declaration of an emergency, a person could challenge his detention. The Supreme Court, now under the Indira Gandhi-appointed Chief Justice A.N. Ray, overruled all of them, upholding the state's plea for the power to detain a person without informing him of the grounds for his arrest, to suspend his personal liberties, or to deprive him of his right to life, in an absolute manner (the habeas corpus case). Many political workers who were not arrested in the first wave went "underground," continuing to organise protests. Elections for the Parliament and state governments were postponed. Gandhi and her parliamentary majorities could rewrite the nation's laws, since her Congress party had the required mandate to do so: a two-thirds majority in the Parliament. And when she felt the existing laws were "too slow," she had the President issue "ordinances" (a law-making power for times of urgency, normally invoked sparingly), completely bypassing the Parliament and allowing her to rule by decree. She also had little trouble amending the Constitution to exonerate her from any culpability in her election-fraud case, imposing President's Rule in Gujarat and Tamil Nadu, where anti-Indira parties ruled (the state legislatures were thereby dissolved and suspended indefinitely), and jailing thousands of opponents. The 42nd Amendment, which brought about extensive changes to the letter and spirit of the Constitution, is one of the lasting legacies of the Emergency. In the conclusion of his Making of India's Constitution, Justice Khanna writes: "If the Indian constitution is our heritage bequeathed to us by our founding fathers, no less are we, the people of India, the trustees and custodians of the values which pulsate within its provisions! A constitution is not a parchment of paper, it is a way of life and has to be lived up to.
Eternal vigilance is the price of liberty, and in the final analysis, its only keepers are the people. Imbecility of men, history teaches us, always invites the impudence of power." One fallout of the Emergency era was the Supreme Court's ruling that, although the Constitution is amenable to amendment (as abused by Indira Gandhi), changes that tinker with its basic structure cannot be made by Parliament (see Kesavananda Bharati v. State of Kerala). In the Rajan case, P. Rajan of the Regional Engineering College, Calicut, was arrested by the police in Kerala on 1 March 1976 and tortured in custody until he died; his body was then disposed of and was never recovered. The facts of this incident came out owing to a habeas corpus suit filed in the Kerala High Court. In September 1976, Sanjay Gandhi initiated a widespread compulsory sterilization programme to limit population growth. The exact extent of Sanjay Gandhi's role in the implementation of the programme is disputed, with some writers holding Gandhi directly responsible for his authoritarianism, and others blaming the officials who implemented the programme rather than Gandhi himself. Rukhsana Sultana, a socialite known as one of Sanjay Gandhi's close associates, gained considerable notoriety for leading his sterilisation campaign in the Muslim areas of old Delhi. The campaign primarily involved getting males to undergo vasectomy. Quotas were set up, which enthusiastic supporters and government officials worked hard to achieve; there were also allegations of coercion of unwilling candidates. In 1976-1977, the programme led to 8.3 million sterilisations, most of them forced, up from 2.7 million the previous year. The bad publicity has led every government since 1977 to stress that family planning is entirely voluntary.
Criticism and accusations of the Emergency - era may be grouped as: The Emergency years were the biggest challenge to India 's commitment to democracy, which proved vulnerable to the manipulation of powerful leaders and hegemonic Parliamentary majorities. Rashtriya Swayamsevak Sangh, which was seen close to opposition leaders, and with its large organisational base was seen as having the potential of organising protests against the Government, was also banned. Police clamped down on the organisation and thousands of its workers were imprisoned. The RSS defied the ban and thousands participated in Satyagraha (peaceful protests) against the ban and against the curtailment of fundamental rights. Later, when there was no letup, the volunteers of the RSS formed underground movements for the restoration of democracy. Literature that was censored in the media was clandestinely published and distributed on a large scale and funds were collected for the movement. Networks were established between leaders of different political parties in the jail and outside for the co-ordination of the movement. The Economist described the movement as "the only non-left revolutionary force in the world ''. It said that the movement was "dominated by tens of thousands of RSS cadres, though more and more young recruits are coming ''. Talking about its objectives it said "its platform at the moment has only one plank: to bring democracy back to India ''. However, the claims of RSS leaders have been contested by some political observers like political scientist Professor DL Sheth, who is Honorary Senior Fellow of the Centre for the Study of Developing Societies. He goes on to say these organisations have never borne the brunt Indira 's oppressive regime. He argues that the RSS projects itself as the champion of anti-Emergency struggle because it was a lifeline for them. This is the only thing they have to celebrate. (1) In an article which appeared in the Hindu daily in 2000, Dr. 
Subrahmanian Swamy, who is currently an MP in the Upper House of Indian Parliament, representing the BJP, and who is known to have waged a war against Indira 's autocracy, had alleged that several Sangh leaders were hobnobbing with Indira. He added that the Sangh, at the instance of Vajpayee, even went further to sign a peace accord with Indira Gandhi. (2) With the leaders of all opposition parties and other outspoken critics of her government arrested and behind bars, the entire country was in a state of shock. Shortly after the declaration of the Emergency, the Sikh leadership convened meetings in Amritsar where they resolved to oppose the "fascist tendency of the Congress ''. The first mass protest in the country, known as the "Campaign to Save Democracy '', was organised by the Akali Dal and launched in Amritsar on 9 July. A statement to the press recalled the historic Sikh struggle for freedom under the Mughals, then under the British, and voiced concern that what had been fought for and achieved was being lost. The police were out in force for the demonstration and arrested the protestors, including the Shiromani Akali Dal and Shiromani Gurdwara Prabandhak Committee (SGPC) leaders. According to Amnesty International, 140,000 people had been arrested without trial during the twenty months of Gandhi 's Emergency. Jasjit Singh Grewal estimates that 40,000 of them came from India 's two percent Sikh minority. On 18 January 1977, Gandhi called fresh elections for March and released all political prisoners, though the Emergency officially ended only on 23 March 1977. The opposition Janata movement 's campaign warned Indians that the elections might be their last chance to choose between "democracy and dictatorship ''. In the Lok Sabha elections, held in March, Mrs. Gandhi and Sanjay both lost their Lok Sabha seats, as did all the Congress candidates in northern states such as Bihar and Uttar Pradesh. Many Congress Party loyalists deserted Mrs. Gandhi. 
The Congress was reduced to just 153 seats, 92 of which were from four of the southern states. The Janata Party 's 298 seats and its allies ' 47 seats (of a total 542) gave it a massive majority. Morarji Desai became the first non-Congress Prime Minister of India. Voters in the electorally largest state of Uttar Pradesh, historically a Congress stronghold, turned against Gandhi and her party failed to win a single seat in the state. Dhanagare says the structural reasons behind the discontent against the Government included the emergence of a strong and united opposition, disunity and weariness inside Congress, an effective underground opposition, and the ineffectiveness of Gandhi 's control of the mass media, which had lost much credibility. The structural factors allowed voters to express their grievances, notably their resentment of the emergency and its authoritarian and repressive policies. One grievance often mentioned was the ' nasbandi ' (vasectomy) campaign in rural areas. The middle classes also emphasised the curbing of freedom throughout the state and India. Meanwhile, Congress hit an all - time low in West Bengal because of the poor discipline and factionalism among Congress activists as well as the numerous defections that weakened the party. Opponents emphasised the issues of corruption in Congress and appealed to a deep desire by the voters for fresh leadership. The efforts of the Janata administration to get government officials and Congress politicians tried for Emergency - era abuses and crimes were largely unsuccessful due to a disorganised, over-complex and politically motivated process of litigation. The Thirty - eighth Amendment of the Constitution of India, which was put in place shortly after the outset of the Emergency and which among other things prohibited judicial reviews of states of emergencies and actions taken during them, also likely played a role in this lack of success. 
Although special tribunals were organised and scores of senior Congress Party and government officials arrested and charged, including Mrs. Gandhi and Sanjay Gandhi, police were unable to submit sufficient evidence for most cases, and only a few low - level officials were convicted of any abuses. The people lost interest in the hearings owing to their continuous fumbling and complex nature, and the economic and social needs of the country grew more important to them. The Emergency lasted 21 months, and its legacy remains intensely controversial. A few days after the Emergency was imposed, the Bombay edition of The Times of India carried an obituary that read: "D.E.M O'Cracy, beloved husband of T Ruth, loving father of L.I. Bertie, brother of Faith, Hope and Justice, expired on June 26 ''. A few days later censorship was imposed on newspapers. The Delhi edition of the Indian Express on 28 June carried a blank editorial, while the Financial Express reproduced in large type Rabindranath Tagore 's poem "Where the mind is without fear ''. However, the Emergency also received support from several sections. It was endorsed by social reformer Vinoba Bhave (who called it Anushasan parva, a time for discipline), industrialist J.R.D. Tata, writer Khushwant Singh, and Indira Gandhi 's close friend and Orissa Chief Minister Nandini Satpathy. However, Tata and Satpathy later regretted that they spoke in favour of the Emergency. Others have argued that Gandhi 's Twenty Point Programme increased agricultural production, manufacturing activity, exports and foreign reserves. Communal Hindu -- Muslim riots, which had resurfaced in the 1960s and 1970s, also reduced in intensity. In the book JP Movement and the Emergency, historian Bipan Chandra wrote, "Sanjay Gandhi and his cronies like Bansi Lal, Minister of Defence at the time, were keen on postponing elections and prolonging the emergency by several years... 
In October -- November 1976, an effort was made to change the basic civil libertarian structure of the Indian Constitution through the 42nd amendment to it... The most important changes were designed to strengthen the executive at the cost of the judiciary, and thus disturb the carefully crafted system of Constitutional checks and balance between the three organs of the government. ''
Blur are an English rock band formed in London in 1988. The group consists of singer / keyboardist Damon Albarn, guitarist / singer Graham Coxon, bassist Alex James and drummer Dave Rowntree. Their debut album Leisure (1991) incorporated the sounds of Madchester and shoegazing. Following a stylistic change influenced by English guitar pop groups such as the Kinks, the Beatles and XTC, Blur released Modern Life Is Rubbish (1993), Parklife (1994) and The Great Escape (1995). In the process, the band became central to the Britpop music and culture movement, and achieved mass popularity in the UK, aided by a chart battle with rivals Oasis in 1995 dubbed the "Battle of Britpop ''. In recording their 1997 self - titled album Blur, the band underwent another reinvention, showing influence from the lo - fi style of American indie rock groups. The album, including the "Song 2 '' single, brought them mainstream success in the US. Their next album, 13 (1999), saw the band members experimenting with electronic and gospel music, and featured more personal lyrics from Albarn. In May 2002, Coxon left Blur during the recording of their seventh album Think Tank (2003). Containing electronic sounds and more minimal guitar work, the album was marked by Albarn 's growing interest in hip hop and African music. After a 2003 tour without Coxon, Blur did no studio work or touring as a band, as members engaged in other projects. Blur reunited, with Coxon back in the fold, for a series of concerts in 2009. In the following years they released several singles and retrospective compilations, and toured internationally. In 2012, the group received a Brit Award for Outstanding Contribution to Music. Their first major release in twelve years, The Magic Whip (2015), became the sixth consecutive Blur studio album to top the British charts. 
Childhood friends Damon Albarn and Graham Coxon from Essex met Alex James when they began studying at London 's Goldsmiths College in 1988. Albarn was in a group named Circus, who were joined by drummer Dave Rowntree that October. Circus requested the services of Coxon after the departure of their guitarist. That December, Circus fired two members and James joined as the group 's bassist. This new group named themselves Seymour in December 1988, inspired by J.D. Salinger 's Seymour: An Introduction. The group performed live for the first time in summer 1989. In November, Food Records ' A&R representative Andy Ross attended a Seymour performance that convinced him to court the group for his label. The only concern held by Ross and Food was that they disliked the band 's name. Food drew up a list of alternatives, from which the group decided on "Blur ''. Food Records finally signed the newly christened band in March 1990. From March to July 1990, Blur toured Britain, opening for the Cramps, and testing out new songs. In October 1990, after their tour was over, Blur released the "She 's So High '' single, which reached number 48 in the UK Singles Chart. The band had trouble creating a follow - up single, but they made progress when paired with producer Stephen Street. The resulting single release, "There 's No Other Way '', became a hit, peaking at number eight. As a result of the single 's success, Blur became pop stars and were accepted into a clique of bands who frequented the Syndrome club in London dubbed "The Scene That Celebrates Itself ''. NME magazine wrote in 1991, "(Blur) are (the) acceptable pretty face of a whole clump of bands that have emerged since the whole Manchester thing started to run out of steam. '' The band 's third single, "Bang '', performed relatively disappointingly, reaching only number 24. Andy Ross and Food owner David Balfe were convinced Blur 's best course of action was to continue drawing influence from the Madchester genre. 
Blur attempted to expand their musical sound, but the recording of the group 's debut album was hindered by Albarn having to write his lyrics in the studio. Although the resulting album Leisure (1991) peaked at number seven on the UK Albums Chart, it received mixed reviews, and according to journalist John Harris, "could not shake off the odour of anti-climax ''. After discovering they were £ 60,000 in debt, Blur toured the United States in 1992 in an attempt to recoup their financial losses. The group released the single "Popscene '' to coincide with the start of the tour. Featuring "a rush of punk guitars, ' 60s pop hooks, blaring British horns, controlled fury, and postmodern humor '', "Popscene '' was a turning point for the band musically. However, upon its release it only charted at number 32. "We felt ' Popscene ' was a big departure; a very, very English record '', Albarn told the NME in 1993, "But that annoyed a lot of people... We put ourselves out on a limb to pursue this English ideal and no - one was interested. '' As a result of the single 's lacklustre performance, plans to release a single named "Never Clever '' were scrapped and work on Blur 's second album was pushed back. During the two - month American tour, the band became increasingly unhappy, often venting frustrations on each other, leading to several physical confrontations. The band members were homesick; Albarn said, "I just started to miss really simple things... I missed everything about England so I started writing songs which created an English atmosphere. '' Upon the group 's return to Britain, Blur (Albarn in particular) were upset by the success rival group Suede had achieved while they were gone. After a poor performance at a 1992 gig that featured a well - received set by Suede on the same bill, Blur were in danger of being dropped by Food. 
By that time, Blur had undergone an ideological and image shift intended to celebrate their English heritage in contrast to the popularity of American grunge bands like Nirvana. Although sceptical of Albarn 's new manifesto for Blur, Balfe gave assent for the band 's choice of Andy Partridge (of XTC) to produce their follow - up to Leisure. The sessions with Partridge proved unsatisfactory, but a chance reunion with Stephen Street resulted in him returning to produce the group. Blur completed their second album Modern Life Is Rubbish in December 1992, but Food Records said the album required more potential hit singles and asked them to return to the studio for a second time. The band complied and Albarn wrote "For Tomorrow '', which became the album 's lead single. "For Tomorrow '' was a minor success, reaching number 28 on the charts. Modern Life Is Rubbish was released in May 1993. The announcement of the album 's release included a press photo which featured Blur, dressed in a mix of mod and skinhead attire, posing alongside a mastiff with the words "British Image 1 '' spraypainted behind them. At the time, such imagery was viewed as nationalistic and racially insensitive by the British music press; to quieten concerns, Blur released the "British Image 2 '' photo, which was "a camp restaging of a pre-war aristocratic tea party ''. Modern Life Is Rubbish peaked at number 15 on the British charts, but failed to break into the US Billboard 200, selling only 19,000 copies there. The success of Parklife (1994) revived Blur 's commercial fortunes. The album 's first single, the disco - influenced "Girls & Boys '', found favour on BBC Radio 1 and peaked at number 5 on the UK Singles Chart, and number 59 in the US Billboard Hot 100 where it remains the band 's highest - charting single. Parklife entered the British charts at number one and stayed on the album charts for 90 weeks. Enthusiastically greeted by the music press -- the NME called it "a Great Pop Record... 
bigger, bolder, narkier and funnier (than Modern Life is Rubbish) '' -- Parklife is regarded as one of Britpop 's defining records. Blur won four awards at the 1995 Brit Awards, including Best Band and Best Album for Parklife. Coxon later pointed to Parklife as the moment when "(Blur) went from being regarded as an alternative, left field arty band to this amazing new pop sensation ''. Blur began working on their fourth album The Great Escape at the start of 1995. Building upon the band 's previous two albums, Albarn 's lyrics for the album consisted of several third - person narratives. James reflected, "It was all more elaborate, more orchestral, more theatrical, and the lyrics were even more twisted... It was all dysfunctional, misfit characters fucking up. '' The release of the album 's lead single "Country House '' played a part in Blur 's public rivalry with Manchester band Oasis termed the "Battle of Britpop ''. Partly due to increasing antagonisms between the groups, Blur and Oasis ultimately decided to release their new singles on the same day, an event the NME called "The British Heavyweight Championship ''. The debate over which band would top the British singles chart became a media phenomenon, and Albarn appeared on the News at Ten. At the end of the week, "Country House '' ultimately outsold Oasis ' "Roll With It '' by 274,000 copies to 216,000, becoming Blur 's first number one single. The Great Escape, which Albarn told the public was the last instalment in the band 's Life Trilogy, was released in September 1995 to ecstatic reviews. The NME hailed it as "spectacularly accomplished, sumptuous, heart - stopping and inspirational '', while Mojo argued "Blur are the very best that ' 95 Britpop has to offer and this is a most gong - worthy sound, complete with head - slicing guitars, catchy tunes and very funny words ''. Entering the UK charts at number one, the album sold nearly half a million copies in its first month of sale. 
However, opinion quickly changed and Blur found themselves largely out of favour with the media once again. Following the worldwide success of Oasis ' (What 's the Story) Morning Glory? (which went quadruple platinum in America), the media quipped "(Blur) wound up winning the battle but losing the war. '' Blur became perceived as an "inauthentic middle class pop band '' in comparison to the "working class heroes '' Oasis, which Albarn said made him feel "stupid and confused ''. Alex James later summarised, "After being the People 's Hero, Damon was the People 's Prick for a short period... basically, he was a loser -- very publicly. '' An early 1996 Q magazine interview revealed that relations between Blur members had become very strained; journalist Adrian Deevoy wrote that he found them "on the verge of a nervous breakup ''. Coxon, in particular, began to resent his bandmates: James for his playboy lifestyle, and Albarn for his control over Blur 's musical direction and public image. The guitarist struggled with drinking problems and, in a rejection of the group 's Britpop aesthetic, made a point of listening to noisy American alternative rock bands such as Pavement. In February 1996, when Coxon and James were absent for a lip - synced Blur performance broadcast on Italian television, they were replaced by a cardboard cutout and a roadie, respectively. Blur biographer Stuart Maconie later wrote that, at the time, "Blur were sewn together very awkwardly ''. Although he had previously dismissed it, Albarn grew to appreciate Coxon 's tastes in lo - fi and underground music, and recognised the need to significantly change Blur 's musical direction once again. "I can sit at my piano and write brilliant observational pop songs all day long but you 've got to move on '', he said. He subsequently approached Street, and argued for a more stripped - down sound on the band 's next record. 
Coxon, recognising his own personal need to -- as Rowntree put it -- "work this band '', wrote a letter to Albarn, describing his desire for their music "to scare people again ''. After initial sessions in London, the band left to record the rest of the album in Iceland, away from the Britpop scene. The result was Blur, the band 's fifth studio album, released in February 1997. Although the music press predicted that the lo - fi sonic experimentation would alienate Blur 's teenage girl fan - base, they generally applauded the effort. Pointing out lyrics such as "Look inside America / She 's alright '', and noting Albarn 's "obligatory nod to Beck, (and promotion of) the new Pavement album as if paid to do so '', reviewers felt the band had come to accept American values during this time -- an about - face of their attitude during the Britpop years. Despite cries of "commercial suicide '', the album and its first single, "Beetlebum '', debuted at number one in the UK. Although it could not match the sales of their previous albums in the UK, Blur became the band 's most successful album internationally. In the US, the album received strong reviews; Blur reached number 61 on the Billboard 200 and was certified gold. The album 's "Song 2 '' single was also popular on alternative radio, reaching number six on the Modern Rock chart. After it was licensed for use in various media -- such as soundtracks, advertisements and television shows -- "Song 2 '' became the most recognisable Blur song in the US. After the success of Blur, the band embarked on a nine - month world tour. In February 1998, a few months after completing the tour, Blur released Bustin ' + Dronin ' for the Japanese market. The album is a collection of Blur songs remixed by artists such as Thurston Moore, William Orbit and Moby. 
Among the tracks, the band were most impressed by Orbit 's effort and enlisted him to replace Street as producer for their next album, citing a need to approach the recording process from a fresh perspective. Released in March 1999, Blur 's sixth studio album 13 saw them drift still further away from their Britpop - era attitude and sound. Orbit 's production style allowed for more jamming, and incorporated a "variety of emotions, atmospheres, words and sounds '' into the mix. 13 was creatively dominated by Coxon, who "was simply allowed to do whatever he chose, unedited '', by Orbit. Albarn 's lyrics -- more heart - felt, personal and intimate than on previous occasions -- were reflective of his break - up with Elastica frontwoman Justine Frischmann, his partner of eight years. The album received generally favourable reviews from the press. While Q called it "a dense, fascinating, idiosyncratic and accomplished art rock album '', the NME felt it was inconsistent and "(at least) a quarter - of - an - hour too long ''. 13 debuted at the top of the UK charts, staying at that position for two weeks. The album 's lead single, the gospel - based "Tender '', opened at the second spot on the charts. After "Coffee & TV '', the first Blur single to feature Coxon on lead vocals, only reached number 11 in the UK, manager Chris Morrison demanded a chart re-run because of what he deemed a sales miscalculation. In July 1999, in celebration of their tenth anniversary, Blur released a 22 - CD limited edition box - set of their singles. The accompanying tour saw Blur play the A-sides of the 22 singles in their chronological order of release. In October 2000, the group released the compilation Blur: The Best of, which debuted at number three in the UK and received a Platinum certification for 300,000 copies shipped. 
Dismissed by the band as "the first record we have seen as product '', the track listing and release dates of Blur: The Best of were determined on the basis of market research and focus groups conducted by Blur 's record label, EMI. By this time, the group had largely disowned the upbeat pop singles from the Britpop era, and favoured the more arty, experimental work on Blur and 13. In an otherwise highly enthusiastic review of the best - of for the NME, Steve Sutherland criticised the band 's "sheer disregard '' for their earlier work; "Just because these songs embarrassed them once they started listening to broadsheet critics and retreated wounded from the big - sales battle with Oasis does n't mean that we 're morons to love them. '' After 13 and the subsequent tours in 1999 -- 2000, band members pursued other projects. Graham Coxon recorded a string of solo albums, while Damon Albarn dedicated his time to Gorillaz, the animated band he had created with Jamie Hewlett. Alex James worked with Fat Les and co-wrote several songs with Sophie Ellis - Bextor and Marianne Faithfull. Recording for Blur 's next album began in London in November 2001, but concerted work started in June 2002, with the sessions moving to Marrakech, Morocco soon after, and then to Devon back in the UK. Not long after the sessions began, Coxon left the group. Coxon said "there were no rows '' and "(the band) just recognised the feeling that we needed some time apart ''. Before the album was released, Blur released a new single, "Do n't Bomb When You Are the Bomb '' as a very limited white label release. The song is largely electronic, and was part of the band 's protest against war in the Middle East. Albarn, however, attempted to assuage fans ' fears that the album would be electronic by providing reassurances that the band 's new album would be "a rockin ' record '', and also said that it has "a lot of finely crafted pop songs ''. 
Early in 2002, Blur recorded a song that would be played by European Space Agency 's Beagle 2 lander once it touched down; however, attempts to locate the probe after it landed on Mars were fruitless. Think Tank, released in May 2003, was filled with atmospheric, brooding electronic sounds, featuring simpler guitar lines played by Albarn, and largely relying on other instruments to replace Coxon. The guitarist 's absence also meant that Think Tank was almost entirely written by Albarn. Its sound was seen as a testament to Albarn 's increasing interest in African and Middle Eastern music, and to his complete control over the group 's creative direction. Think Tank was yet another UK number one and managed Blur 's highest US position of number 56. It was also nominated for best album at the 2004 Brit Awards. The band did a successful tour in 2003, with former Verve guitarist Simon Tong filling in for Coxon. In 2005, XFM News reported that Blur would record an EP, and denied that they would hire a replacement guitarist for Coxon. There were also some aborted recordings made in 2005. Overall the band kept a low profile and did no studio or touring work as a three - piece. After Coxon significantly thawed on the subject of rejoining Blur, in 2007 band members announced that they would reunite, and that they intended to record together first in August, with the date later being pushed back to September, then October. Though the band members finally met up in October, they posted on their website that they had only "met up for an enjoyable lunch '' and that there were no "other music plans for Blur ''. In December 2008, Blur announced they would reunite for a concert at London 's Hyde Park on 3 July 2009. Days later, the band added a second date, for 2 July. A series of June preview shows were also announced, ending at Manchester Evening News arena on the 26th. 
All the shows were well received; The Guardian 's music critic Alexis Petridis gave their performance at Goldsmiths college a full five stars, and wrote "Blur 's music seems to have potentiated by the passing of years... they sound both more frenetic and punky and more nuanced and exploratory than they did at the height of their fame ''. Blur headlined the Glastonbury Festival on 28 June, where they played for the first time since their headline slot in 1998. Reviews of the Glastonbury performance were enthusiastic; The Guardian called them "the best Glastonbury headliners in an age ''. The band released their second greatest hits album Midlife: A Beginner 's Guide to Blur in June 2009. Blur also headlined at other summer festivals, including Oxegen 2009 in Ireland, and the Scottish outdoor show of T in the Park. Their T in the Park headline slot was put in jeopardy after Graham Coxon was admitted to hospital with food poisoning. Ultimately, the band did play, albeit an hour and a half after they were scheduled to appear. After the completion of the reunion dates, James said the group had not discussed further plans, and Albarn told Q soon after that Blur had no intention of recording or touring again. He said, "I just ca n't do it anymore '', and explained that the main motivation for participating in the reunion was to repair his relationship with Coxon, which he succeeded at. Coxon also said that no further Blur activity was planned, telling NME.com in September, "We 're in touch and we say ' Wotcha ' and all that but nothing has been mentioned about any more shows or anything else ''. In January 2010, No Distance Left to Run, a documentary about the band, was released in cinemas and a month later on DVD. In April 2010, Blur released their first new recording since 2003, "Fool 's Day '', for the Record Store Day event, as a vinyl record limited to 1000 copies; it was later made available as a free download on their website. 
No Distance Left to Run was nominated as Best Long Form Music Video for the 53rd Grammy Awards, Blur 's first - ever Grammy nomination. In February 2012, Blur were awarded the Outstanding Contribution to Music award at the 2012 Brit Awards. Later that month, Albarn and Coxon premiered a new track together live, "Under the Westway ''. In April, the band announced that a box - set entitled Blur 21 -- containing all seven Blur studio albums, four discs of unreleased rarities and three DVDs -- would be released in July. Blur had also entered the studio early that year to record material for a new album, but in May producer William Orbit told the NME that Albarn had halted recording. Blur 's official Twitter and Facebook pages announced that the band would release two singles "The Puritan '' and "Under the Westway '' on 2 July. That August, Blur headlined a show at Hyde Park for the 2012 Summer Olympics closing ceremony. In 2013, the band performed at the Rock Werchter in Belgium, the Spanish and Portuguese dates of the Primavera Sound festival, and the Coachella Valley Music and Arts Festival in the United States. In April 2015, Blur released their first studio album in twelve years, The Magic Whip. Conceived over five days in Hong Kong after a cancelled Japan tour in 2013, the album was inspired by the city as well. "There 's nothing pastoral about it '', Albarn said, "it 's very urban ''. The Magic Whip also marks the return of Coxon, absent on all but one track on Think Tank, and Stephen Street, Blur 's producer during the Britpop era. Upon its release, the record was greeted with applause both by the music press and the mainstream media. Awarding the album a full five stars, The Daily Telegraph called The Magic Whip "a triumphant comeback that retains the band 's core identity while allowing ideas they 'd fermented separately over the past decade to infuse their sound with mature and peculiar new flavour combinations ''. 
The NME concurred, saying Blur were "a reunited band making music to rival their very best ''. It was also a commercial success, becoming the sixth consecutive Blur LP since Parklife (1994) to top the British charts. The Guardian also noted that at times during its first week of release, The Magic Whip sold "more than the rest of the top five combined ''. That December New World Towers, a documentary on the recording process of The Magic Whip, was released in select British theatres.
The history of Birmingham in England spans 1400 years of growth, during which time it has evolved from a small 7th century Anglo Saxon hamlet on the edge of the Forest of Arden at the fringe of early Mercia to become a major city through a combination of immigration, innovation and civic pride that helped to bring about major social and economic reforms and to create the Industrial Revolution, inspiring the growth of similar cities across the world. The last 200 years have seen Birmingham rise from a market town into the fastest - growing city of the 19th century, spurred on by a combination of civic investment, scientific achievement, commercial innovation and by a steady influx of migrant workers into its suburbs. By the 20th century Birmingham had become the metropolitan hub of the United Kingdom 's manufacturing and automotive industries, having earned itself a reputation first as a city of canals, then of cars, and most recently as a major European convention and shopping destination. By the beginning of the 21st century, Birmingham lay at the heart of a major post-industrial metropolis surrounded by significant educational, manufacturing, shopping, sporting and conferencing facilities. The oldest human artefact found within Birmingham is the Saltley Handaxe: a 500,000 - year - old brown quartzite hand axe about 100 millimetres (3.9 in) long, discovered in the gravels of the River Rea at Saltley in 1892. This provided the first evidence of lower paleolithic human habitation of the English Midlands, an area previously thought to have been sterile and uninhabitable before the end of the last glacial period. 
Similarly aged axes have since also been found in Erdington and Edgbaston, and bioarchaeological evidence from boreholes in Quinton, Nechells and Washwood Heath suggests that the climate and vegetation of Birmingham during this interglacial period were very similar to those of today. The area became uninhabitable with the advancing glaciation of the last ice age, and the next evidence of human habitation within Birmingham dates from the mesolithic period. A 10,400 - year - old settlement -- the oldest within the city -- was excavated in the Digbeth area in 2009, with evidence that hunter - gatherers with basic flint tools had cleared an area of forest by burning. Flint tools from the later mesolithic period -- between 8000 and 6000 years ago -- have been found near streams in the city, though these probably represent little more than hunting parties or overnight camps. The oldest man - made structures in the city date from the Neolithic era, including a possible cursus identified by aerial photography near Mere Green, and the surviving barrow at Kingstanding. Neolithic axes found across Birmingham include examples made of stone from Cumbria, Leicestershire, North Wales and Cornwall, suggesting the area had extensive trading links at the time. Stone axes used by the area 's first farmers over 5,000 years ago have been found within the city and the first bronze axes date from around 4,000 years ago. Pottery dating back to 2700BC has been found in Bournville. The most common prehistoric sites in Birmingham are burnt mounds -- a form characteristic of upland areas and possibly formed by the heating of stones for cooking or steam bathing purposes. Forty to fifty have been found in the Birmingham area, all but one datable to the period 1700 -- 1000 BC. Burnt mound sites such as that discovered in Bournville also show evidence of wider settlements, with clearances in the woodland and grazing animals. 
Possible bronze age settlements with later iron age farmsteads have been discovered at Langley Mill Farm in Sutton Coldfield. Further evidence of iron age settlement has been found at Berry Mound, a hill fort located in the Bromsgrove district of Worcestershire, near Shirley. In Roman times a large military fort and marching camp, Metchley Fort, existed on the site of the present Queen Elizabeth Hospital near what is now Edgbaston in southern Birmingham. The fort was constructed soon after the Roman invasion of Britain in AD 43. In AD 70, the fort was abandoned only to be reoccupied a few years later before being abandoned again in AD 120. Remains have also been found of a civilian settlement, or vicus, alongside the Roman fort. Excavations at Parson 's Hill in Kings Norton and at Mere Green have revealed Roman kiln sites. Although no archaeological evidence has been found, the presence of the Old English prefix wīc - in Witton (wīc - tūn) suggests that it may have been the site of a significant Romano - British vicus or settlement, which would have been adjacent to the crossing of the River Tame by Icknield Street at Perry Barr. Roman military roads have been identified converging on the Birmingham area from Letocetum (Wall, near Lichfield) in the north; from Salinae (Droitwich) in the south west; from Alauna (Alcester) in the south, and from Pennocrucium (Penkridge) in the north west. In many places the courses of these roads -- including the points where they met -- have been lost as they pass through the urban area, though a section of the route from Wall is well preserved as it passes through Sutton Park. Roads are also likely to have led via the known Roman settlements at Castle Bromwich and Grimstock Hill near Coleshill to Manduessedum (Mancetter), and to the fort at Greensforge near Kinver. 
The existence of straight road alignments coinciding with early parish boundaries suggests another Roman road may have passed through Birmingham from east to west through Ladywood, Highgate and Sparkbrook, along the line of Ladywood Road, Belgrave Road and parts of Warwick Road. The Roman routes from Wall and Alcester were together named Icknield Street during the later Medieval period, though the implication that they were viewed as a single route by the Romans may be misleading, and it is possible that the road from Droitwich was originally the more important of the two southern routes. Archaeological evidence from the Anglo - Saxon era in Birmingham is slight, and documentary records of the era are limited to seven Anglo - Saxon charters detailing the outlying areas of King 's Norton, Yardley, Duddeston and Rednal. Place name evidence, however, suggests that it was during this period that many of the settlements that were later to make up the city, including Birmingham itself, were established. The name "Birmingham '' comes from the Old English Beormingahām, meaning the home or settlement of the Beormingas -- a tribe or clan whose name literally means "Beorma 's people '' and which may have formed an early unit of Anglo - Saxon administration. Beorma, after whom the tribe was named, could have been its leader at the time of the Anglo - Saxon settlement, a shared ancestor, or a mythical tribal figurehead. Place names ending in - ingahām are characteristic of primary settlements established during the early phases of Anglo - Saxon colonisation of an area, suggesting that Birmingham was probably in existence by the early 7th century at the latest. Surrounding settlements with names ending in - tūn (farm), - lēah (woodland clearing), - worð (enclosure) and - field (open ground) are likely to be secondary settlements created by the later expansion of the Anglo - Saxon population, in some cases possibly on earlier British sites. 
The site of Anglo - Saxon and Domesday Birmingham is not known. The traditional view -- that it was a village based around the crossing of the River Rea at Deritend, with a village green on the site that became the Bull Ring -- has now largely been discredited, and not a single piece of Anglo - Saxon material was found during the extensive archaeological excavations that preceded the redevelopment of the Bull Ring in 2000. Other locations have been suggested, including the Broad Street area; Hockley in the Jewellery Quarter; or the site of the Priory of St Thomas of Canterbury, now occupied by Old Square. Alternatively, early Birmingham may have been an area of scattered farmsteads with no central nucleated village, or the name may originally have referred to the wider area of the Beormingas ' tribal homeland, much larger than the later manor and parish and including many surrounding settlements. Analysis of the pre-Norman linkages between parishes suggests that such an area could have extended from West Bromwich to Castle Bromwich, and from the southern boundaries of Northfield and King 's Norton to the northern boundaries of Sutton Coldfield. During the early Anglo - Saxon period the area of the modern city lay across a frontier separating two peoples. Birmingham itself and the parishes in the centre and north of the area were probably colonised by the Tomsaete or "Tame - dwellers '', Anglian tribes who migrated along the valleys of the Trent and the Tame from the Humber Estuary and later formed the kingdom of Mercia. Parishes in the south of the current city such as Northfield and King 's Norton were colonised during a later period by the Hwicce, a Saxon tribe whose migration north through the valleys of the Severn and Avon followed the West Saxons ' victory over the Britons at the Battle of Dyrham in 577. 
The exact boundary between the two groups may not have been precisely defined, but is likely to be indicated by the boundaries of the later dioceses of Lichfield and Worcester established after the 7th century conversion of Mercia to Christianity. The late 7th century saw the kingdom of Mercia expand, absorbing the Hwicce by the late 8th century and eventually coming to dominate most of England, but the growth of Viking power in the later 9th century saw eastern Mercia fall to the Danelaw, while the western part, including the Birmingham area, came to be dominated by Wessex. During the 10th century Edward the Elder of Wessex reorganised western Mercia for defensive purposes into shires based around the fortified burhs established by his sister Æthelflæd. The Birmingham area again found itself a border region, with the parish of Birmingham forming part of the Coleshill Hundred of newly created Warwickshire, but other areas of the modern city falling within Staffordshire and Worcestershire. The first surviving documentary record of Birmingham is in the Domesday Book of 1086, where it is recorded as the small manor of Bermingeham, worth only 20 shillings. At the time of the Domesday survey, Birmingham was far smaller than other villages in the area, most notably Aston. Other manors recorded in the Domesday survey were Sutton, Erdington, Edgbaston, Selly, Northfield, Tessall and Rednal. A settlement called "Machitone '' was also mentioned in the survey; this was later to become Sheldon. The Manor of Birmingham was located at the foot of the eastern side of the Keuper Sandstone ridge. At the time of the Domesday survey it would have been a small house, but it later developed into a timber - framed house surrounded by a moat fed by the River Rea. 
The transformation of Birmingham from the purely rural manor recorded in the Domesday Book started decisively in 1166, with the purchase by the Lord of the Manor Peter de Birmingham of a royal charter from Henry II permitting him to hold a weekly market "at his castle at Birmingham '' and to charge tolls on the market 's traffic. This was one of the earliest of the two thousand such charters that would be granted in England in the two centuries up to 1350, and may have recognised a market that was already taking place, as lawsuits of 1285 and 1308 both upheld the claim that the Birmingham market had been held without interruption since before the Norman conquest. Its significance remains, however, as Peter followed the investment with the deliberate creation of a planned market town within his demesne, or manorial estate. This era saw the laying out of the triangular marketplace that became the Bull Ring; the selling of burgage plots on the surrounding frontages granting privileges in the market and freedom from tolls; the diversion of local trade routes towards the new site and its associated crossing of the River Rea at Deritend; the rebuilding of the Birmingham Manor House in stone and probably the first establishment of the parish church of St Martin in the Bull Ring. By the time Peter 's son William de Birmingham sought confirmation of the market 's status from Richard I twenty three years later, its location was no longer "his castle at Birmingham '', but "the town of Birmingham ''. The following period saw the new town expand rapidly in highly favourable economic circumstances. The Birmingham market was the earliest to be established on the Birmingham Plateau -- an area which accounted for most of the doubling or tripling of the population of Warwickshire between 1086 and 1348 as population growth nationally encouraged the settlement and cultivation of previously marginal land. 
Surviving documents record the widespread enclosure of wasteland and clearance of woodland in King 's Norton, Yardley, Perry Barr and Erdington during the 13th century, a period over which the cultivated area of the manor of Bordesley also increased twofold. Demand for agricultural trade was further fuelled by the increasing requirement for rents to be paid in cash rather than labour, leading tenant farmers to sell more of their produce. It would be almost a century before markets at Solihull, Halesowen and Sutton Coldfield provided the Birmingham market with any local competition, and by then the success of Birmingham itself provided the model, with tenure at Solihull explicitly being granted "according to the liberties and customs merchant of the market of Birmingham ''. Within a century of the 1166 charter Birmingham had grown into a prosperous town of craftsmen and merchants. The signing of Letters patent indicates visits by the King to Birmingham in 1189, 1235 and 1237, and two burgesses were summoned to represent the town in Parliament in 1275. This event was not repeated until the 19th century, but established Birmingham as a town of comparable significance to older Warwickshire towns such as Alcester, Coleshill, Stratford - upon - Avon and Tamworth. Fifty years later the lay subsidy rolls of 1327 and 1332 show Birmingham to have overtaken all of these towns to become the third largest in the county, behind only Warwick itself and Coventry -- at the time the fourth largest urban centre in England. Birmingham 's market is likely to have remained primarily one for agricultural produce throughout the medieval period. The land of the Birmingham Plateau, particularly the unenclosed area of the manor of Birmingham to the west of the town, was more suited to pastoral than arable agriculture, and excavated animal bones indicate that cattle were the dominant livestock, with some sheep but very few pigs. 
References in 1285 and 1306 to stolen cattle being sold in the town suggest that the size of the trade at this time was sufficient for such sales to go unnoticed. Trade through Birmingham diversified as a merchant class arose, however: mercers and purveyors are mentioned in early 13th century deeds, and a legal dispute involving traders from Wednesbury in 1403 reveals that they were dealing in iron, linen, wool, brass and steel as well as cattle in the town. By the 14th century Birmingham seems to have been established as a particular centre of the wool trade. Two Birmingham merchants represented Warwickshire at the council held in York in 1322 to discuss the standardisation of wool staples, and others attended the Westminster wool merchants assemblies of 1340, 1342 and 1343, a period when at least one Birmingham merchant was trading considerable amounts of wool with continental Europe. Aulnage records for 1397 give some indication of the size of Birmingham 's textile trade at the time, the 44 broadcloths sold being a tiny fraction of the 3,000 sold in the major textile centre of Coventry, but making up almost a third of the trade of the rest of Warwickshire. Birmingham was also situated on several significant overland trade routes. By the end of the 13th century the town was an important transit point for the trade in cattle along drovers ' roads from Wales to Coventry and the South East of England. Exchequer accounts for 1340 record wine imported through Bristol being unloaded at Worcester and transported by cart to Birmingham and Lichfield. This route from Droitwich is shown on the Gough Map of the mid-14th century and described by the contemporary Ranulf Higdon as forming part of one of the "Four Great Royal Roads '' of England, running from Worcester to the River Tyne. 
The de Birmingham family were active in promoting the market, whose tolls would have formed an important part of the income from the manor of Birmingham, by then the most valuable of their estates. The establishment of a rival market at Deritend in the neighbouring parish of Aston had led them to acquire the hamlet by 1270, and the family is recorded enforcing the payment of tolls by traders from King 's Norton, Bromsgrove, Wednesbury, and Tipton in 1263, 1308 and 1403. In 1250 William de Birmingham gained permission to hold a fair for three days about Ascensiontide. By 1400 a second fair was being held at Michaelmas and in 1439 the then lord negotiated for the town to be free of the presence of royal purveyors. Archaeological evidence of small - scale industries in Birmingham appears from as early as the 12th century, and the first documentary evidence of craftsmen in the town comes from 1232, when a group of burgesses negotiating to be released from their obligation to help with the Lord 's haymaking are listed as including a smith, a tailor and four weavers. Manufacturing is likely to have been stimulated by the existence of the market, which would have provided a source of raw materials such as hides and wool, as well as a demand for goods from prosperous merchants in the town and from visitors from the countryside selling produce. By 1332 the number of craftsmen in Birmingham was similar to that of other Warwickshire towns associated with industry such as Tamworth, Henley - in - Arden, Stratford - upon - Avon, and Alcester. Analysis of craftsmen 's names in medieval records suggests that the major industries of medieval Birmingham were textiles, leather working and iron working, with archaeological evidence also suggesting the presence of pottery, tile manufacture and probably the working of bone and horn. By the 13th century there were tanning pits in use in Edgbaston Street; and hemp and flax were being used for making rope, canvas and linen. 
Kilns producing the distinctive local Deritend Ware pottery, examples of which can be seen in The Birmingham History Galleries, existed in the 12th and 13th centuries, and skinners, tanners and saddlers are recorded in the 14th century. The presence of slag and hearth bloom in pits excavated behind Park Street also suggests the early presence of working in iron. The borough rentals of 1296 provide evidence of at least four forges in the town, four smiths are mentioned on a poll tax return of 1379 and seven more are documented in the following century. Although fifteen to twenty weavers, dyers and fullers have been identified in Birmingham up to 1347, this is not a significantly greater number of cloth - workers than that found in surrounding villages and at least some of the cloth sold on the Birmingham market had rural origins. This was the first local industry to benefit from mechanisation, however, and nearly a dozen fulling mills existed in the Birmingham area by the end of the 14th century, many converted from corn mills, but including one at Holford near Perry Barr that was purpose - built in 1358. While most manufactured goods would have been produced for a local market, there is some evidence that Birmingham was already a specialised and widely recognised centre of the jewellery trade during the medieval period. An inventory of the personal possessions of the Master of the Knights Templar in England at the time of their suppression in 1308 includes twenty two Birmingham Pieces: small, high value items, possibly jewellery or metal ornaments, that were sufficiently well known to be referred to without explanation as far away as London. In 1343 three Birmingham men were punished for selling base metal items while asserting they were silver, and there is documentary evidence of goldsmiths in the town in 1384 and 1460 -- a trade that could not have been supported purely through local demand in a town of Birmingham 's size. 
The growth of the urban economy of 13th century and early 14th century Birmingham was reflected in the development of its institutions. St Martin in the Bull Ring was rebuilt on a lavish scale around 1250 with two aisles, a clerestory and a 200 ft (61 m) high spire, and two chantries were endowed in the church by wealthy local merchants in 1330 and 1347. The Priory of St Thomas of Canterbury is first recorded in 1286, and by 1310 had received six major endowments of land totalling 60 acres (240,000 m²) and 27 smaller endowments. The Priory was reformed in 1344 after criticism by the Bishop of Lichfield and a chantry was established in its chapel. St John 's Chapel, Deritend was established around 1380 as a chapel of ease of the parish church of Aston, with its priest supported by the associated Guild of St John, Deritend, which also maintained a school. The parish of Birmingham gained its own religious guild with the foundation in 1392 of the Guild of the Holy Cross, which provided a social and political focus for the elite of the town as well as supporting chaplains, almshouses, a midwife, a clock, the bridge over the River Rea and "divers ffoule and daungerous high wayes ''. Economic success also saw the town expand. New Street is first recorded in 1296, Moor Street was created in the late 13th century and Park Street in the early 14th century. Population growth was driven by immigrants attracted by the opportunity of establishing themselves as traders, free from duties of agricultural labour. Analysis of rentals suggests that two - thirds came from within 10 miles (16 km) of the town, but others came from further afield, including Wales, Oxfordshire, Lincolnshire, Hampshire and even Paris. There is also some evidence that the town may have had a Jewish population before the Edict of Expulsion of 1290. 
Although medieval Birmingham was never incorporated as a self - governing entity independent of its manor, the Borough -- the built - up area -- was governed separately from the Foreign -- the open agricultural area to the west -- from at least 1250. The burgesses of the town elected the two bailiffs; the "commonalty of the town '' is first recorded in 1296 and a pavage grant of 1318 was made out not to the Lord of the Manor but "to the bailiffs and good men of the town of Bermyngeham ''. The town thus remained free of the restrictive trade guilds of fully chartered boroughs, but was free also from the constraints of a strict manorial regime. If the 12th and 13th centuries were a period of growth for the town, however, this ceased during the 14th century in the face of a series of calamities. The manorial court records of nearby Halesowen record a "Great Fire of Birmingham '' between 1281 and 1313, an event possibly reflected in late 13th century pits containing large quantities of charcoal and charred and burnt pottery found beneath Moor Street. Famines of 1315 to 1322 and the Black Death of 1348 -- 50 halted the growth in population and the decline in archaeological evidence of pottery from the 14th and 15th centuries may indicate a prolonged period of economic hardship. The Tudor and Stuart eras marked a period of transition for Birmingham. In the 1520s the town was the third largest in Warwickshire with a population of about 1,000 -- a situation little changed from that two centuries earlier. Despite a series of plagues throughout the 17th century, by 1700 Birmingham 's population had increased fifteenfold and the town was the fifth - largest in England and Wales, with a nationally important economy based on the expanding and diversifying metal trades, and a reputation for political and religious radicalism firmly established by its role in the English Civil War. 
The principal institutions of medieval Birmingham collapsed within the space of eleven years between 1536 and 1547. The Priory of St Thomas was suppressed and its property sold at the Dissolution of the Monasteries in 1536, with the Guild of the Holy Cross, the Guild of St John and their associated chantries also being disbanded in 1547. Most significantly, the de Birmingham family lost possession of the manor of Birmingham in 1536, probably as a result of a feud between Edward de Birmingham and John Sutton, 3rd Baron Dudley. After brief periods in the possession of the Crown and the Duke of Northumberland, the manor was sold in 1555 to Thomas Marrow of Berkswell. Birmingham would never again have a resident Lord of the Manor, and the district as a whole was to remain an area of weak lordship throughout the following centuries. With local government remaining essentially manorial, the townspeople 's resulting high degree of economic and social freedom was to be a highly significant factor in Birmingham 's subsequent development. The period was also one of significant cultural development. Although the dissolution of the Guild of St John in Deritend saw the closure of its associated school, the former hall of the Guild of the Holy Cross in New Street, together with property worth £ 21 per year from its estate, was saved for the establishment of King Edward 's Free Grammar School in 1552. John Rogers, born in Deritend in 1500, became the town 's first notable literary figure when he compiled and partially translated the Matthew Bible, the first complete authorised edition of the Bible to appear in the English language. The first Birmingham library had been established by 1642, the same year that Nathaniel Nye -- the town 's first known scientist -- published his New Almanacke and Prognostication calculated exactly for the faire and populous Towne of Birmicham in Warwickshire. 
Finally 1652 marks the first record of a Birmingham bookseller and the first Birmingham - published book: The Font Guarded, by the local puritan Thomas Hall. Although the leather and textile trades were still major features of the Birmingham economy in the early 16th century, the increasing importance of the manufacture of iron goods, as well as the interdependence between the manufacturers of Birmingham and the raw materials of the area that later became known as the Black Country, were recognised by the antiquary John Leland when he travelled through in 1538, providing the town 's earliest surviving eyewitness description. The bewty of Bremischam, a good market towne in the extreme partes that way of Warwike - shire, is in one strete goynge up alonge almoste from the lefte ripe of the broke up a mene hille by the lengthe of a quartar of a mile. I saw but one paroche churche in the towne. There be many smithes in the towne that use to make knives and all maner of cuttynge tooles, and many lorimers that make byts, and a greate many naylors. So that a great parte of the towne is mayntayned by smithes. The smithes have there yren out of Staffordshire and Warwikeshire and see coale out of Staffordshire. The importance of the iron trades increased further as the century progressed, with most of the fulling mills in the Birmingham area being converted into mills for grinding blades by the end of the century. In 1586 William Camden described the town as "swarming with inhabitants, and echoing with the noise of the anvils, for here are great numbers of smiths and of other artificers in iron and steel, whose performances in that way are greatly admired both at home and abroad ''. Birmingham itself operated as the commercial hub for manufacturing activity which took place across the Birmingham Plateau, which itself formed the centre of a network of iron forges and furnaces stretching from South Wales and the Forest of Dean to Cheshire. 
While 16th century deeds record nailers in Moseley, Harborne, Handsworth and King 's Norton; bladesmiths in Witton, Erdington and Smethwick and scythesmiths in Aston, Erdington, Yardley and Bordesley; the ironmongers -- the merchants who organised finance, supplied raw materials and marketed products, acting as middlemen between the smiths, their suppliers and their customers -- were concentrated in the town of Birmingham itself. Birmingham ironmongers are recorded selling the Royal Armouries large quantities of bills as early as 1514; by the 1550s Birmingham merchants were trading as far afield as London, Bristol and Norwich, in 1596 Birmingham men are recorded selling arms in Ireland, and by 1657 the reputation of the Birmingham metalware market had reached the West Indies. By 1600 Birmingham ranked alongside London as one of the two great concentrations of iron merchants in the country, flourishing from its economic freedom and its proximity to manufacturers and raw materials, while London 's traders remained under the control of a powerful corporate body. The concentration of iron merchants in Birmingham was significant in the development of the town 's manufacturing as well as its commercial activities. Able to serve a wider market due to the town 's extensive trading links, the metalworkers of Birmingham could diversify into increasingly specialised activities. Over the course of the 17th century the lower - skilled nail, scythe and bridle trades -- producing basic iron goods for local agricultural markets -- moved west to the towns that would later make up the Black Country, while Birmingham itself focused on an increasingly wide range of more specialist, higher - skilled and more lucrative activities. 
The hearth tax returns for Birmingham for 1671 and 1683 show the number of forges in the town increasing from 69 to 202, but the later figures also show a far wider variety of trades, including hiltmakers, bucklemakers, scalemakers, pewterers, wiredrawers, locksmiths, swordmakers, and workers in solder and lead. Birmingham 's economic flexibility was already apparent at this early stage: workers and premises often changed trades or practiced more than one, and the presence of a wide range of skilled manufacturers in the absence of restrictive trade guilds encouraged the development of entirely new industries. The range of goods produced in late - 17th century Birmingham, and its international reputation, were illustrated by the French traveller Maximilien Misson who visited Milan in 1690, finding "fine works of rock crystal, swords, heads of canes, snuff boxes, and other fine works of steel '', before remarking "but they can be had better and cheaper at Birmingham ''. By the early 17th century Birmingham 's booming economy, dominated by the self - made merchants and manufacturers of the new metal trades rather than traditional landed interests; its expanding population and high degree of social mobility; and the almost complete absence from the area of a resident aristocracy; had seen it develop a new form of social structure very different from that of more traditional towns and rural areas: relationships were based more around pragmatic commercial linkages than the rigid paternalism and deference of feudal society, and the town was widely seen as one where loyalty to the traditional hierarchies of the established church and aristocracy was weak. The first signs of Birmingham 's growing political self - consciousness can be seen from the 1630s, when a series of puritan lectureships provided a focus for questioning the doctrines and structures of the Church of England and had a widespread influence throughout surrounding counties. 
A 1640 puritan - inspired petition raised by the townspeople against the local Justice of the Peace Sir Thomas Holte provides the first evidence of conflict with the country gentlemen who dominated local government. The outbreak of the English Civil Wars in 1642 saw Birmingham emerge as a symbol of puritan and Parliamentarian radicalism, with the Royalist Earl of Clarendon 's History of the Rebellion and Civil Wars in England condemning the town as being "of as great fame for hearty, wilful, affected dis - loyalty to the king, as any place in England ''. The arrival of 400 armed men from Birmingham was decisive in securing Coventry 's refusal to admit Charles I in August 1642, which established Warwickshire as a Parliamentarian stronghold. The march of the King and his army south from Shrewsbury in the days leading up to the Battle of Edge Hill in October 1642 met strong local resistance, with troops headed by Prince Rupert of the Rhine and the Earl of Derby being ambushed by local trainbands in Moseley and King 's Norton, and the King 's baggage train attacked by Birmingham townspeople and his personal possessions plundered and transported to Warwick Castle while the King stayed with the Royalist Sir Thomas Holte at Aston Hall. The strategic importance of Birmingham 's metal trades was also significant: one report suggested that the town 's main mill had manufactured 15,000 swords solely for the use of Parliamentarian forces. Royalist revenge was exacted on Easter Monday, 3 April 1643, when Prince Rupert returned to Birmingham with 1,200 cavalry, 700 footsoldiers and 4 guns. Twice repulsed by the small force of 200 defenders behind hastily erected earthworks, the Royalist cavalry outflanked the defences and overran the town in the Battle of Birmingham, before burning 80 houses and leaving behind the naked corpses of fifteen townspeople. 
Although the Royalist victory was militarily insignificant, and came at the expense of the death of the Earl of Denbigh, the resistance of a largely civilian force against the royalist cavalry, and the subsequent sack of the town, presented the Roundheads with a major propaganda boost that was eagerly exploited by a series of widely circulated pamphlets. During the later years of the Civil War the subversive potential of Birmingham 's manufacturing - based society was personified by the local parliamentarian colonel John "Tinker '' Fox, who recruited a garrison of 200 men from the Birmingham area and occupied Edgbaston Hall from 1643. From there he attacked and removed the Royalist garrison from nearby Aston Hall, established control over the countryside leading out to Royalist Worcestershire, and launched a series of audacious raids as far afield as Bewdley. Highly active, and operating largely independently of the parliamentarian hierarchy, to Royalists Fox came to symbolise a dangerous and uncontrolled overturning of the established order, with his background in the Birmingham metal trades seeing him caricatured as a tinker. By 1649 his national notoriety was such that he was widely rumoured to have been Charles I 's executioner. The 18th century saw the sudden emergence of Birmingham at the forefront of worldwide developments in science, technology, medicine, philosophy and natural history as part of the cultural transformation now known as the Midlands Enlightenment. By the second half of the century the town 's leading thinkers -- particularly members of the Lunar Society of Birmingham such as Joseph Priestley, James Keir, Matthew Boulton, James Watt, William Withering and Erasmus Darwin -- had become widely influential participants in the Republic of Letters, the free circulation of ideas and information among the developing pan-European and trans - Atlantic intellectual elite. 
The Lunar Society was "the most important private scientific association in eighteenth-century England" and the Midlands Enlightenment "dominated the English experience of enlightenment", but both also maintained close links with other major centres of the Age of Enlightenment, particularly the universities of the Scottish Enlightenment, the Royal Society in London, and scientists, philosophers and academicians in France, Sweden, Saxony, Russia and America. This "miracle birth" has traditionally been seen as a result of Birmingham's status as a stronghold of religious Nonconformism, creating a free-thinking culture unconstrained by the established Church of England. This accords with wider historical theories such as the Merton thesis and the Weber thesis, which see Protestant culture as a major factor in the rise of experimental science and industrial capitalism within Europe. Birmingham had a vigorous and confident Nonconformist community by the 1680s, at a time when freedom of worship for Nonconformists nationally had yet to be granted; and by the 1740s this had developed into an influential group of Rational Dissenters. Around 15% of households in Birmingham were members of Nonconformist congregations in the mid-18th century, compared to a national average of 4-5%. Presbyterians and Quakers in particular also had a level of influence within the town that was disproportionate to their numbers, customarily holding the position of Low Bailiff -- the most powerful position in the town's local government -- from 1733, and making up over a quarter of the Street Commissioners appointed in 1769, despite being legally barred from holding office until 1828. Despite this, Birmingham's Enlightenment was by no means a purely Nonconformist phenomenon: the members of the Lunar Society had a wide range of religious backgrounds, and Anglicans formed a majority of all sections of Birmingham society throughout the period.
Recent scholarship no longer sees the Midlands Enlightenment as primarily having an industrial or technological focus. Analysis of the subject-matter of Lunar Society meetings shows that its main concern was with pure scientific investigation rather than manufacturing, and the influence of Midlands Enlightenment thinkers can be seen in areas as diverse as education, the philosophy of mind, Romantic poetry, the theory of evolution and the invention of photography. A distinctive and significant feature of the Midlands Enlightenment, however, partly resulting from Birmingham's unusually high level of social mobility, was the close relationship between the practitioners of theoretical science and those of practical manufacturing. The Lunar Society included industrialists such as Samuel Galton, Jr. as well as intellectuals such as Erasmus Darwin and Joseph Priestley; scientific lecturers such as John Warltire and Adam Walker communicated basic Newtonian principles widely to the town's manufacturing classes; and men such as Matthew Boulton, James Keir, James Watt and John Roebuck were simultaneously highly regarded both as scientists and as technologists, and in some cases also as businessmen. The intellectual climate of 18th-century Birmingham was therefore unusually conducive to the transfer of knowledge from the pure sciences to the technology and processes of manufacturing, and to the feeding back of the results to create a "chain reaction of innovation". The Midlands Enlightenment thus occupies a key cultural position linking the expansion of knowledge of the earlier Scientific Revolution with the economic expansion of the Industrial Revolution.
18th-century Birmingham saw the widespread and systematic application of reason, experiment and scientific knowledge to manufacturing processes to an unprecedented degree, resulting in a series of technological and economic innovations that transformed the economic landscape of a wide variety of industries, laying many of the foundations for modern industrial society. In 1709 Abraham Darby I, who had trained as an apprentice in Birmingham and worked in Bristol for the Birmingham ironmonger Sampson Lloyd, moved to Coalbrookdale in Shropshire and established the first blast furnace to successfully smelt iron with coke. In 1732 Lewis Paul and John Wyatt invented roller spinning -- the "one novel idea of the first importance" in the development of the mechanised cotton industry -- and in 1741 they opened the world's first cotton mill in Birmingham's Upper Priory. In 1765 Matthew Boulton opened the Soho Manufactory, pioneering the combination and mechanisation of previously separate manufacturing activities under one roof through a system known as "rational manufacture". By the end of the decade this was the largest manufacturing unit in Europe, with over 1,000 employees, and the foremost icon of the emerging factory system. John Roebuck's 1746 invention of the lead chamber process first enabled the large-scale manufacture of sulphuric acid, while James Keir pioneered the manufacture of alkali at his plant in Tipton; between them these two developments marked the birth of the modern chemical industry. The most notable technological innovation of the Midlands Enlightenment, however, was the 1775 development by James Watt and Matthew Boulton of the industrial steam engine, which incorporated four separate technical advances to allow it to cheaply and efficiently generate the rotary motion needed to power manufacturing machinery.
Freeing the productive potential of society from the limited capacity of hand, water and animal power, this was arguably the pivotal development of the entire Industrial Revolution, without which the spectacular increases in economic activity of the subsequent century would have been impossible. The explosive industrial growth of Birmingham began before that of the textile towns of the North of England and can be traced as far back as the 1680s. Birmingham's population quadrupled between 1700 and 1750. By 1775 -- before the start of the mechanisation of the Lancashire cotton trade -- Birmingham was already the third most populous town in England, smaller only than the older southern ports of London and Bristol and growing faster than any of its rivals. As early as 1791 Birmingham was described by the economist Arthur Young as "the first manufacturing town in the world". The factors that drove Birmingham's rapid industrialisation were also different from those behind the later development of textile manufacturing towns such as Manchester, whose spectacular growth from the 1780s onwards was based on the economies of scale inherent in mechanised manufacture: the ability of a low-wage, unskilled labour force to produce bulk commodities such as cotton in huge quantities. Although the developments that enabled this transformation -- roller spinning, the factory system and the industrial steam engine -- often had Birmingham origins, they had little role in Birmingham's own expansion. Birmingham's relatively inaccessible location meant that its industries were dominated by the production of a wide variety of small, high-value metal items -- from buttons and buckles to guns and jewellery. Its economy was characterised by high wages and a diverse range of specialised skills that were not susceptible to wholesale automation.
The small-scale workshop, rather than the large factory or mill, remained the typical Birmingham manufacturing unit throughout the 18th century, and the use of steam power was not economically significant in Birmingham until the 1830s. Rather than low wages, efficiency and scale, Birmingham's manufacturing growth was fuelled by a highly skilled workforce, specialisation, flexibility and, above all, innovation. The low barriers to entry to the Birmingham trades gave the town a high degree of social mobility and a distinctly entrepreneurial economy; the contemporary William Hutton described trades that "spring up with the expedition of a blade of grass, and, like that, wither in the summer". In terms of manufacturing technology Birmingham was by far the most inventive town of the era: between 1760 and 1850 Birmingham residents registered over three times as many patents as those of any other town or city. Despite the landmark inventions of engineers such as James Watt, most of these developments are better characterised as part of a continuous flow of small-scale technological improvements. Matthew Boulton remarked in 1770 how "by the many mechanical contrivances and extensive apparatus wh we are possess'd of, our men are enabled to do from twice to ten times the work that can be done without the help of such contrivances". Innovation also took place in production processes, particularly in the development of an extreme division of labour. Contemporary sources noted that in Birmingham even simple products such as buttons would pass through between fifty and seventy different processes, performed by a similar number of different workers. The resulting competitive advantage was noticed by Lunar Society member Richard Lovell Edgeworth when he visited Paris, observing that "each artisan in Paris...
must in his time 'play many parts', and among these many to which he is incompetent", concluding that "even supposing French artisans to be of equal ability and industry with English competitors, they are left at least a century behind". Innovation also extended to Birmingham's products, which were increasingly tailored to its merchants' national and international connections. During the first half of the century growth was largely driven by the domestic market and based on increased national prosperity, but foreign trade was important even at mid-century and was the dominant factor from the 1760s and 1770s, dictated by the markets and fashions of London, France, Italy, Germany, Russia and the American colonies. The adaptability and speed to market of the Birmingham economy allowed it to be heavily influenced by fashion: when the market for buckles collapsed in the late 1780s, workers transferred their skills to the brass or button trades. Birmingham manufacturers sought design leadership in fashionable circles in London or Paris, but then priced their goods to appeal to the emerging middle-class consumer economy; "for the London season the Spitalfields silk weavers produced each year their new designs, and the Birmingham toy-makers their buttons, buckles, patchboxes, snuff boxes, chatelaines, watches, watch seals... and other jewellery." Competitive pressure drove Birmingham manufacturers to adopt new products and materials: the town's first glasshouse opened in 1762; manufacture in papier-mâché developed from the award of a patent to Henry Clay in 1772, foreshadowing the Birmingham invention of plastics the following century; and the minting of coins grew from 1786. In addition to its own specialist manufacturing role, Birmingham retained its position as the commercial and mercantile centre for an integrated regional economy that included the more basic manufacturing and raw-material production areas of Staffordshire and Worcestershire.
The demand for capital to feed rapid economic expansion saw the town become a major financial centre with extensive international connections. During the early 18th century finance was largely provided by the iron merchants and by extensive systems of trade credit between manufacturers; it was an iron merchant, Sampson Lloyd, and a major manufacturer, John Taylor, who combined to form the town's first bank -- the early Lloyds Bank -- in 1765. Further establishments followed, and by 1800 the West Midlands had more banking offices per head than any other region in Britain, including London. Birmingham's booming economy gave the town a growing professional sector, with the number of physicians and lawyers increasing by 25% between 1767 and 1788. It has been estimated that between a quarter and a third of Birmingham's inhabitants could be thought of as middle class in the last quarter of the 18th century, and this fuelled a large growth in the demand for tailors, mercers and drapers. Trade directories reveal that there were more drapers than edge tool makers in 1767, and insurance records from 1777 to 1786 suggest that well over half of local businesses were engaged in the trading or service sectors rather than directly in industry.

Georgian Birmingham was marked by the dramatic growth of the vigorous public sphere for the exchange of ideas that was the hallmark of enlightenment society. Most characteristic was the development of clubs, taverns and coffeehouses as places for free association, cooperation and debate, often crossing lines of social class and status. Although the exceptional historic influence of the Lunar Society of Birmingham has made it much the best-known, the contemporary William Hutton described hundreds of such associations in Birmingham with thousands of members, and the German visitor Philipp Nemnich commented late in the century that "the inhabitants of Birmingham are fonder of associations in clubs than almost any other place I know".
Particularly notable examples included Freeth's Coffee House, one of the most celebrated meeting places of Georgian England; Ketley's Building Society, the world's first building society, founded at the Golden Cross in Snow Hill in 1775; the Birmingham Book Club, whose radical politics saw it nicknamed the "Jacobin Club", and which alongside other debating societies such as the Birmingham Free Debating Society and the Amicable Debating Society played a prominent role in the growing expression of popular political consciousness within the town; and the more conservative Birmingham Bean Club, a dining club uniting leading loyalist figures in the town with prominent landowners from the surrounding counties, which played a prominent role in the emergence of a distinct "Birmingham interest" in regional politics from 1774. The printed word represented another expanding medium for the exchange of ideas. Georgian Birmingham was a highly literate society with at least seven booksellers by 1733, the largest in 1786 claiming a stock of 30,000 titles in several languages. These were supplemented by eight or nine commercial lending libraries established over the course of the 18th century, to the extent that it was claimed in the later part of the century that Birmingham's population of around 50,000 read 100,000 books per month. More specialist libraries included St. Philip's Parish Library, established in 1733, and the Birmingham Library, established for research purposes in 1779 by a largely dissenting group of subscribers. Birmingham had been a centre for the printing and publishing of books since the 1650s, but the rise of John Baskerville and the nine other printers he attracted to the town in the 1750s saw this achieve international significance. The town's first newspaper was the Birmingham Journal, founded in 1732; it was short-lived but notable as the vehicle for the first published work of Samuel Johnson.
Aris's Birmingham Gazette, founded in 1741, became "one of the most lucrative and important provincial papers" of 18th-century England, and by the 1770s circulated as far afield as Chester and London. The more radical Swinney's Birmingham Chronicle, which caused controversy in 1771 by serialising Voltaire's Thoughts on Religion, boasted in 1776 that it circulated across nine counties and was "filed by coffeehouses in practically every centre of any size from Edinburgh southwards". News from outside the town also circulated widely: Freeth's Coffeehouse in 1772 maintained an archive of all the London newspapers going back 37 years and received direct personal reports of proceedings in Parliament, while Overton's Coffeehouse in New Street in 1777 had the London papers delivered by express messenger the day after their publication, and also had available all of the main country, Irish and European papers, as well as Parliamentary division lists. Birmingham's booming economy attracted immigrants from a wide area, many of whom retained freeholds -- and thus votes -- in their previous constituencies, giving Birmingham politics a wide electoral influence despite the town having no parliamentary representation of its own. A distinct and powerful "Birmingham interest" emerged decisively with the election of Thomas Skipwith at the Warwickshire by-election of 1769, and over the following decades candidates for seats as far afield as Worcester, Newcastle-under-Lyme, Leicester and Lincoln sought support in the town.

The first map of Birmingham was produced in 1731 by William Westley, who the year before had produced the first documentation of a newly constructed square named Old Square, which became one of the most prestigious addresses in Birmingham. This was not, however, the first map to show Birmingham: a map of 1335 had depicted the town, albeit only as a small symbol. Birmingham was surveyed again in 1750 by S. Bradford.
Until the 1760s, Birmingham's local government system consisted of manorial and parish officials, most of whom served on a part-time and honorary basis. This system proved completely inadequate to cope with Birmingham's rapid growth. In 1768, Birmingham gained a rudimentary local government system when a body of "Commissioners of the Streets" was established, with powers to levy a rate for functions such as cleaning and street lighting. The Commissioners were later given powers to provide policing and build public buildings.

From the 1760s onwards, Birmingham became a centre of the canal system. The canals provided an efficient transport system for raw materials and finished goods, and greatly aided the town's industrial growth. The first canal to be built into Birmingham opened in November 1769 and connected the town with the coal mines at Wednesbury in the Black Country. Within a year of the canal opening, the price of coal in Birmingham had fallen by 50%. The canal network across Birmingham and the Black Country expanded rapidly over the following decades, with most of it owned by the Birmingham Canal Navigations Company. Other canals such as the Worcester and Birmingham Canal, the Birmingham and Fazeley Canal, the Warwick and Birmingham Canal (now the Grand Union) and the Stratford-upon-Avon Canal linked Birmingham to the rest of the country. By 1830, some 160 miles (260 km) of canal had been constructed across the Birmingham and Black Country area. Due to its vast array of industries, Birmingham was nicknamed the "workshop of the world". The expansion of the town's population and its increased prosperity led to it acquiring a hospital in 1766, a library in 1779 and a variety of recreational institutions. In 1802 Horatio Nelson and the Hamiltons visited Birmingham. Nelson was fêted, and visited Matthew Boulton on his sick-bed at Soho House, before taking a tour of the Soho Manufactory and commissioning the Battle of the Nile medal.
In 1809, a statue of Horatio Nelson by Richard Westmacott Jr. was erected by public subscription; it still stands in the Bull Ring, albeit on a 1960s plinth. The Birmingham Manor House and its moat were demolished and removed in 1816, and the site was built over to create the Smithfield Markets, which concentrated various marketing activities in one area close to the Bull Ring, which had developed into a retail-led area. At the beginning of the 19th century, Birmingham had a population of around 74,000; by the end of the century it had grown to 630,000. This rapid population growth meant that by the middle of the century Birmingham had become the second largest population centre in Britain.

Railways arrived in Birmingham in 1837 with the opening of the Grand Junction Railway, which linked Birmingham with Manchester and Liverpool. The following year the London and Birmingham Railway opened, linking the town to the capital. This was soon followed by the Birmingham and Derby Junction Railway and the Birmingham and Gloucester Railway. These all initially had separate stations around Curzon Street. In the 1840s, however, these early railway companies merged to become the London and North Western Railway and the Midland Railway respectively. The two companies jointly constructed Birmingham New Street station, which opened in 1854, and Birmingham became a central hub of the British railway system. In 1852, the Great Western Railway (GWR) arrived in Birmingham, and a second, smaller station, Birmingham Snow Hill, was opened. The GWR line linked the city with Oxford and London Paddington. Also in the 1830s, due to its growing size and importance, Birmingham was granted parliamentary representation by the Reform Act of 1832: the new Birmingham constituency was created with two MPs. Thomas Attwood and Joshua Scholefield, both Liberals, were elected as Birmingham's first MPs.
In 1838, local government reform meant that Birmingham became one of the first new towns to be incorporated as a municipal borough under the Municipal Corporations Act 1835. This allowed Birmingham to have its first elected town council. The council initially worked alongside the existing Street Commissioners, until the latter were wound up in 1851. Birmingham's growth and prosperity was based upon metalworking industries, of which many different kinds existed. Birmingham became known as the "City of a thousand trades" because of the wide variety of goods manufactured there -- buttons, cutlery, nails and screws, guns, tools, jewellery, toys, locks, and ornaments were amongst the many products manufactured. For most of the 19th century, industry in Birmingham was dominated by small workshops rather than large factories or mills. Large factories became increasingly common towards the end of the century, as engineering industries grew in importance. The industrial wealth of Birmingham allowed merchants to fund the construction of some fine institutional buildings in the city, including the Birmingham Town Hall, built in 1834; the Birmingham Botanical Gardens, opened in 1832; the Council House, built in 1879; and the Museum and Art Gallery in the extended Council House, opened in 1885. The mid-19th century saw major immigration into the city from Ireland, following the Great Irish Famine (1845-1849). Birmingham became a county borough and a city in 1889. As in many industrial towns during the 19th century, many of Birmingham's residents lived in overcrowded and unsanitary conditions. During the early-to-mid-19th century, thousands of back-to-back houses were built to house the growing population; many were poorly built and badly drained, and many soon became slums.
In 1851, a network of sewers connected to the River Rea was built under the city, although only new houses were connected to it, and many older houses had to wait decades for a connection. Birmingham gained gas lighting in 1818, and a water company in 1826 to provide piped water, although clean water was only available to those who could pay. Birmingham gained its first electricity supply in 1882. Horse-drawn trams ran through Birmingham from 1873, and electric trams from 1890.

In the mid-1840s, the charismatic Nonconformist preacher George Dawson began to promote a doctrine of social responsibility and enlightened municipal improvement that became known as the Civic Gospel. His philosophy inspired a group of reformers -- including Joseph Chamberlain, Jesse Collings, George Dixon, and others -- who from the late 1860s onwards began to be elected to the Town Council as Liberals, and to apply these ideas in practice. This period reached its peak during the mayoralty of Chamberlain, 1873-1876. Under his leadership, Birmingham was transformed as the council introduced one of the most ambitious improvement schemes outside London. The council purchased the city's gas and water works and moved to improve the lighting and provide clean drinking water; revenue from these utilities also provided a healthy income for the council, which was re-invested into the city to provide new amenities. Under Chamberlain, some of Birmingham's worst slums were cleared, and a new thoroughfare, Corporation Street, was constructed through the city centre, soon becoming a fashionable shopping street. He was also instrumental in the building of the Council House and the Victoria Law Courts in Corporation Street. Numerous public parks were also created, central lending and reference libraries were opened in 1865-66, and the city's Museum and Art Gallery in 1885.
The improvements introduced by Chamberlain and his colleagues were to prove the blueprint for municipal government, and were soon copied by other cities. By 1890, a visiting American journalist could describe Birmingham as "the best-governed city in the world". Although he resigned as mayor to become an MP, Chamberlain took a close interest in the city for many years afterwards. Birmingham's water problems were not fully solved by the creation of reservoirs at Walmley Ash, fed by Plants Brook, and other, larger reservoirs were constructed at Witton Lakes and Brookvale Park Lake to help ease the problems. The problems were finally solved by the Birmingham Corporation Water Department with the completion of the 73-mile (117 km) Elan Aqueduct, carrying water from reservoirs in the Elan Valley in Wales; this project was approved in 1891 and completed in 1904.

The First World War exacted a terrible human cost in Birmingham. More than 150,000 men from the city -- over half of the male population -- served in the armed forces, of whom 13,000 were killed and 35,000 wounded. The onset of mechanised warfare also served to increase the strategic importance of Birmingham as a centre of industrial production, with the British Commander-in-Chief John French describing the war at its outset as "a battle between Krupps and Birmingham". At the war's close the Prime Minister David Lloyd George also recognised the significance of Birmingham in the Allied victory, remarking how "the country, the empire and the world owe to the skill, the ingenuity and the resource of Birmingham a deep debt of gratitude". Some Birmingham men were conscientious objectors. In 1918, the Birmingham Civic Society was founded to bring public interest to bear upon all proposals put forward by public bodies and private owners for building, new open spaces and parks, and any and all matters concerned with the amenities of the city.
The society set about making suggestions for improvements in the city, sometimes designing and paying for improvements itself, and buying a number of open spaces which it later gifted to the city for use as parks. After the Great War ended in 1918, the city council decided to build modern housing across the city to rehouse families from inner-city slums. Recent boundary expansions, which had brought areas including Aston, Handsworth, Erdington, Yardley and Northfield within the city's boundaries, provided extra space for housing developments. By 1939, the year the Second World War broke out, almost 50,000 council houses had been built across the city within 20 years; some 65,000 houses were also built for owner-occupiers. New council estates built during this era included Weoley Castle between Selly Oak and Harborne, Pype Hayes near Erdington and the Stockfield Estate at Acocks Green. In 1936, King Edward's Grammar School on New Street, which had stood on that site for 384 years, was demolished and the school moved to Edgbaston. The site was later transformed into an office block, which was destroyed in the bombing of the Second World War; it was subsequently rebuilt and named "King Edward's House", and is now used as offices, with shops and restaurants on the ground floor.

In the First and Second World Wars, the Longbridge car plant switched to the production of munitions and military equipment, from ammunition, mines and depth charges to tank suspensions, steel helmets, jerricans, Hawker Hurricane fighters, Fairey Battle light bombers and Airspeed Horsa gliders, with the mammoth Avro Lancaster bomber coming into production towards the end of the Second World War. The Spitfire fighter aircraft was mass-produced at Castle Bromwich by Vickers-Armstrongs throughout the war. Birmingham's industrial importance and contribution to the war effort may have been decisive in winning the war. The city was heavily bombed by the German Luftwaffe during the Birmingham Blitz in the Second World War.
By the war's end 2,241 citizens had been killed by the bombing and over 3,000 seriously injured. 12,932 buildings were destroyed (including 300 factories) and thousands more damaged. The air raids also destroyed many of Birmingham's fine buildings. The council declared five redevelopment areas in 1946.

The defining feature of Birmingham's politics in the post-war era was the loss of much of the city's independence. The Second World War had seen a huge expansion in the role of central government in British life, and this pattern continued into the post-war years: for Birmingham, this meant that major decisions about the city's future tended to be made outside the city, mainly in Westminster. Planning, development and municipal functions were increasingly dictated by national policy and legislation; council finances came to be dominated by central government subsidies; and institutions such as gas, water and transport were taken out of the city's control. Birmingham's unrivalled size and wealth may have given it more political influence than any other provincial city, but like all such cities it was essentially subordinate to Whitehall; the days of Birmingham as a semi-autonomous city-state, with its leading citizens dictating the agenda of national politics, were over. This was to have major implications for the direction of the city's development. Up until the 1930s it had been a basic assumption of Birmingham's leaders that their role was to encourage the city's growth. Post-war national governments, however, saw Birmingham's accelerating economic success as a damaging influence on the stagnating economies of the North of England, Scotland and Wales, and saw its physical expansion as a threat to its surrounding areas -- "from Westminster's point of view (Birmingham) was too large, too prosperous, and had to be held in check".
A series of measures, starting with the Distribution of Industry Act 1945, aimed to prevent industrial growth in the "Congested Areas" -- essentially the booming cities of London and Birmingham -- instead encouraging the dispersal of industry to the economically stagnant "Development Areas" in the north and west. The West Midlands Plan, commissioned by the Minister for Town and Country Planning from Patrick Abercrombie and Herbert Jackson in 1946, set Birmingham a target population for 1960 of 990,000, far less than its actual 1951 population of 1,113,000. This meant that 220,000 people would have to leave the city over the following 14 years, that some of the city's industries would have to be removed, and that new industries would need to be prevented from establishing themselves in the city. By 1957 the council had explicitly accepted that it was obliged "to restrain the growth of population and employment potential within the city". With the city's power over its own destiny reduced, Birmingham lost most of its political distinctiveness: the general election of 1945 was the first for 70 years in which no member of the Chamberlain family stood for Birmingham.

Birmingham's economy flourished in the 30 years that followed the end of the Second World War, its economic vitality and affluence far outstripping those of Britain's other major provincial cities. Economic adaptation and restructuring over the previous half century had left the West Midlands well represented in two of the British economy's three major growth areas -- motor vehicles and electrical equipment -- and Birmingham itself was second only to London for the creation of new jobs between 1951 and 1961. Unemployment in Birmingham between 1948 and 1966 rarely exceeded 1%, and only exceeded 2% in one year. By 1961 household incomes in the West Midlands were 13% above the national average, exceeding even those of London and the South East.
This prosperity was achieved despite severe restrictions placed on the city's economy by central government, which explicitly sought to limit Birmingham's growth. The Distribution of Industry Act 1945 prohibited all industrial development over a specific size without an "Industrial Development Certificate", in the hope that companies refused permission to expand in London or Birmingham would move instead to one of the struggling cities of the North of England. At least 39,000 jobs were directly transferred out of the West Midlands by factory movement between 1960 and 1974, and planning measures were instrumental in local firms such as the British Motor Corporation and Fisher and Ludlow expanding in South Wales, Scotland and Merseyside instead of Birmingham. Government measures also acted indirectly to deter economic development within the city: restrictions on the city's physical growth in an era of economic boom led to extremely high land prices and shortages of premises and development sites, and moves to reduce the city's population led to labour shortages and escalating wages. Although employment in Birmingham's restricted manufacturing sector shrank by 10% between 1951 and 1966, this was more than made up for during the early post-war period by employment in the service sector, which grew from 35% of the city's workforce in 1951 to 45% in 1966. As the commercial centre of the country's most successful regional economy, Central Birmingham was the main focus outside London for the post-war office building boom. Service sector employment in the Birmingham conurbation grew faster than in any other region between 1953 and 1964, and the same period saw 3 million sq ft of office space constructed in the city centre and Edgbaston.
The city 's economic boom saw the rapid growth of a substantial merchant banking sector, as major London and international banks established themselves within the city, and professional and scientific services, finance and insurance also grew particularly strongly. However this service sector growth itself attracted government restrictions from 1965. Declaring the growth in population and employment within Birmingham to be a "threatening situation '', the incoming Labour Government of 1964 sought "to control the growth of office accommodation in Birmingham and the rest of the Birmingham conurbation before it got out of hand, in the same way as they control the growth of industrial employment ''. Although the City Council had encouraged service sector expansion during the late 1950s and early 1960s, central government extended the Control of Office Employment Act 1965 to the Birmingham conurbation from 1965, effectively banning all further office development for almost two decades. These policies had a major structural effect on the city 's economy. While government policy had limited success in preventing the growth of Birmingham 's existing industries, it was much more successful in preventing new industries establishing themselves in the city. Birmingham 's economic success over the previous two centuries had been built on its economic diversity and its ongoing ability to adapt and innovate -- attracting new businesses and developing new industries with its large supply of skilled labour and dynamic entrepreneurial culture -- but this was exactly the process that government industrial location policy sought to prevent. Birmingham 's existing industries grew strongly and kept the economy buoyant, but growing local fears that the city 's economy was becoming over-specialised were dismissed by central government, even though danger signs were growing by the early 1970s. 
In 1950 Birmingham's economy could still be described as "more broadly based than that of any city of equivalent size in the world", but by 1973 the West Midlands had an above-average reliance on large firms for employment, and the small firms that remained were increasingly dependent as suppliers and sub-contractors to the few larger firms. The "City of a Thousand Trades" had become over-specialised in one industry -- the motor trade -- much of which by the 1970s had itself consolidated into a single company -- British Leyland. Trade union organisation grew and the motor industry in particular saw industrial disputes from the 1950s onwards. A city that for most of its history had a reputation for weak trade unionism and strong cooperation between workers and management developed a reputation for trade union militancy and industrial conflict. Despite more than 150,000 houses being built in the city for both the private and public sector since 1919, 20% of the city's homes were still deemed unfit for human habitation by 1954. 37,000 council properties had been built between 1945 and 1954 to rehouse families from the slums, but by 1970 the housing crisis was being eased as that figure had now exceeded 80,000. In the postwar years, a massive program of slum clearances took place, and vast areas of the city were re-built, with overcrowded "back to back" housing being replaced by high-rise blocks of flats (the last remaining block of four back-to-backs has become a museum run by the National Trust). Due largely to bomb damage, the city centre was also extensively re-built under the supervision of the city council's chief engineer Herbert Manzoni during the postwar years. He was assisted by successive holders of the City Architect post. Emblematic of this was the new Bull Ring Shopping Centre. Birmingham also became a centre of the national motorway network, with Spaghetti Junction.
Much of the re-building of the postwar period would in later decades be regarded as mistaken, especially the large numbers of concrete buildings and ringroads which gave the city a reputation for ugliness. Council house building in the first decade following the end of the Second World War was extensive, with more than 37,000 new homes completed between the war's end and the end of 1954. These included the first of several hundred multi-storey blocks of flats. Despite this, some 20% of the city's houses were still reported to be unfit for human habitation. As a result, mass council house building continued for some 20 years afterwards. The existing suburbs continued to expand, while several completely new estates were developed. Castle Bromwich, which grew into a new town, was developed approximately six miles to the east of the city centre during the 1960s. Castle Vale, near the Fort Dunlop tyre factory to the north-east of the city centre, was developed in the 1960s as Britain's largest postwar housing estate, featuring a total of 34 tower blocks, although 32 of them had been demolished by the end of 2003 as part of a massive regeneration of the estate brought on by the general unpopularity and defects associated with such developments. In 1974, 21 people were killed and 182 people were injured when two city-centre pubs were bombed by the IRA. In the same year, as part of a local government reorganisation, Birmingham expanded again, this time taking over the borough of Sutton Coldfield to the north. Birmingham lost its county borough status and instead became a metropolitan borough under the new West Midlands County Council. It was also finally removed from Warwickshire. There were further waves of immigration from Ireland in the 1950s and 1980s as emigrants sought to escape the economic deprivation and unemployment in their homeland.
There remains a strong Irish tradition in the city, most notably in Digbeth's Irish Quarter and in the annual St Patrick's Day parade, claimed to be the third-largest in the world after New York and Dublin. In the years following World War II, a major influx of immigrants from the Commonwealth of Nations changed the face of Birmingham, with large communities from Southern Asia and the Caribbean settling in the city, turning Birmingham into one of the UK's leading multicultural cities. The developments were not welcomed by everyone, however -- the right-wing Wolverhampton MP Enoch Powell delivered his famous Rivers of Blood speech in the city on 20 April 1968. On the other hand, some arts prospered, especially music: Birmingham had thriving heavy metal (with bands such as Black Sabbath, Napalm Death and Judas Priest) and reggae scenes (notable locals including Steel Pulse, Pato Banton, Musical Youth, and UB40). Since the early 1980s, Birmingham has seen a new wave of migration, this time from communities which do not have Commonwealth roots, such as Kosovo and Somalia. Further immigration from Eastern Europe (especially Poland) came with the expansion of the European Union in 2004. Tension between ethnic groups and the authorities led to the Handsworth riots in 1981 and 1985. October 2005 saw the 2005 Birmingham riots in the Lozells and Handsworth regions of the city, with street battles between black and Asian gangs, caused by an unsubstantiated rumour and resulting in two deaths and much damage. Birmingham City Council, together with the Local Strategic Partnership 'Be Birmingham', continues to strive towards a better, more united Birmingham. The collapse of Birmingham's industrial economy was sudden and catastrophic. As late as 1976 the West Midlands region -- with Birmingham as its principal economic dynamo -- still had the highest GDP of any in the UK outside the South East, but within five years it was the lowest in England.
Birmingham itself lost 200,000 jobs between 1971 and 1981, with the losses concentrated in the manufacturing sector; relative earnings in the West Midlands went from being the highest in Britain in 1970 to the lowest in 1983. By 1982 the city's unemployment rate approached 20%, and was around twice that level in inner-city areas including Aston, Handsworth and Sparkbrook. The City Council undertook a policy of diversifying the city's economy into service industries, retailing and tourism to lessen the dependence upon manufacturing. A number of initiatives were undertaken to make the city more attractive to visitors. In the 1970s, the National Exhibition Centre (NEC) was built, 10 miles (16 km) southeast of the centre, close to Birmingham International Airport. Although it is actually just inside neighbouring Solihull, it was instigated by, and is largely owned by, Birmingham City Council, and is thought by most people to be in the city. It has been expanded several times since then. The International Convention Centre (ICC) opened in central Birmingham in the early 1990s. The area around Broad Street, including Centenary Square, the ICC and Brindleyplace, was extensively renovated around the turn of the millennium. In 1998, a G8 summit was held in Birmingham, and US president Bill Clinton was clearly impressed by the city. The regeneration of the city during the 1990s and 2000s also saw many residential areas of the city substantially altered. A notable example was the Pype Hayes council estate in the Erdington area of the city, which was built in the interwar years but was eventually completely redeveloped due to structural defects.
Many of the city's 1960s council properties, mostly multi-storey flats and maisonettes, have also been demolished in similar redevelopments, including the large Castle Vale estate in the north-east of the city, where all but two of the estate's 34 tower blocks were demolished as part of the regeneration of an estate which had been blighted by crime, unemployment and poor housing. The city was shocked, on 2 January 2003, by the murder of Letisha Shakespeare and Charlene Ellis. In September 2003, the Bullring shopping complex was opened following a three-year project. In 2003, the city failed in its bid to become the 2008 European Capital of Culture, under the banner "Be in Birmingham 2008". Birmingham continues to develop: following the removal of the Inner Ring Road, which had acted as a 'concrete collar' preventing the expansion of the city centre, a massive urban regeneration project known as the Big City Plan is in progress. For example, the city's new Eastside district is undergoing redevelopment work expected to total £6 billion. The city was affected by the riots that spread through the country in August 2011, resulting in extensive criminal damage in several inner-city areas and three deaths in the Winson Green area.
who is ion what role does he play in ion
Ion (dialogue) - wikipedia In Plato's Ion (/ˈaɪɒn/; Greek: Ἴων) Socrates discusses with the titular character, a professional rhapsode who also lectures on Homer, the question of whether the rhapsode, a performer of poetry, gives his performance on account of his skill and knowledge or by virtue of divine possession. It is one of the shortest of Plato's dialogues. Ion has just come from a festival of Asclepius at the city of Epidaurus, after having won first prize in the competition. Socrates engages Ion in a philosophical discussion. Ion admits, when Socrates asks, that his skill in performance recitation is limited to Homer, and that all other poets bore him. Socrates finds this puzzling, and sets out to solve the "riddle" of Ion's limited expertise. He points out to Ion that art critics and judges of sculpture normally do not limit themselves to judging the work of only a single artist, but can criticize the art no matter who the particular artist is. Socrates deduces from this observation that Ion has no real skill, but is like a soothsayer or prophet in being divinely possessed: "For a poet is a light and winged and sacred thing, and is unable ever to indite until he has been inspired and put out of his senses, and his mind is no longer in him: every man, whilst he retains possession of that, is powerless to indite a verse or chant an oracle. Seeing then that it is not by art that they compose and utter so many fine things about the deeds of men -- as you do about Homer -- but by a divine dispensation, each is able only to compose that to which the Muse has stirred him, this man dithyrambs, another laudatory odes, another dance-songs, another epic or else iambic verse; but each is at fault in any other kind. For not by art do they utter these things, but by divine influence; since, if they had fully learnt by art to speak on one kind of theme, they would know how to speak on all.
And for this reason God takes away the mind of these men and uses them as his ministers, just as he does soothsayers and godly seers, in order that we who hear them may know that it is not they who utter these words of great price, when they are out of their wits, but that it is God himself who speaks and addresses us through them." (534b -- d) Socrates offers the metaphor of a magnet to explain how the rhapsode transmits the poet's original inspiration from the muse to the audience. He says that the god speaks first to the poet, then gives the rhapsode his skill, and thus, gods communicate to the people. Socrates posits that Ion must be out of his mind when he acts, because he can weep even though he has lost nothing, and recoil in fear when in front of an admiring audience. Ion says that the explanation for this is very simple: it is the promise of payment that inspires his deliberate disconnection from reality. Ion says that when he looks at the audience and sees them weeping, he knows he will laugh because it has made him richer, and that when they laugh, he will be weeping at losing the money (535e). Ion tells Socrates that he cannot be convinced that he is possessed or mad when he performs (536d, e). Socrates then recites passages from Homer which concern various arts such as medicine, divining, fishing, and making war. He asks Ion if these skills are distinct from his art of recitation. Ion admits that while Homer discusses many different skills in his poetry, he never refers specifically to the rhapsode's craft, which is acting. Socrates presses him about the exact nature of his skill. Ion maintains that his knowledge makes him a capable military general but states that when he recites passages concerning military matters, he cannot tell whether he does it with a general's skill or with a rhapsode's. Socrates notices that Ion has changed his occupation: he was first a rhapsode, and now claims to be a general.
He gently berates the rhapsode for being Protean, which, after all, is exactly what a rhapsode is: a man who is convincingly capable of being different people on stage. Through his character Socrates, Plato argues that "Ion's talent as an interpreter cannot be an art, a definable body of knowledge or an ordered system of skills," but instead must come from the divine inspiration of the Muses. Plato's argument has been described as an early example of the so-called genetic fallacy, since his conclusion arises from his famous lodestone (magnet) analogy. Ion, the rhapsode, "dangles like a lodestone at the end of a chain of lodestones. The muse inspires the poet (Homer in Ion's case) and the poet inspires the rhapsode." Plato's dialogues are themselves "examples of artistry that continue to be stageworthy"; it is a paradox that "Plato the supreme enemy of art is also the supreme artist." Plato develops a more elaborate critique of poetry in other dialogues such as Phaedrus 245a, Symposium 209a, Republic 398a, and Laws 817b -- d. However, some researchers perceive it as a critique of unjustified belief rather than a critique of poetry in general.
the mountain which is known as horst is
Horst (geology) - Wikipedia In physical geography and geology, a horst is a raised fault block bounded by normal faults or graben. A horst is a raised block of the Earth's crust that has lifted, or has remained stationary, while the land on either side has subsided. Horst is Dutch and German for heap. The Vosges Mountains in France and the Black Forest in Germany are examples of horsts, as are the Table, Jura and Dole mountains. The word is also applied to larger areas, such as the Russian Plain, Arabia, India and Central South Africa, where the continent remains stable, with horizontal table-land stratification, in distinction to folded regions such as the Eurasian chains. In many rift basins around the world, the vast majority of discovered hydrocarbons are found in conventional traps associated with horsts. For example, much of the petroleum found in the Sirte Basin, Libya (of the order of tens of billions of barrels of reserves) is found on large horst blocks such as the Zelten Platform and the Dahra Platform and on smaller horsts such as the Gialo High and the Bu-Attifel Ridge.
where does the two finger swear come from
V sign - wikipedia The V sign is a hand gesture in which the index and middle fingers are raised and parted, while the other fingers are clenched. It has various meanings, depending on the cultural context and how it is presented. When displayed with the palm inward toward the signer, it has long been an offensive gesture in some Commonwealth nations. In the 1940s, during the Second World War, a campaign by the Western Allies to use the sign with the back of the hand toward the signer (U + 270C ✌ Victory hand in Unicode) as a "V for Victory '' sign proved quite effective. During the Vietnam War, in the 1960s, the "V sign '' was widely adopted by the counterculture as a symbol of peace. Shortly thereafter, it also became adopted as a gesture used in photographs, especially in Japan. The meaning of the V sign is partially dependent on the manner in which the hand is positioned: The insulting version of the gesture (with the palm inward U + 1F594 🖔 Reversed victory hand) is often compared to the offensive gesture known as "the finger ''. The "two - fingered salute '' (also "the forks '' in Australia) is commonly performed by flicking the V upwards from wrist or elbow. The V sign, when the palm is facing toward the person giving the sign, has long been an insulting gesture in England, and later in the rest of the United Kingdom, Ireland, Australia, India, Pakistan and New Zealand. It is frequently used to signify defiance (especially to authority), contempt, or derision. As an example of the V sign (palm inward) as an insult, on November 1, 1990, The Sun, a British tabloid, ran an article on its front page with the headline "Up Yours, Delors '' next to a large hand making a V sign protruding from a Union Jack cuff. The Sun urged its readers to stick two fingers up at then President of the European Commission, Jacques Delors, who had advocated an EU central government. 
The article attracted a number of complaints about its alleged racism, but the now-defunct Press Council rejected the complaints after the editor of The Sun stated that the paper reserved the right to use vulgar abuse in the interests of Britain. On April 3, 2009, Scottish football players Barry Ferguson and Allan McGregor were permanently banned from the Scottish national squad for showing the V sign while sitting on the bench during the game against Iceland. Both players had been drinking alcohol in their hotel bar until around 11 am the morning after the Scottish defeat to the Netherlands, meaning that they had already breached the SFA discipline code before the incident; but the attitude shown by the V sign was considered so rude that the SFA decided never to include the players in the national line-up again. Ferguson also lost the captaincy of Rangers as a result of the controversy. McGregor's ban was lifted by then SFA manager Craig Levein and he returned to the Scotland national squad in 2010. Steve McQueen gives the sign in the closing scene of the 1971 motorsport movie Le Mans. A still picture of the gesture was recorded by photographer Nigel Snowdon and has become an icon of both McQueen and the film itself. The gesture was also flashed by Spike (played by James Marsters) in "Hush", a Season 4 episode of Buffy the Vampire Slayer. The scene was also featured in the series' opening credits for all of Season 5. It was censored by BBC Two only in its early-evening showings of the program. For a time in the UK, "a Harvey (Smith)" became a way of describing the insulting version of the V sign, much as "the word of Cambronne" is used in France, or "the Trudeau salute" is used to describe the one-fingered salute in Canada. This happened because, in 1971, show-jumper Harvey Smith was disqualified for making a televised V sign to the judges after winning the British Show Jumping Derby at Hickstead.
His win was reinstated two days later. Harvey Smith pleaded that he was using a Victory sign, a defence also used by other figures in the public eye. Sometimes foreigners visiting the countries mentioned above use the "two-fingered salute" without knowing it is offensive to the natives, for example when ordering two beers in a noisy pub, or in the case of the United States president George H.W. Bush, who, while touring Australia in 1992, attempted to give a "peace sign" to a group of farmers in Canberra -- who were protesting about U.S. farm subsidies -- and instead gave the insulting V sign. A commonly repeated legend claims that the two-fingered salute or V sign derives from a gesture made by English and Welsh longbowmen at the Battle of Agincourt (1415) during the Hundred Years' War, but no historical primary sources support this contention. This origin legend states that archers who were captured by the French had their index and middle fingers cut off so that they could no longer operate their longbows, and that the V sign was used by uncaptured and victorious archers in a display of defiance against the enemy. However, it was common practice in the warfare of that period to summarily execute common soldiers, since they held no ransom value. Alternatively, there is evidence against this interpretation, as the chronicler Jean de Wavrin, a contemporary of the battle of Agincourt, reports that the captured archers would have three fingers cut off, not two. Wielding an English longbow requires three fingers, as is the case for modern bows. The first unambiguous evidence of the use of the insulting V sign in the United Kingdom dates to 1901, when a worker outside Parkgate ironworks in Rotherham used the gesture (captured on film) to indicate that he did not like being filmed.
Peter Opie interviewed children in the 1950s and observed in The Lore and Language of Schoolchildren that the much - older thumbing of the nose (cocking a snook) had been replaced by the V sign as the most common insulting gesture used in the playground. Between 1975 and 1977 a group of anthropologists including Desmond Morris studied the history and spread of European gestures and found the rude version of the V - sign to be basically unknown outside the British Isles. In his Gestures: Their Origins and Distribution, published in 1979, Morris discussed various possible origins of this sign but came to no definite conclusion: because of the strong taboo associated with the gesture (its public use has often been heavily penalised). As a result, there is a tendency to shy away from discussing it in detail. It is "known to be dirty '' and is passed on from generation to generation by people who simply accept it as a recognised obscenity without bothering to analyse it... Several of the rival claims are equally appealing. The truth is that we will probably never know... On 14 January 1941, Victor de Laveleye, former Belgian Minister of Justice and director of the Belgian French - language broadcasts on the BBC (1940 -- 44), suggested in a broadcast that Belgians use a V for victoire (French: "victory '') and vrijheid (Dutch: "freedom '') as a rallying emblem during the Second World War. In the BBC broadcast, de Laveleye said that "the occupier, by seeing this sign, always the same, infinitely repeated, (would) understand that he is surrounded, encircled by an immense crowd of citizens eagerly awaiting his first moment of weakness, watching for his first failure. '' Within weeks chalked up Vs began appearing on walls throughout Belgium, the Netherlands and Northern France. Buoyed by this success, the BBC started the "V for Victory '' campaign, for which they put in charge the assistant news editor Douglas Ritchie posing as "Colonel Britton ''. 
Ritchie suggested an audible V using its Morse code rhythm (three dots and a dash). As the rousing opening bars of Beethoven's Fifth Symphony had the same rhythm, the BBC used this as its call-sign in its foreign-language programmes to occupied Europe for the rest of the war. The more musically educated also understood that it was the Fate motif "knocking on the door" of the Third Reich. The BBC also encouraged the use of the V gesture introduced by de Laveleye. By July 1941, the emblematic use of the letter V had spread through occupied Europe. On 19 July, Prime Minister Winston Churchill referred approvingly to the V for Victory campaign in a speech, from which point he started using the V hand sign. Early on he sometimes gestured palm in (sometimes with a cigar between the fingers). Later in the war, he used palm out. After aides explained to the aristocratic Churchill what the palm-in gesture meant to other classes, he made sure to use the appropriate sign. Yet the double entendre of the gesture might have contributed to its popularity, "for a simple twist of hand would have presented the dorsal side in a mocking snub to the common enemy". Other Allied leaders used the sign as well. The Germans could not remove all the signs, so they adopted the V sign as a German symbol, sometimes adding laurel leaves under it, painting their own V's on walls and vehicles, and adding a massive V to the Eiffel Tower.
Resistance graffiti on a Norwegian road, depicting the V-sign together with the initials of King Haakon VII.
The V-sign (and its Morse code equivalent) incorporated on an American propaganda poster for the War Production Board, 1942 or 1943.
A German V-sign and slogan on the Palais Bourbon in occupied Paris. The banner beneath the "V" reads "Germany is Victorious on All Fronts".
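The Morse rhythm for "V" described above (three dots and a dash) can be illustrated with a minimal sketch; the letter encoding is standard International Morse, but the function and names here are illustrative, not from the source:

```python
# International Morse code for the letter V: three dots and a dash,
# matching the rhythm of the opening bars of Beethoven's Fifth Symphony.
MORSE_V = "...-"

def as_rhythm(code: str) -> str:
    """Spell out a Morse string as its spoken dot/dash rhythm."""
    names = {".": "dot", "-": "dash"}
    return " ".join(names[symbol] for symbol in code)

print(as_rhythm(MORSE_V))  # dot dot dot dash
```

This is the rhythm the BBC broadcast as its wartime call-sign to occupied Europe.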
In 1942, Aleister Crowley, a British occultist, claimed to have invented the usage of a V - sign in February 1941 as a magical foil to the Nazis ' use of the Swastika. He maintained that he passed this to friends at the BBC, and to the British Naval Intelligence Division through his connections in MI5, eventually gaining the approval of Winston Churchill. Crowley noted that his 1913 publication Magick featured a V - sign and a swastika on the same plate. U.S. President Richard Nixon used the gesture to signal victory in the Vietnam War, an act which became one of his best - known trademarks. He also used it on his departure from public office following his resignation in 1974. Protesters against the Vietnam War (and subsequent anti-war protests) and counterculture activists adopted the gesture as a sign of peace. Because the hippies of the day often flashed this sign (palm out) while saying "Peace '', it became popularly known (through association) as "the peace sign ''. The V sign, primarily palm - outward, is very commonly made by Japanese people, especially younger people, when posing for informal photographs, and is known as pīsu sain (ピース サイン, peace sign), or more commonly simply pīsu (ピース, peace). As the name reflects, this dates to the Vietnam War era and anti-war activists, though the precise origin is disputed. The V sign was known in Japan from the post-World War II Allied occupation of Japan, but did not acquire the use in photographs until later. In Japan, it is generally believed to have been influenced by Beheiren 's anti-Vietnam War activists in the late 1960s and Konica 's advertisement in 1971. A more colorful account of this practice claims it was influenced by the American figure skater Janet Lynn during the 1972 Winter Olympics in Sapporo, Hokkaidō. She fell during a free - skate period, but continued to smile even as she sat on the ice. 
Though she placed third in the competition, her cheerful diligence and persistence resonated with many Japanese viewers. Lynn became an overnight foreign celebrity in Japan. A peace activist, Lynn frequently flashed the V sign when she was covered in Japanese media, and she is credited by some Japanese for having popularized its use in amateur photographs since the 1970s. In Mainland China, Hong Kong, South Korea, and Taiwan, the V sign is a popular pose in photographs. It is used in both casual and formal settings. For the most part in these countries, the gesture is divorced from its previous meanings as a peace sign or as an insult; for most, the meaning of the sign is "victory" or "yeah", implying a feeling of happiness. It is used in both directions (palm facing the signer and palm facing forward). In certain contexts the sign simply means "two", such as when ordering or boarding a bus. The pose is gaining significant popularity in South Korea due to its common usage among K-pop idols and young people -- especially in selfies. V signing is commonly linked with aegyo, a popular trend in Korea meaning 'acting cutely'. In the United States, the usage of the V sign as a photography gesture is known but not widely used. The original poster for the 2003 film What a Girl Wants showed star Amanda Bynes giving a V sign as an American girl visiting London. In the US, the poster was altered to instead show Bynes with both arms down, to avoid giving the perception that the film was criticizing the then recently commenced Iraq War.
Lech Wałęsa and George H.W. Bush, July 1989.
Singer Rihanna using the V sign, 2011.
2009 Iranian election protests.
An investigator flashes V-for-victory signs upon the 2006 arrival of material gathered by the Stardust spacecraft at the Johnson Space Center in Texas.
8 countries that were invaded by the axis powers
Axis powers - wikipedia The Axis powers (German: Achsenmächte; Italian: Potenze dell'Asse; Japanese: 枢軸 国 Sūjikukoku), also known as the Axis and the Rome -- Berlin -- Tokyo Axis, were the nations that fought in World War II against the Allied forces. The Axis powers agreed on their opposition to the Allies, but did not completely coordinate their activity. The Axis grew out of the diplomatic efforts of Germany, Italy, and Japan to secure their own specific expansionist interests in the mid-1930s. The first step was the treaty signed by Germany and Italy in October 1936. Benito Mussolini declared on 1 November that all other European countries would from then on rotate on the Rome -- Berlin axis, thus creating the term "Axis ''. The almost simultaneous second step was the signing in November 1936 of the Anti-Comintern Pact, an anti-communist treaty between Germany and Japan. Italy joined the Pact in 1937. The "Rome -- Berlin Axis '' became a military alliance in 1939 under the so - called "Pact of Steel '', with the Tripartite Pact of 1940 leading to the integration of the military aims of Germany, Italy and Japan. At its zenith during World War II, the Axis presided over territories that occupied large parts of Europe, North Africa, and East Asia. There were no three - way summit meetings and cooperation and coordination was minimal, with slightly more between Germany and Italy. The war ended in 1945 with the defeat of the Axis powers and the dissolution of their alliance. As in the case of the Allies, membership of the Axis was fluid, with some nations switching sides or changing their degree of military involvement over the course of the war. 
The term "axis '' was first applied to the Italo - German relationship by the Italian prime minister Benito Mussolini in September 1923, when he wrote in the preface to Roberto Suster 's Germania Repubblica that "there is no doubt that in this moment the axis of European history passes through Berlin '' (non v'ha dubbio che in questo momento l'asse della storia europea passa per Berlino). At the time he was seeking an alliance with the Weimar Republic against Yugoslavia and France in the dispute over the Free State of Fiume. The term was used by Hungary 's prime minister Gyula Gömbös when advocating an alliance of Hungary with Germany and Italy in the early 1930s. Gömbös ' efforts did affect the Italo - Hungarian Rome Protocols, but his sudden death in 1936 while negotiating with Germany in Munich and the arrival of Kálmán Darányi, his successor, ended Hungary 's involvement in pursuing a trilateral axis. Contentious negotiations between the Italian foreign minister, Galeazzo Ciano, and the German ambassador, Ulrich von Hassell, resulted in a Nineteen - Point Protocol, signed by Ciano and his German counterpart, Konstantin von Neurath, in 1936. When Mussolini publicly announced the signing on 1 November, he proclaimed the creation of a Rome -- Berlin axis. Italy under Duce Benito Mussolini had pursued a strategic alliance of Italy with Germany against France since the early 1920s. Prior to becoming head of government in Italy as leader of the Italian Fascist movement, Mussolini had advocated alliance with recently defeated Germany after the Paris Peace Conference of 1919 settled World War I. He believed that Italy could expand its influence in Europe by allying with Germany against France. In early 1923, as a goodwill gesture to Germany, Italy secretly delivered weapons for the German Army, which had faced major disarmament under the provisions of the Treaty of Versailles. 
In September 1923, Mussolini offered German Chancellor Gustav Stresemann a "common policy '': he sought German military support against potential French military intervention over Italy 's diplomatic dispute with Yugoslavia over Fiume, should an Italian seizure of Fiume result in war between Italy and Yugoslavia. The German ambassador to Italy in 1924 reported that Mussolini saw a nationalist Germany as an essential ally for Italy against France, and hoped to tap into the desire within the German army and the German political right for a war of revenge against France. During the Weimar Republic, the German government did not respect the Treaty of Versailles that it had been pressured to sign, and various government figures at the time rejected Germany 's post-Versailles borders. General Hans von Seeckt (head of the Reichswehr command from 1920 to 1926) supported an alliance between Germany and the Soviet Union to invade and partition Poland between them and restore the German - Russian border of 1914. Gustav Stresemann as German foreign minister in 1925 declared that the reincorporation of territories lost to Poland and Danzig in the Treaty of Versailles was a major task of German foreign policy. The Reichswehr Ministry memorandum of 1926 declared its intention to seek the reincorporation of German territory lost to Poland as its first priority, to be followed by the return of the Saar territory, the annexation of Austria, and remilitarization of the Rhineland. Since the 1920s Italy had identified the year 1935 as a crucial date for preparing for a war against France, as 1935 was the year when Germany 's obligations under the Treaty of Versailles were scheduled to expire. Meetings took place in Berlin in 1924 between Italian General Luigi Capello and prominent figures in the German military, such as von Seeckt and Erich Ludendorff, over military collaboration between Germany and Italy.
The discussions concluded that Germans still wanted a war of revenge against France but were short on weapons and hoped that Italy could assist Germany. However at this time Mussolini stressed one important condition that Italy must pursue in an alliance with Germany: that Italy "must... tow them, not be towed by them ''. Italian foreign minister Dino Grandi in the early 1930s stressed the importance of "decisive weight '', involving Italy 's relations between France and Germany, in which he recognized that Italy was not yet a major power, but perceived that Italy did have strong enough influence to alter the political situation in Europe by placing the weight of its support onto one side or another. However Grandi stressed that Italy must seek to avoid becoming a "slave of the rule of three '' in order to pursue its interests, arguing that although substantial Italo - French tensions existed, Italy would not unconditionally commit itself to an alliance with Germany, just as it would neither unconditionally commit itself to an alliance with France over conceivable Italo - German tensions. Grandi 's attempts to maintain a diplomatic balance between France and Germany were challenged in 1932 by pressure from the French, who had begun to prepare an alliance with Britain and the United States against the threat of a revanchist Germany. The French government warned Italy that it had to choose whether to be on the side of the pro-Versailles powers or that of the anti-Versailles revanchists. Grandi responded that Italy would be willing to offer France support against Germany if France gave Italy its mandate over Cameroon and allowed Italy a free hand in Ethiopia. France refused Italy 's proposed exchange for support, as it believed Italy 's demands were unacceptable and the threat from Germany was not yet immediate. 
On 23 October 1932, Mussolini declared support for a Four Power Directorate that included Britain, France, Germany, and Italy, to bring about an orderly treaty revision outside of what he considered the outmoded League of Nations. The proposed Directorate was pragmatically designed to reduce French hegemony in continental Europe, in order to reduce tensions between the great powers in the short term, to buy Italy relief from being pressured into a specific war alliance while at the same time allowing it to benefit from diplomatic deals on treaty revisions. In 1932, Gyula Gömbös and the Party of National Unity rose to power in Hungary, and immediately sought an alliance with Italy. Gömbös sought to alter Hungary 's post -- Treaty of Trianon borders, but knew that Hungary alone was not capable of challenging the Little Entente powers, and sought to do so by forming an alliance with Austria and Italy. Mussolini was elated by Gömbös ' offer of alliance with Italy, and they cooperated in seeking to persuade Austrian Chancellor Engelbert Dollfuss to join a tripartite economic agreement with Italy and Hungary. At the meeting between Gömbös and Mussolini in Rome on 10 November 1932, the question came up of the sovereignty of Austria in relation to the predicted rise to power in Germany of the Nazi Party. Mussolini was worried about Nazi ambitions towards Austria, and indicated that at least in the short term he was committed to maintaining Austria as a sovereign state. Italy had concerns over a Germany which included Austria laying land claims to German - populated territories of the South Tyrol (also known as Alto - Adige) within Italy, which bordered Austria on the Brenner Pass. Gömbös responded to Mussolini that as the Austrians primarily identified as Germans, the Anschluss of Austria to Germany was inevitable, and advised that it would be better for Italy to have a friendly Germany across the Brenner Pass than a hostile Germany bent on entering the Adriatic.
Mussolini said he hoped the Anschluss could be postponed as long as possible until the outbreak of a European war, which he estimated would begin in 1938. In 1933, Adolf Hitler and the Nazi Party came to power in Germany. His first diplomatic visitor was Gömbös. In a letter to Hitler within a day of his being appointed Chancellor, Gömbös told the Hungarian ambassador to Germany to remind Hitler "that ten years ago, on the basis of our common principles and ideology, we were in contact via Dr. Scheubner - Richter ''. Gömbös told the Hungarian ambassador to inform Hitler of Hungary 's intentions "for the two countries to cooperate in foreign and economic policy ''. Hitler had advocated an alliance between Germany and Italy since the 1920s. Shortly after being appointed Chancellor, Hitler sent a personal message to Mussolini, declaring "admiration and homage '' and declaring his anticipation of the prospects of German - Italian friendship and even alliance. Hitler was aware that Italy held concerns over potential German land claims on South Tyrol, and assured Mussolini that Germany was not interested in South Tyrol. Hitler in Mein Kampf had declared that South Tyrol was a non-issue considering the advantages that would be gained from a German -- Italian alliance. After Hitler 's rise to power, the Four Power Directorate proposal by Italy had been looked at with interest by Britain, but Hitler was not committed to it, resulting in Mussolini urging Hitler to consider the diplomatic advantages Germany would gain by breaking out of isolation by entering the Directorate and avoiding an immediate armed conflict. The Four Power Directorate proposal stipulated that Germany would no longer be required to have limited arms and would be granted the right to re-armament under foreign supervision in stages. Hitler completely rejected the idea of controlled rearmament under foreign supervision.
Mussolini did not trust Hitler 's intentions regarding Anschluss, nor Hitler 's promise of no territorial claims on South Tyrol. Mussolini informed Hitler that he was satisfied with the presence of the anti-Marxist government of Dollfuss in Austria, and warned Hitler that he was adamantly opposed to Anschluss. Hitler responded contemptuously to Mussolini that he intended "to throw Dollfuss into the sea ''. With this disagreement over Austria, relations between Hitler and Mussolini steadily became more distant. Hitler attempted to break the impasse with Italy over Austria by sending Hermann Göring to negotiate with Mussolini in 1933 to convince Mussolini to press the Austrian government to appoint members of Austria 's Nazis to the government. Göring claimed that Nazi domination of Austria was inevitable and that Italy should accept this, as well as repeating to Mussolini Hitler 's promise to "regard the question of the South Tyrol frontier as finally liquidated by the peace treaties ''. In response to Göring 's visit with Mussolini, Dollfuss immediately went to Italy to counter any German diplomatic headway. Dollfuss claimed that his government was actively challenging Marxists in Austria and claimed that once the Marxists were defeated in Austria, support for Austria 's Nazis would decline. In 1934, Hitler and Mussolini met for the first time, in Venice. The meeting did not proceed amicably. Hitler demanded that Mussolini compromise on Austria by pressuring Dollfuss to appoint Austrian Nazis to his cabinet, a demand which Mussolini flatly refused. In response, Hitler promised that he would accept Austria 's independence for the time being, saying that due to the internal tensions in Germany (referring to sections of the Nazi SA that Hitler would soon kill in the Night of the Long Knives), Germany could not afford to provoke Italy. Galeazzo Ciano told the press that the two leaders had made a "gentleman 's agreement '' to avoid interfering in Austria.
Several weeks after the Venice meeting, on 25 July 1934, Austrian Nazis assassinated Dollfuss. Mussolini was outraged, as he held Hitler directly responsible for the assassination, which violated Hitler 's promise made only weeks earlier to respect Austrian independence. Mussolini rapidly deployed several army divisions and air squadrons to the Brenner Pass, and warned that a German move against Austria would result in war between Germany and Italy. Hitler responded by both denying Nazi responsibility for the assassination and issuing orders to dissolve all ties between the German Nazi Party and its Austrian branch, which Germany claimed was responsible for the political crisis. Italy effectively abandoned diplomatic relations with Germany while turning to France in order to challenge Germany 's intransigence by signing a Franco - Italian accord to protect Austrian independence. French and Italian military staff discussed possible military cooperation involving a war with Germany should Hitler dare to attack Austria. As late as May 1935, Mussolini spoke of his desire to destroy Hitler. Relations between Germany and Italy recovered due to Hitler 's support of Italy 's invasion of Ethiopia in 1935, while other countries condemned the invasion and advocated sanctions against Italy. Interest in forming an alliance between Germany and Japan began when Japanese diplomat Oshima Hiroshi visited Joachim von Ribbentrop in Berlin in 1935. Oshima informed von Ribbentrop of Japan 's interest in forming a German - Japanese alliance against the Soviet Union. Von Ribbentrop expanded on Oshima 's proposal by advocating that the alliance be based in a political context of a pact to oppose the Comintern. The proposed pact was met with mixed reviews in Japan, with a faction of ultra-nationalists within the government supporting the pact while the Japanese Navy and the Japanese Foreign Ministry were staunchly opposed to the pact.
There was great concern in the Japanese government that such a pact with Germany could disrupt Japan 's relations with Britain, endangering years of a beneficial Anglo - Japanese accord, which had allowed Japan to ascend in the international community in the first place. The pact was met with similar division in Germany; while it was popular amongst the upper echelons of the Nazi Party, it was opposed by many in the Foreign Ministry, the Army, and the business community who held financial interests in China, to which Japan was hostile. On learning of German -- Japanese negotiations, Italy also began to take an interest in forming an alliance with Japan. Italy had hoped that, due to Japan 's long - term close relations with Britain, an Italo - Japanese alliance could pressure Britain into adopting a more accommodating stance towards Italy in the Mediterranean. In the summer of 1936, Italian Foreign Minister Ciano informed Japanese Ambassador to Italy, Sugimura Yotaro, "I have heard that a Japanese - German agreement concerning the Soviet Union has been reached, and I think it would be natural for a similar agreement to be made between Italy and Japan ''. Initially Japan 's attitude towards Italy 's proposal was generally dismissive, viewing a German -- Japanese alliance against the Soviet Union as imperative while regarding an Italo - Japanese alliance as secondary, as Japan anticipated that an Italo - Japanese alliance would antagonize Britain, which had condemned Italy 's invasion of Ethiopia. This attitude by Japan towards Italy altered in 1937 after the League of Nations condemned Japan for aggression in China and Japan faced international isolation, while Italy remained favourable to Japan. As a result of Italy 's support for Japan against international condemnation, Japan took a more positive attitude towards Italy and offered proposals for a non-aggression or neutrality pact with Italy.
The "Axis powers '' formally took the name after the Tripartite Pact was signed by Germany, Italy, and Japan on 27 September 1940, in Berlin. The pact was subsequently joined by Hungary (20 November 1940), Romania (23 November 1940), Slovakia (24 November 1940), and Bulgaria (1 March 1941). The Axis powers ' primary goal was territorial expansion at the expense of their neighbors. In ideological terms, the Axis described their goals as breaking the hegemony of the plutocratic Western powers and defending civilization from communism. The Axis championed a number of variants on fascism, militarism, and autarky. The Axis population in 1938 was 258.9 million, while the Allied population (excluding the Soviet Union and the United States, which later joined the Allies) was 689.7 million. Thus the Allied powers outnumbered the Axis powers by 2.7 to 1. The leading Axis states had the following domestic populations: Germany 75.5 million (including 6.8 million from recently annexed Austria), Japan 71.9 million (excluding its colonies), and Italy 43.4 million (excluding its colonies). The United Kingdom (excluding its colonies) had a population of 47.5 million and France (excluding its colonies) 42 million. The wartime gross domestic product (GDP) of the Axis was $911 billion at its highest in 1941 in international dollars by 1990 prices. The GDP of the Allied powers was $1,798 billion. The United States stood at $1,094 billion, more than the Axis combined. The burden of the war upon participating countries has been measured through the percentage of gross national product (GNP) devoted to military expenditures. Nearly one - quarter of Germany 's GNP was committed to the war effort in 1939, and this rose to three - quarters of GNP in 1944, prior to the collapse of the economy. In 1939, Japan committed 22 percent of its GNP to its war effort in China; this rose to three - quarters of GNP in 1944. 
Italy did not mobilize its economy; its GNP committed to the war effort remained at prewar levels. Italy and Japan lacked industrial capacity; their economies were small, dependent on international trade, external sources of fuel and other industrial resources. As a result, Italian and Japanese mobilization remained low, even by 1943. Among the three major Axis powers, Japan had the lowest per capita income, while Germany and Italy had an income level comparable to the United Kingdom. Hitler in 1941 described the outbreak of World War II as the fault of the intervention of Western powers against Germany during its war with Poland, describing it as the result of "the European and American warmongers ''. Hitler denied accusations by the Allies that he wanted a World War, and invoked anti-Semitic claims that the war was wanted and provoked by politicians of Jewish origin or associated with Jewish interests. However Hitler clearly had designs for Germany to become the dominant and leading state in the world, such as his intention for Germany 's capital of Berlin to become the Welthauptstadt ("World Capital ''), renamed Germania. The German government also justified its actions by claiming that Germany inevitably needed to territorially expand because it was facing an overpopulation crisis that Hitler described: "We are overpopulated and can not feed ourselves from our own resources ''. Thus expansion was justified as an inevitable necessity to provide lebensraum ("living space '') for the German nation and end the country 's overpopulation within existing confined territory, and provide resources necessary to its people 's well - being. Since the 1920s, the Nazi Party publicly promoted the expansion of Germany into territories held by the Soviet Union. 
However, from 1939 to 1941, the Nazi regime claimed to have discarded those plans in light of improved relations with the Soviet Union via the Molotov -- Ribbentrop Pact, and claimed that central Africa was where Germany sought to achieve lebensraum. Hitler publicly claimed that Germany wanted to settle the lebensraum issue peacefully through diplomatic negotiations that would require other powers to make concessions to Germany. At the same time however Germany did prepare for war in the cause of lebensraum, and in the late 1930s Hitler emphasized the need for a military build - up to prepare for a potential clash between the peoples of Germany and the Soviet Union. Germany justified its war against Poland on the issues of the German minority within Poland and Polish opposition to the incorporation of the ethnically German - majority Free City of Danzig into Germany. While Hitler and the Nazi Party had openly talked about destroying Poland and were hostile to Poles before taking power, after gaining power and until February 1939 Hitler tried to conceal his true intentions towards Poland, signing a 10 - year Non-Aggression Pact in 1934 and revealing his plans only to his closest associates. Relations between Germany and Poland altered from the early to the late 1930s, as Germany sought rapprochement with Poland to avoid the risk of Poland entering the Soviet sphere of influence, and appealed to anti-Soviet sentiment in Poland. The Soviet Union in turn at this time competed with Germany for influence in Poland. At the same time Germany was preparing for a war with Poland and was secretly preparing the German minority in Poland for a war; since 1935, German intelligence had been smuggling weapons into and gathering them in Polish frontier regions. In November 1938, Germany organized German paramilitary units in the Polish region of Pomerania that were trained to engage in diversion and sabotage, as well as murder and ethnic cleansing, upon a German invasion of Poland.
At the end of 1938, one of the first editions of Sonderfahndungsbuch Polen was printed by the Nazis, containing several thousand names of Poles targeted for execution and imprisonment after an invasion of Poland. From late 1938 to early 1939, Germany in talks with Poland suggested that, as a reward for transferring territories in Pomerania to Germany, Poland could annex Ukrainian territories from the Soviet Union after a war with the Soviet Union. In January 1939, Ribbentrop held negotiations with Józef Beck, the Polish minister of foreign affairs, and Edward Rydz - Śmigły, the commander - in - chief of the Polish Army, in which Ribbentrop urged them to have Poland enter the Anti-Comintern Pact and work together with Germany for a mutual war in the East, whereby Poland would take Slovakia and Ukraine. Ribbentrop in private discussion with German officials stated that he hoped that by offering Poland large new territories in the Soviet Union, Germany would gain not only Polish cooperation in a war with the Soviet Union, but also Poland 's transfer of the Polish Corridor to Germany in exchange for these gains, because though Poland would lose access to the Baltic Sea, it would gain access to the Black Sea via Ukraine. However Beck refused to discuss German demands for the Corridor and was recalcitrant to the idea of a war with the Soviet Union. The Polish government distrusted Hitler and saw the plan as a threat to Polish sovereignty, practically subordinating Poland to the Axis and the Anti-Comintern Bloc while reducing the country to a state of near - servitude, as its entire trade with Western Europe through the Baltic Sea would become dependent on Germany. A diplomatic crisis erupted following Hitler 's demand that the Free City of Danzig be annexed to Germany, as it was led by a Nazi government seeking annexation to Germany.
Germany used legal precedents to justify its intervention against Poland and annexation of the Free City of Danzig (led by a local Nazi government that sought incorporation into Germany) in 1939. Germany noted one such violation as being in 1933, when Poland sent additional troops into the city in violation of the limit of Polish troops admissible to Danzig as agreed to by treaty. Hitler believed that Poland could be pressured to cede claimed territory through diplomatic means combined with the threat of military force, and believed that Germany could gain such concessions from Poland without provoking a war with Britain or France. Hitler believed that Britain 's guarantee of military support to Poland was a bluff, particularly after Germany and the Soviet Union reached an agreement recognizing both countries ' mutual interests involving Poland. The Soviet Union had held diplomatic grievances with Poland since the Soviet - Polish War of 1919 -- 1921, in which, after intense fighting over the territories, the Soviets had agreed that north - eastern Poland, Western Belarus and Western Ukraine would become part of the restored Polish state; the Soviet Union sought to regain those territories. Poland rejected Germany 's demands, and Germany in response prepared a general mobilization on the morning of 30 August 1939. Hitler believed that one of two outcomes would occur. The first was that the British would accept Germany 's demands and pressure Poland to agree to them. The second was that a conflict with Poland would be an isolated conflict, as Britain would not engage in a war with both Germany and the Soviet Union. At midnight 30 August 1939, German foreign minister Joachim Ribbentrop was expecting the arrival of the British ambassador Nevile Henderson as well as a Polish plenipotentiary to negotiate terms with Germany. Only Henderson arrived, and he informed Ribbentrop that no Polish plenipotentiary was arriving.
Ribbentrop became extremely upset and demanded the immediate arrival of a Polish diplomat, informing Henderson that the situation was "damned serious! ''. He read out to Henderson Germany 's demands: that Poland accept Germany annexing Danzig; that Poland grant Germany the right to improve the connection of East Prussia 's infrastructure to mainland Germany by building an extraterritorial highway and railway through Polish Gdansk Pomerania; and that a plebiscite determine whether the Polish Corridor, which had a mixed composition of ethnic Poles and ethnic Germans, should remain within Poland or be transferred to Germany. Germany justified its invasion of the Low Countries of Belgium, Luxembourg, and the Netherlands in May 1940 by claiming that it suspected that Britain and France were preparing to use the Low Countries to launch an invasion of the industrial Ruhr region of Germany. When war between Germany on one side and Britain and France on the other appeared likely in May 1939, Hitler declared that the Netherlands and Belgium would need to be occupied, saying: "Dutch and Belgian air bases must be occupied... Declarations of neutrality must be ignored ''. In a conference with Germany 's military leaders on 23 November 1939, Hitler declared to the military leaders that "We have an Achilles heel, the Ruhr '', and said that "If England and France push through Belgium and Holland into the Ruhr, we shall be in the greatest danger '', and thus claimed that Belgium and the Netherlands had to be occupied by Germany to protect Germany from a British - French offensive against the Ruhr, irrespective of their claims to neutrality. In April 1941, shortly after Germany and Yugoslavia completed negotiations for Yugoslavia to join the Axis, a coup d'état occurred in Yugoslavia that led to the Axis invasion of Yugoslavia.
Germany needed access to the territory held by Yugoslavia to give German forces a direct route by which to reach and rescue the Italian military forces that were faltering in their campaign in Greece. There was substantial animosity towards the alliance amongst Serbs, Yugoslavia 's largest ethnic group, who had fought German Austrians and Germany on the side of the Allies in World War I, and three Serb cabinet ministers resigned their positions in protest after the alliance was signed. Hitler initially attempted to be conciliatory to the Serbs who held animosity to the agreement, saying that he "understood the feelings '' of those Serbs who opposed the alliance. Amidst the negotiations, Hitler expressed concern to Italian foreign minister Ciano that he sensed trouble coming in Belgrade. A coup d'état then occurred in Yugoslavia, in which a new government rose to power and abandoned the country 's association with the Axis. Hitler accused the British of engineering the coup. The coup was at least partly supported by the British, though there was also substantial patriotic enthusiasm against the Pact, with rallies in Belgrade. At the rallies in Belgrade immediately after the coup, people were heard to be shouting "Better war than pact! '' and waving British, American, and French flags. Days after the coup d'état, Hitler ordered the German General Staff to plan for an invasion of Yugoslavia. Germany 's invasion of the Soviet Union in 1941 involved issues of lebensraum, anti-communism, and Soviet foreign policy. Hitler in his early years as Nazi leader had claimed that he would be willing to accept friendly relations with Russia on the tactical condition that Russia agree to return to the borders established by the German -- Russian peace agreement of the Treaty of Brest - Litovsk signed by Vladimir Lenin of the Russian Soviet Federated Socialist Republic in 1918, which gave large territories held by Russia to German control in exchange for peace.
Hitler in 1921 had commended the Treaty of Brest - Litovsk as opening the possibility for restoration of relations between Germany and Russia, saying: "Through the peace with Russia the sustenance of Germany as well as the provision of work were to have been secured by the acquisition of land and soil, by access to raw materials, and by friendly relations between the two lands ''. From 1921 to 1922 Hitler evoked rhetoric of both the achievement of lebensraum, involving the acceptance of a territorially reduced Russia, and support for Russian nationals in overthrowing the Bolshevik government and establishing a new Russian government. However Hitler 's attitudes had changed by the end of 1922, by which point he supported an alliance of Germany with Britain to destroy Russia. Later Hitler declared how far into Russia he intended to expand Germany: "Asia, what a disquieting reservoir of men! The safety of Europe will not be assured until we have driven Asia back behind the Urals. No organized Russian state must be allowed to exist west of that line ''. Policy for lebensraum planned mass expansion of Germany 's borders as far eastwards as the Ural Mountains. Hitler planned for the "surplus '' Russian population living west of the Urals to be deported to the east of the Urals. After Germany invaded the Soviet Union in 1941, the Nazi regime 's stance towards an independent, territorially - reduced Russia was affected by pressure beginning in 1942 from the German Army on Hitler to endorse a Russian national liberation army led by Andrey Vlasov that officially sought to overthrow Joseph Stalin and the communist regime and establish a new Russian state. Initially the proposal to support an anti-communist Russian army was met with outright rejection by Hitler; however, by 1944, as Germany faced mounting losses on the Eastern Front, Vlasov 's forces were recognized by Germany as an ally, particularly by Reichsführer - SS Heinrich Himmler.
After the Molotov -- Ribbentrop Pact was signed, Molotov arrived in Berlin in 1940 on a diplomatic visit, during which Ribbentrop stated that Germany was directing its lebensraum southward. Ribbentrop described to Molotov that further extension of Germany 's lebensraum was now going to be founded in Central Africa, and suggested that Germany would accept the Soviet Union taking part in the partitioning of the British Empire upon a British defeat in the war. Germany and the Soviet Union in 1940 were in dispute over their respective influences in the Balkans, Bulgaria, the Danube and the Turkish Straits. The Soviet seizure of Bessarabia from Romania in June 1940 placed the Soviet -- Romanian frontier dangerously close to Romania 's oil fields in Ploiești, on which Germany depended for oil to support its war effort. When negotiations with Molotov led to no resolution, Hitler determined that Britain was only continuing to fight in hope of Soviet intervention, and that therefore the defeat of the Soviet Union would result in the defeat of Britain; in July 1940 he began planning for a possible invasion of the Soviet Union. After the Japanese attack on Pearl Harbor and the outbreak of war between Japan and the United States, Germany supported Japan by declaring war on the US. During the war Germany denounced the Atlantic Charter and the Lend - Lease Act, which the US adopted to support the Allied powers prior to entry into the alliance, as imperialism directed at dominating and exploiting countries outside of the continental Americas. Hitler denounced American President Roosevelt 's invoking of the term "freedom '' to describe US actions in the war, and claimed that the American meaning of "freedom '' was the freedom for democracy to exploit the world and the freedom for plutocrats within such a democracy to exploit the masses.
At the end of World War I, German citizens felt that their country had been humiliated by the Treaty of Versailles, which included a war guilt clause and forced Germany to pay enormous reparations and to forfeit territories formerly controlled by the German Empire, as well as all its colonies. The pressure of the reparations on the German economy led to hyperinflation during the early 1920s. In 1923 the French occupied the Ruhr region when Germany defaulted on its reparations payments. Although Germany began to recover economically in the mid-1920s, the Great Depression created renewed economic hardship and a rise in political forces that advocated radical solutions to Germany's woes. The Nazis, under Hitler, promoted the nationalist stab-in-the-back legend, which held that Germany had been betrayed by Jews and Communists. The party promised to rebuild Germany as a major power and to create a Greater Germany that would include Alsace-Lorraine, Austria, the Sudetenland, and other German-populated territories in Europe. The Nazis also aimed to occupy and colonize non-German territories in Poland, the Baltic states, and the Soviet Union, as part of the Nazi policy of seeking Lebensraum ("living space") in eastern Europe. Germany renounced the Versailles treaty and remilitarized the Rhineland in March 1936; it had already resumed conscription and announced the existence of a German air force, the Luftwaffe, and a naval force, the Kriegsmarine, in 1935. Germany annexed Austria in 1938, the Sudetenland from Czechoslovakia the same year, and the Memel territory from Lithuania in 1939. Germany then invaded the rest of Czechoslovakia in 1939, creating the Protectorate of Bohemia and Moravia and the state of Slovakia. On 23 August 1939, Germany and the Soviet Union signed the Molotov–Ribbentrop Pact, which contained a secret protocol dividing eastern Europe into spheres of influence.
Germany's invasion of its agreed portion of Poland under the Pact eight days later triggered the beginning of World War II. By the end of 1941, Germany occupied a large part of Europe and its military forces were fighting the Soviet Union, nearly capturing Moscow. However, crushing defeats at the Battle of Stalingrad and the Battle of Kursk devastated the German armed forces. This, combined with Western Allied landings in France and Italy, led to a three-front war that depleted Germany's armed forces and resulted in Germany's defeat in 1945. There was substantial internal opposition within the German military to the Nazi regime's aggressive strategy of rearmament and foreign policy in the 1930s. From 1936 to 1938, Germany's top four military leaders, Ludwig Beck, Werner von Blomberg, Werner von Fritsch, and Walther von Reichenau, were all in opposition to the rearmament strategy and foreign policy. They criticized the hurried nature of rearmament, the lack of planning, Germany's insufficient resources to carry out a war, the dangerous implications of Hitler's foreign policy, and the increasing subordination of the army to the Nazi Party. These four military leaders were outspoken and public in their opposition to these tendencies. The Nazi regime responded with contempt, and Nazi members fabricated scandals against the two top army leaders in order to pressure them to resign: von Blomberg was compromised over his new wife's past, and von Fritsch was falsely accused of homosexuality. Though the scandals were started by lower-ranking Nazi members, Hitler took advantage of them by forcing von Blomberg and von Fritsch to resign and replacing them with opportunists who were subservient to him. Shortly afterwards, on 4 February 1938, Hitler announced that he was taking personal command of Germany's military through the new High Command of the Armed Forces, with the Führer as its head.
Opposition within the military to the Nazi regime's aggressive foreign policy became so strong from 1936 to 1938 that overthrowing the regime was discussed within the upper echelons of the military and among the remaining non-Nazi members of the German government. Minister of Economics Hjalmar Schacht met with Beck in 1936, declaring that he was considering an overthrow of the Nazi regime and inquiring what the German military's stance would be on supporting one. Beck was lukewarm to the idea, responding that if a coup against the Nazi regime began with support at the civilian level, the military would not oppose it. Schacht considered this promise inadequate, because he knew that without the support of the army any coup attempt would be crushed by the Gestapo and the SS. By 1938, however, Beck had become a firm opponent of the Nazi regime out of his opposition to Hitler's military plans of 1937–38, which directed the military to prepare for the possibility of a world war resulting from German plans to annex Austria and Czechoslovakia. The Protectorate of Bohemia and Moravia was created from the dismemberment of Czechoslovakia. Shortly after Germany annexed the Sudetenland region of Czechoslovakia, Slovakia declared its independence, and the new Slovak State allied itself with Germany. The remainder of the country was occupied by German military forces and organized into the Protectorate. Czech civil institutions were preserved, but the Protectorate was considered part of the sovereign territory of Germany. The General Government was the name given to the territories of occupied Poland that were not directly annexed into German provinces; like Bohemia and Moravia, it was considered part of the sovereign territory of Germany. Belgium quickly surrendered to Germany, and the Belgian King remained in the country during the German military occupation from 1940 to 1944.
The Belgian King cooperated closely with Germany and repeatedly sought assurances that Belgian rights would be retained once Germany achieved total victory. Hitler, however, intended to annex Belgium and its Germanic population into the Greater Germanic Reich, a process initiated by the creation of Reichskommissariat Belgien, an authority run directly by the German government that sought the incorporation of the territory into the planned Germanic Reich. Belgium was nevertheless retaken by Allied forces in 1944. Reichskommissariat Niederlande was an occupation authority and territory established in the Netherlands in 1940, designated as a colony to be incorporated into the planned Greater Germanic Reich. Reichskommissariat Norwegen was established in Norway in 1940; like the Reichskommissariats in Belgium and the Netherlands, its Germanic peoples were to be incorporated into the Greater Germanic Reich. In Norway, the Quisling regime, headed by Vidkun Quisling, was installed by the Germans as a client regime during the occupation, while King Haakon VII and the legal government remained in exile. Quisling encouraged Norwegians to serve as volunteers in the Waffen-SS, collaborated in the deportation of Jews, and was responsible for the executions of members of the Norwegian resistance movement. About 45,000 Norwegian collaborators joined the pro-Nazi party Nasjonal Samling (National Union), and some police units helped arrest many Jews. Norway was nevertheless one of the first countries where resistance during World War II was widespread, even before the turning point of the war in 1943. After the war, Quisling and other collaborators were executed, and Quisling's name has become an international eponym for traitor. Reichskommissariat Ostland was established in the Baltic region in 1941.
Unlike the western Reichskommissariats, which sought the incorporation of their majority Germanic peoples, Ostland was designed for settlement by Germans who would displace the non-Germanic majority living there, as part of lebensraum. Reichskommissariat Ukraine was established in Ukraine in 1941; like Ostland, it was slated for settlement by Germans. The Military Administration in Serbia was established on occupied Yugoslav territory in April 1941, following the invasion of the country. On 30 April a pro-German Serbian administration was formed under Milan Aćimović to serve as a civil administration in the military occupation zone. A joint Partisan and Chetnik uprising in late 1941 became a serious concern for the Germans, as most of their forces were deployed to Russia; only three divisions were in the country. On 13 August, 546 Serbs, including some of the country's prominent and influential leaders, issued an appeal to the Serbian nation that condemned the Partisan and royalist resistance as unpatriotic. Two weeks after the appeal, with the Partisan and royalist insurgency gaining momentum, 75 prominent Serbs convened a meeting in Belgrade and formed a Government of National Salvation under Serbian General Milan Nedić to replace the existing Serbian administration. The Germans were short of police and military forces in Serbia, and came to rely on poorly armed Serbian formations, the Serbian State Guard and the Serbian Volunteer Corps, to maintain order. These forces, however, were not able to contain the resistance, and for most of the war large parts of Serbia were under the control of the Partisans or the Chetniks (the two resistance movements soon became mutually hostile). The Government of National Salvation, granted few powers upon its formation, saw its functions further reduced and taken over by the Wehrmacht occupation authorities as the war progressed.
After the initial mass revolts, the German authorities instituted an extreme regime of reprisals, proclaiming that 100 civilians would be executed for every German soldier killed and 50 for each one wounded. These measures were implemented on more than one occasion: large-scale shootings took place in the Serbian towns of Kraljevo and Kragujevac during October 1941. Duce Benito Mussolini justified Italy's declaration of war against the Western Allies of Britain and France in June 1940 as follows: "We are going to war against the plutocratic and reactionary democracies of the West who have invariably hindered the progress and often threatened the very existence of the Italian people". Italy condemned the Western powers for enacting sanctions on Italy in 1935 for its actions in the Second Italo-Ethiopian War, which Italy claimed was a response to an act of Ethiopian aggression against tribesmen in Italian Eritrea in the Walwal incident of 1934. Italy, like Germany, justified its actions by claiming that it needed to expand territorially to provide spazio vitale ("vital space") for the Italian nation. In October 1938, in the aftermath of the Munich Agreement, Italy demanded concessions from France: a free port at Djibouti, control of the Addis Ababa–Djibouti railroad, Italian participation in the management of the Suez Canal Company, some form of French–Italian condominium over Tunisia, and the preservation of Italian culture in French-held Corsica with no French assimilation of the people. Italy opposed the French monopoly over the Suez Canal because, under the French-dominated Suez Canal Company, all Italian merchant traffic to its colony of Italian East Africa was forced to pay tolls upon entering the canal.
Mussolini hoped that, in light of Italy's role in settling the Munich Agreement that prevented the outbreak of war, Britain would react by putting pressure on France to yield to Italy's demands in order to preserve the peace. France refused to accept Italy's demands, as it was widely suspected that Italy's true intention was the territorial acquisition of Nice, Corsica, Tunisia, and Djibouti rather than the milder official demands put forth. Relations between Italy and France deteriorated with France's refusal, and France responded to Italy's demands with threatening naval maneuvers as a warning to Italy. As tensions between Italy and France grew, Hitler made a major speech on 30 January 1939 in which he promised German military support in the case of an unprovoked war against Italy. Italy justified its intervention against Greece in October 1940 on the allegation that Greece was being used by Britain against Italy; Mussolini informed Hitler of this, saying: "Greece is one of the main points of English maritime strategy in the Mediterranean". Italy justified its intervention against Yugoslavia in April 1941 by appealing both to Italian irredentist claims and to the fact that Albanian, Croatian, and Macedonian separatists did not wish to be part of Yugoslavia. Croatian separatism soared after the assassination of Croatian political leaders in the Yugoslav parliament in 1928, including the death of Stjepan Radić, and Italy endorsed the Croatian separatist Ante Pavelić and his fascist Ustaše movement, which was based and trained in Italy with the Fascist regime's support prior to the intervention against Yugoslavia. In the late 19th century, after Italian unification, a nationalist movement had grown around the concept of Italia irredenta, which advocated the incorporation into Italy of Italian-populated areas still under foreign rule.
There was a desire to annex Dalmatian territories, which had formerly been ruled by the Venetians and which consequently had Italian-speaking elites. The intention of the Fascist regime was to create a "New Roman Empire" in which Italy would dominate the Mediterranean. In 1935–1936 Italy invaded and annexed Ethiopia, and the Fascist government proclaimed the creation of the "Italian Empire". Protests by the League of Nations, especially the British, who had interests in that area, led to no serious action; the League did attempt to enforce economic sanctions upon Italy, but to no avail. The incident highlighted French and British weakness, exemplified by their reluctance to alienate Italy and lose her as an ally. The limited actions taken by the Western powers pushed Mussolini's Italy towards alliance with Hitler's Germany anyway. In 1937 Italy left the League of Nations and joined the Anti-Comintern Pact, which had been signed by Germany and Japan the preceding year. In March–April 1939 Italian troops invaded and annexed Albania, and Germany and Italy signed the Pact of Steel on 22 May. Italy entered World War II on 10 June 1940. In September 1940 Germany, Italy, and Japan signed the Tripartite Pact. Italy was ill-prepared for war, in spite of the fact that it had been continuously involved in conflict since 1935, first with Ethiopia in 1935–1936 and then in the Spanish Civil War on the side of Francisco Franco's Nationalists. Mussolini refused to heed warnings from his minister of exchange and currency, Felice Guarneri, who said that Italy's actions in Ethiopia and Spain meant that Italy was on the verge of bankruptcy. By 1939 military expenditures by Britain and France far exceeded what Italy could afford.
As a result of Italy's economic difficulties its soldiers were poorly paid, and often poorly equipped and poorly supplied, and animosity arose between soldiers and class-conscious officers; these factors contributed to low morale amongst Italian soldiers. Military planning was deficient, as the Italian government had not decided which theatre would be the most important. Power over the military was overcentralized under Mussolini's direct control; he personally undertook to direct the ministry of war, the navy, and the air force. The navy had no aircraft carriers to provide air cover for amphibious assaults in the Mediterranean, as the Fascist regime believed that air bases on the Italian Peninsula could perform this task. Italy's army had outmoded artillery, and its armoured units used outdated formations not suited to modern warfare. The diversion of funds to the air force and navy to prepare for overseas operations meant less money was available for the army; the standard rifle was a design dating back to 1891. The Fascist government failed to learn from mistakes made in Ethiopia and Spain; it ignored the implications of the Italian Fascist volunteer soldiers being routed at the Battle of Guadalajara in the Spanish Civil War. Military exercises by the army in the Po Valley in August 1939 disappointed onlookers, including King Victor Emmanuel III. Mussolini, angered by Italy's military unpreparedness, dismissed Alberto Pariani as Chief of Staff of the Italian military in 1939. Italy's only abundant strategic natural resource was aluminum; petroleum, iron, copper, nickel, chrome, and rubber all had to be imported. The Fascist government's economic policy of autarky and recourse to synthetic materials could not meet the demand. Prior to entering the war, the Fascist government sought to gain control over resources in the Balkans, particularly oil from Romania.
The agreement between Germany and the Soviet Union to invade and partition Poland between them left both Hungary, which bordered the Soviet Union after Poland's partition, and Romania viewing Soviet invasion as an immediate threat, and both countries appealed to Italy for support beginning in September 1939. Italy, then still officially neutral, responded to the appeals of the Hungarian and Romanian governments for protection from the Soviet Union by proposing a Danube–Balkan bloc of neutrals. The proposed bloc was designed to increase Italian influence in the Balkans: it met resistance from France, Germany, and the Soviet Union, which did not want to lose their influence in the Balkans; however Britain, believing that Italy would not enter the war on Germany's side, supported the neutral bloc. The efforts to form the bloc failed by November 1939, after Turkey made an agreement to protect Allied Mediterranean territory, along with Greece and Romania. Upon the outbreak of war between Germany and the Allies, Mussolini initially pursued a non-belligerent role for Italy out of concern that Germany might not win its war with the Allies. In private, however, Mussolini grew anxious that Italy's failure to intervene in support of Germany in September 1939, when Britain and France declared war on Germany, would eventually result in German retribution if Italy stayed out of the war. By early 1940, Italy was still a non-belligerent, and Mussolini communicated to Hitler that Italy was not prepared to intervene soon. By March 1940, Mussolini had decided that Italy would intervene, but the date was not yet chosen. His senior military leadership unanimously opposed the action because Italy was unprepared.
No raw materials had been stockpiled, and the reserves Italy did have would soon be exhausted; Italy's industrial base was only one-tenth the size of Germany's, and even with supplies the Italian military was not organized to provide the equipment needed to fight a modern war of long duration. An ambitious rearmament program was impossible because of Italy's limited reserves of gold and foreign currency and its lack of raw materials. Mussolini ignored the negative advice. An April 1938 report by the German Naval High Command (OKM) warned that Italy as a combatant ally would be a serious "burden" to Germany if a war between Germany and Britain occurred, and recommended that Germany instead seek to have Italy remain a "benevolent neutral" during the war. On 18 March 1940, Hitler told Mussolini in person that the war would be over by the summer and that Italy's military involvement was not required. On 29 May 1940 Mussolini discussed the situation of the Italian Army, acknowledging that it was not ideal but deeming it satisfactory, and discussed the timeline for a declaration of war on Britain and France. He said: "a delay of two weeks or a month would not be an improvement, and Germany could think we entered the war when the risk was very small... And this could be a burden on us when peace comes." After entering the war in 1940, Italy was slated to be granted a series of territorial concessions from France that Hitler had agreed to with Italian foreign minister Ciano, including Italian annexation of claimed territories in southeastern France, a military occupation of southeastern France up to the river Rhone, and receipt of the French colonies of Tunisia and Djibouti.
However, on 22 June 1940, Mussolini suddenly informed Hitler that Italy was abandoning its claims "in the Rhone, Corsica, Tunisia, and Djibouti", instead requesting a demilitarized zone along the French border, and on 24 June Italy agreed to an armistice with the Vichy regime to that effect. On 7 July 1940 the Italian government changed its decision, and Ciano attempted to reach an agreement with Hitler to have Nice, Corsica, Tunisia, and Djibouti transferred to Italy; Hitler adamantly rejected any new settlement or separate French–Italian peace agreement prior to the defeat of Britain in the war. Italy nevertheless continued to press Germany for the incorporation of Nice, Corsica, and Tunisia into Italy. Mussolini sent a letter to Hitler in October 1940, informing him that, as the 850,000 Italians living within France's current borders formed the largest minority community there, ceding these territories to Italy would benefit both Germany and Italy: it would reduce France's population from 35 million to 34 million and forestall any resumption of French ambitions for expansion or hegemony in Europe. In December 1940, with the proposed Operation Attila, Germany considered invading and occupying the unoccupied territories of Vichy France, including Corsica, and capturing the Vichy French fleet for German use. An invasion of Vichy France by Germany and Italy eventually took place as Case Anton in November 1942. In mid-1940, in response to an agreement by Romanian Conducător Ion Antonescu to accept German "training troops" being sent to Romania, both Mussolini and Stalin were angered by Germany's expanding sphere of influence in Romania, especially because neither had been informed of the action in advance, in spite of Germany's agreements with Italy and the Soviet Union at that time.
Mussolini, in a conversation with Ciano, responded to Hitler's deployment of troops into Romania by saying: "Hitler always faces me with accomplished facts. Now I'll pay him back by his same currency. He'll learn from the papers that I have occupied Greece. So the balance will be re-established." However, Mussolini later decided to inform Hitler in advance of Italy's designs on Greece. Upon hearing of Italy's intervention against Greece, Hitler was deeply concerned, saying that the Greeks were not bad soldiers and that Italy might not win its war with Greece; he did not want Germany to become embroiled in a Balkan conflict. By 1941, Italy's attempt to run a campaign autonomous from Germany's had collapsed as a result of military setbacks in Greece, North Africa, and East Africa, and the country became dependent on and effectively subordinate to Germany. After the German-led invasion and occupation of Yugoslavia and Greece, both of which had been targets of Italy's war aims, Italy was forced to accept German dominance in the two occupied countries. Furthermore, by 1941, German forces in North Africa under Erwin Rommel effectively took charge of the military effort to oust Allied forces from the Italian colony of Libya, and German forces were stationed in Sicily that year. Germany's insolence towards Italy as an ally was demonstrated that year when Italy was pressured to send 350,000 "guest workers" to Germany, who were used as forced labour. While Hitler was disappointed with the Italian military's performance, he maintained generally favorable relations with Italy because of his personal friendship with Mussolini. By mid-1941 Mussolini recognized that Italy's war objectives had failed, and he believed that Italy, in such a subordinate status, was left with no choice other than to follow Germany in its war and hope for a German victory.
However, Germany supported Italian propaganda calling for the creation of a "Latin Bloc" of Italy, Vichy France, Spain, and Portugal to ally with Germany against the threat of communism, and after the German invasion of the Soviet Union the prospect of a Latin Bloc seemed plausible. From 1940 to 1941, Francisco Franco of Spain had endorsed a Latin Bloc of Italy, Vichy France, Spain, and Portugal in order to balance those countries' power against Germany's; however, the discussions failed to yield an agreement. After the invasion and occupation of Yugoslavia, Italy annexed numerous Adriatic islands and a portion of Dalmatia, which was formed into the Italian Governorship of Dalmatia, including territory from the provinces of Spalato, Zara, and Cattaro. Though Italy initially had larger territorial aims extending from the Velebit mountains to the Albanian Alps, Mussolini decided against annexing further territories for a number of reasons: Italy already held the economically valuable portion of the territory, the northern Adriatic coast had no important railways or roads, and a larger annexation would have brought hundreds of thousands of Slavs hostile to Italy within its national borders. Mussolini and foreign minister Ciano demanded that the Yugoslav region of Slovenia be directly annexed into Italy; however, in negotiations with German foreign minister Ribbentrop in April 1941, Ribbentrop insisted on Hitler's demand that Germany be allocated eastern Slovenia while Italy would be allocated western Slovenia. Italy conceded to this German demand, and Slovenia was partitioned between Germany and Italy. With the commencement of the Allies' Operation Torch against Vichy French-held Morocco and Algeria, Germany and Italy intervened in Vichy France and in Vichy French-held Tunisia.
Italy seized military control over a significant portion of southern France and Corsica, while a joint German–Italian force seized control over most of Tunisia. When the issue of sovereign control over Tunisia arose from the German–Italian seizure of the territory from Vichy French control, Ribbentrop proclaimed Italian predominance in Tunisia. However, in spite of Germany's professed respect for Italian predominance, Germans supervised public services and local government in Tunisia, and the German presence was more popular there with both the local Arab population and Vichy French collaborators, since Germany had no imperial aspirations in Tunisia while Italy did. Internal opposition by Italians to the war and the Fascist regime accelerated by 1942, though significant opposition to the war had existed at the outset in 1940; police reports indicated that many Italians were secretly listening to the BBC rather than Italian media in 1940. Underground Catholic, Communist, and socialist newspapers became prominent by 1942. In spring 1941, Victor Emmanuel III visited Italian soldiers at the front in Yugoslavia and Albania. He was dismayed by the Fascist regime's brutal imperialism in Dalmatia, Slovenia, and Montenegro, because he suspected it would impose impossible burdens on Italy by creating new enemies among the occupied peoples whom Italy would be forced to fight. Victor Emmanuel was disappointed with the Italian military's performance in the war, noting that the army, navy, and air force could not drop their mutual jealousies and competition to work together. Furthermore, he feared that overly ambitious generals seeking promotion were attempting to persuade Mussolini to divert military resources into an ever-widening field of action.
In June 1941, Mussolini decided to follow Germany in waging war on the Soviet Union; Victor Emmanuel was informed only at the last moment, leaving him time only to advise Mussolini against sending anything more than a token force against the Soviet Union. His advice was not taken. A few weeks after Italy's declaration of war against the Soviet Union, a senior general of the Carabinieri informed the royal palace that the military police were awaiting a royal order to act against the Fascist regime. In September 1941, Victor Emmanuel held a private discussion with Ciano, in which Ciano told the King that Fascism was doomed. In 1942, opposition to Italy's involvement in the war expanded among the Fascist regime's senior officials: Giuseppe Bottai stated in private that he and other Fascist officials should have resigned from office when Mussolini declared war on Britain and France in June 1940, while Dino Grandi approached the King, urging him to dismantle Mussolini's dictatorship in order to withdraw Italy from the war, as he saw Italy facing ruin. By January 1943, King Victor Emmanuel III was persuaded by the Minister of the Royal Household, the Duke of Acquarone, that Mussolini had to be removed from office. In March 1943, the first sign of serious rebellion by Italians against the Fascist regime and the war appeared with a strike by factory workers, who were joined by soldiers singing communist songs and even by rank-and-file Fascist party members. The Fascist regime also faced passive resistance from civil servants, who had begun to refuse to obey orders or merely pretend to obey them. On 25 July 1943, following the Allied invasion of Sicily, King Victor Emmanuel III dismissed Mussolini, placed him under arrest, and began secret negotiations with the Western Allies. An armistice was signed on 8 September 1943, and Italy joined the Allies as a co-belligerent.
On 12 September 1943, Mussolini was rescued by the Germans in Operation Oak and placed in charge of a puppet state called the Italian Social Republic (Repubblica Sociale Italiana / RSI, or Repubblica di Salò) in northern Italy. The war went on for months as the Allies, the Italian Co-Belligerent Army, and the partisans fought the Social Republic's forces and its German allies. Some areas in northern Italy were liberated from the Germans as late as May 1945. Mussolini was killed by Communist partisans on 28 April 1945 while trying to escape to Switzerland. The Dodecanese Islands were an Italian dependency from 1912 to 1943. Montenegro was an Italian dependency from 1941 to 1943, known as the Governorate of Montenegro, under the control of an Italian military governor. Initially, the Italians intended that Montenegro would become an "independent" state closely allied with Italy, reinforced through the strong dynastic links between Italy and Montenegro, as Queen Elena of Italy was a daughter of the last Montenegrin king, Nicholas I. The Italian-backed Montenegrin nationalist Sekula Drljević and his followers attempted to create a Montenegrin state, and on 12 July 1941 they proclaimed the "Kingdom of Montenegro" under the protection of Italy. In less than 24 hours this triggered a general uprising against the Italians, and within three weeks the insurgents managed to capture almost all the territory of Montenegro. Over 70,000 Italian troops and 20,000 Albanian and Muslim irregulars were deployed to suppress the rebellion. Drljević was expelled from Montenegro in October 1941, and Montenegro then came under full, direct Italian control. With the Italian capitulation of 1943, Montenegro came directly under the control of Germany. Albania was an Italian protectorate and dependency from 1939 to 1943.
In spite of Albania's long-standing protection by and alliance with Italy, on 7 April 1939 Italian troops invaded Albania, five months before the start of the Second World War. Following the invasion, Albania became a protectorate under Italy, with King Victor Emmanuel III of Italy being awarded the crown of Albania, and an Italian governor controlled the country. Albanian troops under Italian control were sent to participate in the Italian invasion of Greece and the Axis occupation of Yugoslavia, and following Yugoslavia's defeat, Kosovo was annexed to Albania by the Italians. Politically and economically dominated by Italy from its creation in 1913, Albania had been occupied by Italian military forces in 1939 as the Albanian king Zog I fled the country with his family. The Albanian parliament voted to offer the Albanian throne to the King of Italy, resulting in a personal union between the two countries. The Albanian army, having been trained by Italian advisors, was reinforced by 100,000 Italian troops, and a Fascist militia was organized, drawing its strength principally from Albanians of Italian descent. Albania served as the staging area for the Italian invasions of Greece and Yugoslavia, and it annexed Kosovo in 1941 when Yugoslavia was dissolved, creating a Greater Albania. Albanian troops were dispatched to the Eastern Front to fight the Soviets as part of the Italian Eighth Army, and Albania declared war on the United States in 1941. When the Fascist regime of Italy fell in September 1943, Albania fell under German occupation. Italian East Africa was an Italian colony existing from 1936 to 1943; prior to the invasion and annexation of Ethiopia into this united colony in 1936, Italy had held two colonies there, Eritrea and Somalia, since the 1880s. Libya was an Italian colony existing from 1912 to 1943. The northern portion of Libya was incorporated directly into Italy in 1939; however, the region remained united as a colony under a colonial governor.
There was also a minor Italian concession territory in Tientsin, Republic of China. The Japanese government justified its actions by claiming that it was seeking to unite East Asia under Japanese leadership in a Greater East Asia Co-Prosperity Sphere that would free East Asians from domination and rule by clients of Western powers, particularly the United States. Japan invoked themes of Pan-Asianism and said that the Asian people needed to be free from Western influence. The United States opposed the Japanese war in China and recognized Chiang Kai-shek's Nationalist Government as the legitimate government of China. As a result, the United States sought to bring the Japanese war effort to a halt by imposing an embargo on all trade between the United States and Japan. Japan was dependent on the United States for 80 percent of its petroleum, and as a consequence the embargo created an economic and military crisis for Japan, which could not continue its war effort against China without access to petroleum. To maintain its military campaign in China despite the loss of petroleum trade with the United States, Japan saw the best means of securing an alternative source of petroleum in petroleum- and resource-rich Southeast Asia. The American government understood this threat of Japanese retaliation against a total trade embargo; American Secretary of State Cordell Hull, who was negotiating with the Japanese to avoid a war, feared that a total embargo would precipitate a Japanese attack on the Dutch East Indies. Japan identified the American Pacific fleet based in Pearl Harbor as the principal threat to its designs to invade and capture Southeast Asia.
Thus Japan initiated the attack on Pearl Harbor on 7 December 1941 as a means to inhibit an American response to the invasion of Southeast Asia, buy time to consolidate its hold on those resources for a total war against the United States, and force the United States to accept Japan's acquisitions. On 7 December 1941 Japan declared war on the United States and the British Empire. The Empire of Japan, a constitutional monarchy ruled by Hirohito, was the principal Axis power in Asia and the Pacific. Under the emperor were a political cabinet and the Imperial General Headquarters, with two chiefs of staff. By 1945 the Emperor of Japan was more than a symbolic leader; he played a major role in devising a strategy to keep himself on the throne. At its peak, Japan's Greater East Asia Co-Prosperity Sphere included Manchuria, Inner Mongolia, large parts of China, Malaya, French Indochina, the Dutch East Indies, the Philippines, Burma, a small part of India, and various islands in the central Pacific. As a result of internal discord and the economic downturn of the 1920s, militaristic elements set Japan on a path of expansionism. As the Japanese home islands lacked the natural resources needed for growth, Japan planned to establish hegemony in Asia and become self-sufficient by acquiring territories with abundant natural resources. Japan's expansionist policies alienated it from other countries in the League of Nations and by the mid-1930s brought it closer to Germany and Italy, who had both pursued similar expansionist policies. Cooperation between Japan and Germany began with the Anti-Comintern Pact, in which the two countries agreed to ally to challenge any attack by the Soviet Union. Japan entered into conflict against the Chinese in 1937. The Japanese invasion and occupation of parts of China resulted in numerous atrocities against civilians, such as the Nanking massacre and the Three Alls Policy.
The Japanese also fought skirmishes with Soviet–Mongolian forces in Manchukuo in 1938 and 1939. Japan sought to avoid war with the Soviet Union by signing a non-aggression pact with it in 1941. Japan's military leaders were divided over diplomatic relationships with Germany and Italy and the attitude towards the United States. The Imperial Japanese Army was in favour of war with the United States, but the Imperial Japanese Navy was generally strongly opposed. When Prime Minister of Japan General Hideki Tojo refused American demands that Japan withdraw its military forces from China, a confrontation became more likely. War with the United States was being discussed within the Japanese government by 1940. Commander of the Combined Fleet Admiral Isoroku Yamamoto was outspoken in his opposition, especially after the signing of the Tripartite Pact, saying on 14 October 1940: "To fight the United States is like fighting the whole world. But it has been decided. So I will fight the best I can. Doubtless I shall die on board Nagato (his flagship). Meanwhile Tokyo will be burnt to the ground three times. Konoe and others will be torn to pieces by the revengeful people, I (shouldn't) wonder." In October and November 1940, Yamamoto communicated with Navy Minister Oikawa, and stated, "Unlike the pre-Tripartite days, great determination is required to make certain that we avoid the danger of going to war." With the European powers focused on the war in Europe, Japan sought to acquire their colonies. In 1940 Japan responded to the German invasion of France by occupying French Indochina. The Vichy France regime, a de facto ally of Germany, accepted the takeover. The Allied forces did not respond with war. However, the United States instituted an embargo against Japan in 1941 because of the continuing war in China. This cut off Japan's supply of scrap metal and oil needed for industry, trade, and the war effort.
To isolate the US forces stationed in the Philippines and to reduce US naval power, the Imperial General Headquarters ordered an attack on the US naval base at Pearl Harbor, Hawaii, on 7 December 1941. They also invaded Malaya and Hong Kong. Initially achieving a series of victories, by 1943 the Japanese forces were being driven back towards the home islands. The Pacific War lasted until the atomic bombings of Hiroshima and Nagasaki in 1945. The Soviets formally declared war in August 1945 and engaged Japanese forces in Manchuria and northeast China. Taiwan, then known as Formosa, was a Japanese dependency established in 1895. Korea was a Japanese protectorate and dependency formally established by the Japan–Korea Treaty of 1910. The South Pacific Mandate comprised territories granted to Japan in 1919 in the peace agreements of World War I, which assigned to Japan the former German South Pacific islands. Japan received these as a reward from the Allies of World War I, having been allied against Germany. Japan occupied the Dutch East Indies during the war. Japan planned to transform these territories into a client state of Indonesia and sought alliance with Indonesian nationalists including future Indonesian President Sukarno; however, these efforts did not deliver the creation of an Indonesian state until after Japan's surrender. In addition to the three major Axis powers, four more countries and two puppet regimes signed the Tripartite Pact as member states. Of the four countries, Romania, Hungary and Bulgaria participated in various Axis military operations with their national armed forces, while the fourth, Yugoslavia, saw its pro-Nazi government overthrown in a coup mere days after it signed the Pact, and its membership was reversed. The two puppet regimes that signed the Tripartite Pact, Tiso-led Slovakia and the Independent State of Croatia, are listed in the client states section below.
The Kingdom of Bulgaria was ruled by Tsar Boris III when it signed the Tripartite Pact on 1 March 1941. Bulgaria had been on the losing side in the First World War and sought the return of lost ethnically and historically Bulgarian territories, specifically in Macedonia and Thrace (within the Kingdom of Yugoslavia, the Kingdom of Greece and Turkey). During the 1930s, because of traditional right-wing elements, Bulgaria drew closer to Nazi Germany. In 1940 Germany pressured Romania to sign the Treaty of Craiova, returning to Bulgaria the region of Southern Dobruja, which it had lost in 1913. The Germans also promised Bulgaria -- if it joined the Axis -- an enlargement of its territory to the borders specified in the Treaty of San Stefano. Bulgaria participated in the Axis invasion of Yugoslavia and Greece by letting German troops attack from its territory, and sent troops to Greece on 20 April. As a reward, the Axis powers allowed Bulgaria to occupy parts of both countries: southern and south-eastern Yugoslavia (Vardar Banovina) and north-eastern Greece (parts of Greek Macedonia and Greek Thrace). The Bulgarian forces in these areas spent the following years fighting various nationalist groups and resistance movements. Despite German pressure, Bulgaria did not take part in the Axis invasion of the Soviet Union and never declared war on the Soviet Union. The Bulgarian Navy was nonetheless involved in a number of skirmishes with the Soviet Black Sea Fleet, which attacked Bulgarian shipping. Following the Japanese attack on Pearl Harbor in December 1941, the Bulgarian government declared war on the Western Allies. This action remained largely symbolic (at least from the Bulgarian perspective) until August 1943, when Bulgarian air defence and the air force attacked Allied bombers returning, heavily damaged, from a mission over the Romanian oil refineries.
This turned into a disaster for the citizens of Sofia and other major Bulgarian cities, which were heavily bombed by the Allies in the winter of 1943–1944. On 2 September 1944, as the Red Army approached the Bulgarian border, a new Bulgarian government came to power and sought peace with the Allies, expelled the few remaining German troops, and declared neutrality. These measures, however, did not prevent the Soviet Union from declaring war on Bulgaria on 5 September, and on 8 September the Red Army marched into the country, meeting no resistance. This was followed by the coup d'état of 9 September 1944, which brought a government of the pro-Soviet Fatherland Front to power. After this, the Bulgarian army (as part of the Red Army's 3rd Ukrainian Front) fought the Germans in Yugoslavia and Hungary, sustaining numerous casualties. Despite this, the Paris Peace Treaty treated Bulgaria as one of the defeated countries. Bulgaria was allowed to keep Southern Dobruja, but had to give up all claims to Greek and Yugoslav territory. Hungary, ruled by Regent Admiral Miklós Horthy, was the first country apart from Germany, Italy, and Japan to adhere to the Tripartite Pact, signing the agreement on 20 November 1940. Political instability had plagued the country until Miklós Horthy, a Hungarian nobleman and Austro-Hungarian naval officer, became regent in 1920. Hungarian nationalists desired to recover territories lost through the Trianon Treaty. The country drew closer to Germany and Italy largely because of a shared desire to revise the peace settlements made after World War I. Many people sympathized with the anti-Semitic policy of the Nazi regime. Due to its pro-German stance, Hungary received favourable territorial settlements when Germany annexed Czechoslovakia in 1938–1939 and received Northern Transylvania from Romania via the Vienna Awards of 1940.
Hungary permitted German troops to transit through its territory during the invasion of Yugoslavia, and Hungarian forces took part in the invasion. Parts of Yugoslavia were annexed to Hungary; the United Kingdom immediately broke off diplomatic relations in response. Although Hungary did not initially participate in the German invasion of the Soviet Union, it declared war on the Soviet Union on 27 June 1941. Over 500,000 Hungarian soldiers served on the Eastern Front. All five of Hungary's field armies ultimately participated in the war against the Soviet Union; a significant contribution was made by the Hungarian Second Army. On 25 November 1941, Hungary was one of thirteen signatories to the revived Anti-Comintern Pact. Hungarian troops, like their Axis counterparts, were involved in numerous actions against the Soviets. By the end of 1943, the Soviets had gained the upper hand and the Germans were retreating. The Hungarian Second Army was destroyed in fighting on the Voronezh Front, on the banks of the Don River. In 1944, with Soviet troops advancing toward Hungary, Horthy attempted to reach an armistice with the Allies. However, the Germans replaced the existing regime with a new one. After fierce fighting, Budapest was taken by the Soviets. A number of pro-German Hungarians retreated to Italy and Germany, where they fought until the end of the war. Relations between Germany and the regency of Miklós Horthy collapsed in 1944. Horthy was forced to abdicate after German armed forces held his son hostage as part of Operation Panzerfaust. Following Horthy's abdication in October 1944, Hungary was reorganized into a totalitarian fascist regime called the Government of National Unity, led by Ferenc Szálasi, leader of the anti-Semitic fascist Arrow Cross Party, who served as Prime Minister of Hungary from October 1944. In power, his government was a puppet regime with little authority, and the country was effectively under German control.
Days after the Szálasi government took power, the capital of Budapest was surrounded by the Soviet Red Army. German and Hungarian fascist forces tried to hold off the Soviet advance but failed. In March 1945, Szálasi fled to Germany as the leader of a government in exile, until the surrender of Germany in May 1945. When war erupted in Europe in 1939, the Kingdom of Romania was pro-British and allied to the Poles. Following the invasion of Poland by Germany and the Soviet Union, and the German conquest of France and the Low Countries, Romania found itself increasingly isolated; meanwhile, pro-German and pro-Fascist elements began to grow. The August 1939 Molotov–Ribbentrop Pact between Germany and the Soviet Union contained a secret protocol ceding Bessarabia and Northern Bukovina to the Soviet Union. On 28 June 1940, the Soviet Union occupied and annexed Bessarabia, as well as Northern Bukovina and the Hertza region. On 30 August 1940, Germany forced Romania to cede Northern Transylvania to Hungary as a result of the Second Vienna Award. Southern Dobruja was ceded to Bulgaria in September 1940. In an effort to appease the Fascist elements within the country and obtain German protection, King Carol II appointed General Ion Antonescu as Prime Minister on 6 September 1940. Two days later, Antonescu forced the king to abdicate, installed the king's young son Michael (Mihai) on the throne, and declared himself Conducător ("Leader") with dictatorial powers. The National Legionary State was proclaimed on 14 September, with the Iron Guard ruling together with Antonescu as the sole legal political movement in Romania. Under King Michael I and the military government of Antonescu, Romania signed the Tripartite Pact on 23 November 1940. German troops entered the country on 10 October 1940, officially to train the Romanian Army.
Hitler's directive to the troops on 10 October had stated that "it is necessary to avoid even the slightest semblance of military occupation of Romania". The entry of German troops into Romania prompted Italian dictator Benito Mussolini to launch an invasion of Greece, starting the Greco-Italian War. Having secured Hitler's approval in January 1941, Antonescu ousted the Iron Guard from power. Romania was subsequently used as a platform for the invasions of Yugoslavia and the Soviet Union. Despite not being involved militarily in the invasion of Yugoslavia, Romania requested that Hungarian troops not operate in the Banat. Paulus therefore modified the Hungarian plan and kept their troops west of the Tisza. Romania's military industry was small but versatile, able to copy and produce thousands of French and Soviet mortars, hundreds of German 37 mm anti-aircraft guns, 200 British Vickers Model 1931 75 mm anti-aircraft guns, hundreds of French 47 mm anti-tank guns, thousands of Czechoslovak machine guns and 126 French Renault UE armoured tractors. Original products included the Orița M1941 submachine gun; the 75 mm Reșița Model 1943 anti-tank gun, with a muzzle velocity of over 1 km per second, of which up to 400 were made; and about a hundred tank destroyers, the most notable being the Mareșal tank destroyer, which is credited with being the inspiration for the German Hetzer. Romania also built sizeable warships, such as the minelayer Amiral Murgescu and the submarines Rechinul and Marsuinul. Hundreds of originally designed aircraft were also produced, such as the IAR-80 fighter and the IAR-37 light bomber. Romania had also been a major power in the oil industry since the 1800s. It was one of the largest producers in Europe, and the Ploiești oil refineries provided about 30% of all Axis oil production. Romania joined the German-led invasion of the Soviet Union on 22 June 1941.
Antonescu was the only foreign leader Hitler consulted on military matters, and the two met no fewer than ten times throughout the war. Romania re-captured Bessarabia and Northern Bukovina during Operation München before conquering further Soviet territory and establishing the Transnistria Governorate. After the Siege of Odessa, the city became the capital of the Governorate. Romanian troops fought their way into the Crimea alongside German troops and contributed significantly to the Siege of Sevastopol. Later, Romanian mountain troops joined the German campaign in the Caucasus, reaching as far as Nalchik. After suffering devastating losses at Stalingrad, Romanian officials began secretly negotiating peace conditions with the Allies. By 1943, the tide had begun to turn. The Soviets pushed further west, retaking Ukraine and eventually launching an unsuccessful invasion of eastern Romania in the spring of 1944. Romanian troops in the Crimea helped repulse the initial Soviet landings, but eventually all of the peninsula was re-conquered by Soviet forces, and the Romanian Navy evacuated over 100,000 German and Romanian troops, an achievement which earned Romanian Admiral Horia Macellariu the Knight's Cross of the Iron Cross. During the Jassy–Kishinev Offensive, Romania switched sides on 23 August 1944. Romanian troops then fought alongside the Soviet Army until the end of the war, reaching as far as Czechoslovakia and Austria. Yugoslavia was largely surrounded by members of the pact and now bordered the German Reich. From late 1940 Hitler sought a non-aggression pact with Yugoslavia. In February 1941, Hitler called for Yugoslavia's accession to the Tripartite Pact, but the Yugoslav government delayed. In March, divisions of the German army arrived at the Bulgarian–Yugoslav border and permission was sought for them to pass through to attack Greece.
On 25 March 1941, fearing that Yugoslavia would otherwise be invaded, the Yugoslav government signed the Tripartite Pact with significant reservations. Unlike the other Axis powers, Yugoslavia was not obliged to provide military assistance, nor to provide its territory for the Axis to move military forces through during the war. Less than two days later, after demonstrations in the streets of Belgrade, Prince Paul and the government were removed from office by a coup d'état. Seventeen-year-old King Peter was declared to be of age. The new Yugoslav government under General Dušan Simović refused to ratify Yugoslavia's signing of the Tripartite Pact and started negotiations with Great Britain and the Soviet Union. Winston Churchill commented that "Yugoslavia has found its soul"; however, Hitler invaded and quickly took control. Various countries fought side by side with the Axis powers for a common cause. These countries were not signatories of the Tripartite Pact and thus not formal members of the Axis. Although Finland never signed the Tripartite Pact and legally (de jure) was not part of the Axis, it was Axis-aligned in its fight against the Soviet Union. Finland signed the revived Anti-Comintern Pact of November 1941. The August 1939 Molotov–Ribbentrop Pact between Germany and the Soviet Union contained a secret protocol dividing much of eastern Europe and assigning Finland to the Soviet sphere of influence. After unsuccessfully attempting to force territorial and other concessions on the Finns, the Soviet Union invaded Finland in November 1939, starting the Winter War, intending to establish a communist puppet government in Finland. The conflict threatened Germany's iron-ore supplies and offered the prospect of Allied interference in the region.
Despite Finnish resistance, a peace treaty was signed in March 1940, wherein Finland ceded some key territory to the Soviet Union, including the Karelian Isthmus, containing Finland's second-largest city, Viipuri, and the critical defensive structure of the Mannerheim Line. After this war, Finland sought protection and support from the United Kingdom and non-aligned Sweden, but was thwarted by Soviet and German actions. This resulted in Finland being drawn closer to Germany, first with the intent of enlisting German support as a counterweight to continuing Soviet pressure, and later to help regain lost territories. In the opening days of Operation Barbarossa, Germany's invasion of the Soviet Union, Finland permitted German planes returning from mine-dropping runs over Kronstadt and the Neva River to refuel at Finnish airfields before returning to bases in East Prussia. In retaliation, the Soviet Union launched a major air offensive against Finnish airfields and towns, which resulted in a Finnish declaration of war against the Soviet Union on 25 June 1941. The Finnish conflict with the Soviet Union is generally referred to as the Continuation War. Finland's main objective was to regain territory lost to the Soviet Union in the Winter War. However, on 10 July 1941, Field Marshal Carl Gustaf Emil Mannerheim issued an Order of the Day that contained a formulation understood internationally as a Finnish territorial interest in Russian Karelia. Diplomatic relations between the United Kingdom and Finland were severed on 1 August 1941, after the British bombed German forces in the Finnish village and port of Petsamo. The United Kingdom repeatedly called on Finland to cease its offensive against the Soviet Union, and declared war on Finland on 6 December 1941, although no other military operations followed.
War was never declared between Finland and the United States, though relations were severed between the two countries in 1944 as a result of the Ryti–Ribbentrop Agreement. Finland maintained command of its armed forces and pursued its war objectives independently of Germany. Germans and Finns did work closely together during Operation Silver Fox, a joint offensive against Murmansk. Finland refused German requests to participate actively in the Siege of Leningrad, and also granted asylum to Jews, while Jewish soldiers continued to serve in its army. The relationship between Finland and Germany more closely resembled an alliance during the six weeks of the Ryti–Ribbentrop Agreement, which was presented as a German condition for help with munitions and air support, as the Soviet offensive coordinated with D-Day threatened Finland with complete occupation. The agreement, signed by President Risto Ryti but never ratified by the Finnish Parliament, bound Finland not to seek a separate peace. After the Soviet offensives were fought to a standstill, Ryti's successor as president, Marshal Mannerheim, dismissed the agreement and opened secret negotiations with the Soviets, which resulted in a ceasefire on 4 September and the Moscow Armistice on 19 September 1944. Under the terms of the armistice, Finland was obliged to expel German troops from Finnish territory, which resulted in the Lapland War. Finland signed a peace treaty with the Allied powers in 1947. The Free City of Danzig, a semi-autonomous city-state under League of Nations protection, briefly aided the Nazis at the beginning of the invasion of Poland, attacking Polish territories bordering the city. The Free City of Danzig Police and militia fought alongside German soldiers during the Battle of Westerplatte and the attack on the Polish post office in Danzig. After the end of the Polish campaign, Danzig was annexed into Germany.
The Kingdom of Iraq was briefly an ally of the Axis, fighting the United Kingdom in the Anglo-Iraqi War of May 1941. Anti-British sentiment was widespread in Iraq prior to 1941. Seizing power on 1 April 1941, the nationalist government of Prime Minister Rashid Ali repudiated the Anglo-Iraqi Treaty of 1930 and demanded that the British abandon their military bases and withdraw from the country. Ali sought support from Germany and Italy in expelling British forces from Iraq. On 9 May 1941, Mohammad Amin al-Husayni, the Mufti of Jerusalem and an associate of Ali, declared holy war against the British and called on Arabs throughout the Middle East to rise up against British rule. On 25 May 1941, the Germans stepped up offensive operations in the Middle East. Hitler issued Order 30: "The Arab Freedom Movement in the Middle East is our natural ally against England. In this connection special importance is attached to the liberation of Iraq... I have therefore decided to move forward in the Middle East by supporting Iraq." Hostilities between the Iraqi and British forces had begun on 2 May 1941, with heavy fighting at the RAF air base in Habbaniyah. The Germans and Italians dispatched aircraft and aircrew to Iraq utilizing Vichy French bases in Syria, which would later provoke fighting between Allied and Vichy French forces in Syria. The Germans planned to coordinate a combined German-Italian offensive against the British in Egypt, Palestine, and Iraq. Iraqi military resistance ended by 31 May 1941. Rashid Ali and the Mufti of Jerusalem fled to Iran, then Turkey, Italy, and finally Germany, where Ali was welcomed by Hitler as head of the Iraqi government-in-exile in Berlin. In propaganda broadcasts from Berlin, the Mufti continued to call on Arabs to rise up against the British and aid German and Italian forces. He also helped recruit Muslim volunteers in the Balkans for the Waffen-SS.
Thailand waged the Franco-Thai War from October 1940 to May 1941 to reclaim territory from French Indochina. It became a formal ally of Japan on 25 January 1942. Japanese forces invaded Thailand's territory an hour and a half before the attack on Pearl Harbor (because of the International Date Line, the local time was the morning of 8 December 1941). Only hours after the invasion, Prime Minister Field Marshal Phibunsongkhram ordered the cessation of resistance against the Japanese. On 21 December 1941, a military alliance with Japan was signed, and on 25 January 1942, Sang Phathanothai read over the radio Thailand's formal declaration of war on the United Kingdom and the United States. The Thai ambassador to the United States, Mom Rajawongse Seni Pramoj, did not deliver his copy of the declaration of war. Therefore, although the British reciprocated by declaring war on Thailand and considered it a hostile country, the United States did not. The Thais and Japanese agreed that Shan State and Kayah State were to be under Thai control; the rest of Burma was to be under Japanese control. On 10 May 1942, the Thai Phayap Army entered Burma's eastern Shan State, which had been claimed by Siamese kingdoms. Three Thai infantry divisions and one cavalry division, spearheaded by armoured reconnaissance groups and supported by the air force, engaged the retreating Chinese 93rd Division. Kengtung, the main objective, was captured on 27 May. Renewed offensives in June and November saw the Chinese retreat into Yunnan. The area containing the Shan States and Kayah State was annexed by Thailand in 1942; the areas were ceded back to Burma in 1945. The Free Thai Movement ("Seri Thai") was established during these first few months. Parallel Free Thai organizations were also established in the United Kingdom. Queen Rambai Barni was the nominal head of the British-based organization, and Pridi Banomyong, the regent, headed its largest contingent, which was operating within Thailand.
Aided by elements of the military, secret airfields and training camps were established, while Office of Strategic Services and Force 136 agents slipped in and out of the country. As the war dragged on, the Thai population came to resent the Japanese presence. In June 1944, Phibun was overthrown in a coup d'état. The new civilian government under Khuang Aphaiwong attempted to aid the resistance while maintaining cordial relations with the Japanese. After the war, U.S. influence prevented Thailand from being treated as an Axis country, but the British demanded three million tons of rice as reparations and the return of areas annexed from Malaya during the war. Thailand also returned the portions of British Burma and French Indochina that had been annexed. Phibun and a number of his associates were put on trial on charges of having committed war crimes and of collaborating with the Axis powers. However, the charges were dropped due to intense public pressure. Public opinion was favourable to Phibun, as he was thought to have done his best to protect Thai interests. The collaborationist administrations of German - occupied countries in Europe had varying degrees of autonomy, and not all of them qualified as fully recognized sovereign states. The General Government in occupied Poland was a German administration, not a Polish government. In occupied Norway, the National Government headed by Vidkun Quisling -- whose name came to symbolize pro-Axis collaboration in several languages -- was subordinate to the Reichskommissariat Norwegen. It was never allowed to have any armed forces, be a recognized military partner, or have autonomy of any kind. In the occupied Netherlands, Anton Mussert was given the symbolic title of "Führer of the Netherlands ' people ''. His National Socialist Movement formed a cabinet assisting the German administration, but was never recognized as a real Dutch government. 
The following list of German client states includes only those entities that were officially considered to be independent countries allied with Germany. They were under varying degrees of German influence and control, but were not ruled directly by Germans. After the Italian armistice, a vacuum of power opened up in Albania. The Italian occupying forces were rendered largely powerless, as the National Liberation Movement took control of the south and the National Front (Balli Kombëtar) took control of the north. Albanians in the Italian army joined the guerrilla forces. In September 1943 the guerrillas moved to take the capital of Tirana, but German paratroopers dropped into the city. Soon after the battle, the German High Command announced that they would recognize the independence of a greater Albania. They organized an Albanian government, police, and military in collaboration with the Balli Kombëtar. The Germans did not exert heavy control over Albania 's administration, but instead attempted to gain popular appeal by giving their political partners what they wanted. Several Balli Kombëtar leaders held positions in the regime. The joint forces incorporated Kosovo, western Macedonia, southern Montenegro, and Presevo into the Albanian state. A High Council of Regency was created to carry out the functions of a head of state, while the government was headed mainly by Albanian conservative politicians. Albania was the only European country occupied by the Axis powers that ended World War II with a larger Jewish population than before the war. The Albanian government had refused to hand over their Jewish population. They provided Jewish families with forged documents and helped them disperse in the Albanian population. Albania was completely liberated on November 29, 1944. 
The Government of National Salvation, also referred to as the Nedić regime, was the second Serbian puppet government, after the Commissioner Government, established on the Territory of the (German) Military Commander in Serbia during World War II. It was appointed by the German Military Commander in Serbia and operated from 29 August 1941 to October 1944. The Serbian puppet state enjoyed significant support. The Prime Minister throughout was General Milan Nedić. The Government of National Salvation was evacuated from Belgrade to Kitzbühel, Germany, in the first week of October 1944, before the German withdrawal from Serbia was complete. Racial laws were introduced in all occupied territories with immediate effects on Jews and Roma people, as well as causing the imprisonment of those opposed to Nazism. Several concentration camps were formed in Serbia, and at the 1942 Anti-Freemason Exhibition in Belgrade the city was pronounced to be free of Jews (Judenfrei). On 1 April 1942, a Serbian Gestapo was formed. An estimated 120,000 people were interned in German-run concentration camps in Nedić's Serbia between 1941 and 1944, and 50,000 to 80,000 were killed during this period. Serbia became the second country in Europe, following Estonia, to be proclaimed Judenfrei (free of Jews). Approximately 14,500 Serbian Jews -- 90 percent of Serbia's Jewish population of 16,000 -- were murdered in World War II. Collaborationist armed formations were involved, either directly or indirectly, in the mass killings of Jews, Roma, and those Serbs who sided with any anti-German resistance or were suspected of belonging to resistance groups. These forces were also responsible for the killings of many Croats and Muslims; however, some Croats who took refuge in Nedić's Serbia were not discriminated against. After the war, the Serbian involvement in many of these events and the issue of Serbian collaboration were subject to historical revisionism by later public figures.
Nedić himself was captured by the Americans when they occupied the former territory of Austria, and was subsequently handed over to the Yugoslav communist authorities to act as a witness against war criminals, on the understanding he would be returned to American custody to face trial by the Allies. The Yugoslav authorities refused to return Nedić to United States custody. He died on 4 February 1946 after either jumping or falling out of the window of a Belgrade hospital, under circumstances which remain unclear. Italian Fascist leader Benito Mussolini formed the Italian Social Republic (Repubblica Sociale Italiana in Italian) on 23 September 1943, succeeding the Kingdom of Italy as a member of the Axis. Mussolini had been removed from office and arrested by King Victor Emmanuel III on 25 July 1943. After the Italian armistice, in a raid led by German paratrooper Otto Skorzeny, Mussolini was rescued from arrest. Once restored to power, Mussolini declared that Italy was a republic and that he was the new head of state. He was subject to German control for the duration of the war. The Slovak Republic under President Josef Tiso signed the Tripartite Pact on 24 November 1940. Slovakia had been closely aligned with Germany almost immediately from its declaration of independence from Czechoslovakia on 14 March 1939. Slovakia entered into a treaty of protection with Germany on 23 March 1939. Slovak troops joined the German invasion of Poland, having an interest in Spiš and Orava. Those two regions, along with Cieszyn Silesia, had been disputed between Poland and Czechoslovakia since 1918. The Poles fully annexed them following the Munich Agreement. After the invasion of Poland, Slovakia reclaimed control of those territories. Slovakia invaded Poland alongside German forces, contributing 50,000 men at this stage of the war. Slovakia declared war on the Soviet Union in 1941 and signed the revived Anti-Comintern Pact in 1941.
Slovak troops fought on Germany 's Eastern Front, furnishing Germany with two divisions totaling 80,000 men. Slovakia declared war on the United Kingdom and the United States in 1942. Slovakia was spared German military occupation until the Slovak National Uprising, which began on 29 August 1944, and was almost immediately crushed by the Waffen SS and Slovak troops loyal to Josef Tiso. After the war, Tiso was executed and Slovakia once again became part of Czechoslovakia. The border with Poland was shifted back to the pre-war state. Slovakia and the Czech Republic finally separated into independent states in 1993. Italy occupied several nations and set up clients in those regions to carry out administrative tasks and maintain order. The Principality of Monaco was officially neutral during the war. The population of the country was largely of Italian descent and sympathized with Italy. Its prince was a close friend of the Vichy French leader, Marshal Philippe Pétain, an Axis collaborator. A fascist regime was established under the nominal rule of the prince when the Italian Fourth Army occupied the country on November 10, 1942 as a part of Case Anton. Monaco 's military forces, consisting primarily of police and palace guards, collaborated with the Italians during the occupation. German troops occupied Monaco in 1943, and Monaco was liberated by Allied forces in 1944. On 10 April 1941, the Independent State of Croatia (Nezavisna Država Hrvatska, or NDH) declared itself a member of the Axis, co-signing the Tripartite Pact. The NDH remained a member of the Axis until the end of Second World War, its forces fighting for Germany even after its territory had been overrun by Yugoslav Partisans. On 16 April 1941, Ante Pavelić, a Croatian nationalist and one of the founders of the Ustaše ("Croatian Liberation Movement ''), was proclaimed Poglavnik (leader) of the new regime. Initially the Ustaše had been heavily influenced by Italy. 
They were actively supported by Mussolini 's Fascist regime in Italy, which gave the movement training grounds to prepare for war against Yugoslavia, as well as accepting Pavelić as an exile and allowing him to reside in Rome. Italy intended to use the movement to destroy Yugoslavia, which would allow Italy to expand its power through the Adriatic. Hitler did not want to engage in a war in the Balkans until the Soviet Union was defeated. The Italian occupation of Greece was not going well; Mussolini wanted Germany to invade Yugoslavia to save the Italian forces in Greece. Hitler reluctantly agreed; Yugoslavia was invaded and the Independent State of Croatia was created. Pavelić led a delegation to Rome and offered the crown of Croatia to an Italian prince of the House of Savoy, who was crowned Tomislav II, King of Croatia, Prince of Bosnia and Herzegovina, Voivode of Dalmatia, Tuzla and Knin, Prince of Cisterna and of Belriguardo, Marquess of Voghera, and Count of Ponderano. The next day, Pavelić signed the Contracts of Rome with Mussolini, ceding Dalmatia to Italy and fixing the permanent borders between the NDH and Italy. Italian armed forces were allowed to control all of the coastline of the NDH, effectively giving Italy total control of the Adriatic coastline. However, strong German influence began to be asserted soon after the NDH was founded. When the King of Italy ousted Mussolini from power and Italy capitulated, the NDH became completely under German influence. The platform of the Ustaše movement proclaimed that Croatians had been oppressed by the Serb - dominated Kingdom of Yugoslavia, and that Croatians deserved to have an independent nation after years of domination by foreign empires. The Ustaše perceived Serbs to be racially inferior to Croats and saw them as infiltrators who were occupying Croatian lands. They saw the extermination of Serbs as necessary to racially purify Croatia. 
While Croatia was part of Yugoslavia, many Croatian nationalists violently opposed the Serb-dominated Yugoslav monarchy, and, together with the Internal Macedonian Revolutionary Organization, assassinated Alexander I of Yugoslavia. The regime enjoyed support amongst radical Croatian nationalists. Ustaše forces fought against communist Yugoslav Partisan guerrillas throughout the war. Upon coming to power, Pavelić formed the Croatian Home Guard (Hrvatsko domobranstvo) as the official military force of the NDH. Originally authorized at 16,000 men, it grew to a peak fighting force of 130,000. The Croatian Home Guard included an air force and navy, although its navy was restricted in size by the Contracts of Rome. In addition to the Croatian Home Guard, Pavelić was also the supreme commander of the Ustaše militia, although all NDH military units were generally under the command of the German or Italian formations in their area of operations. The Ustaše government declared war on the Soviet Union, signed the Anti-Comintern Pact of 1941, and sent troops to Germany's Eastern Front. Ustaše militia were garrisoned in the Balkans, battling the communist partisans. The Ustaše government applied racial laws to Serbs, Jews, and Romani people, as well as targeting those opposed to the fascist regime, and after June 1941 deported them to the Jasenovac concentration camp or to German camps in Poland. The racial laws were enforced by the Ustaše militia. The exact number of victims of the Ustaše regime is uncertain due to the destruction of documents and varying numbers given by historians. According to the United States Holocaust Memorial Museum in Washington, DC, between 320,000 and 340,000 Serbs were killed in the NDH. Following the German invasion of Greece and the flight of the Greek government to Crete and then Egypt, the Hellenic State was formed in May 1941 as a puppet state of both Italy and Germany.
Initially, Italy had wished to annex Greece, but was pressured by Germany to avoid civil unrest such as had occurred in Bulgarian - annexed areas. The result was Italy accepting the creation of a puppet regime with the support of Germany. Italy had been assured by Hitler of a primary role in Greece. Most of the country was held by Italian forces, but strategic locations (Central Macedonia, the islands of the northeastern Aegean, most of Crete, and parts of Attica) were held by the Germans, who seized most of the country 's economic assets and effectively controlled the collaborationist government. The puppet regime never commanded any real authority, and did not gain the allegiance of the people. It was somewhat successful in preventing secessionist movements like the Vlach "Roman Legion '' from establishing themselves. By mid-1943, the Greek Resistance had liberated large parts of the mountainous interior ("Free Greece ''), setting up a separate administration there. After the Italian armistice, the Italian occupation zone was taken over by the German armed forces, who remained in charge of the country until their withdrawal in autumn 1944. In some Aegean islands, German garrisons were left behind, and surrendered only after the end of the war. The Empire of Japan created a number of client states in the areas occupied by its military, beginning with the creation of Manchukuo in 1932. These puppet states achieved varying degrees of international recognition. The Japanese Army and Burma nationalists, led by Aung San, seized control of Burma from the United Kingdom during 1942. A State of Burma was formed on 1 August under the Burmese nationalist leader Ba Maw. The Ba Maw regime established the Burma Defence Army (later renamed the Burma National Army), which was commanded by Aung San. The Kingdom of Cambodia was a short - lived Japanese puppet state that lasted from 9 March 1945 to 15 August 1945. 
The Japanese entered Cambodia in mid-1941, but allowed Vichy French officials to remain in administrative posts. The Japanese calls for an "Asia for the Asiatics '' won over many Cambodian nationalists. This policy changed during the last months of the war. The Japanese wanted to gain local support, so they dissolved French colonial rule and pressured Cambodia to declare its independence within the Greater East Asia Co-Prosperity Sphere. Four days later, King Sihanouk declared Kampuchea (the original Khmer pronunciation of Cambodia) independent. Co-editor of the Nagaravatta, Son Ngoc Thanh, returned from Tokyo in May and was appointed foreign minister. On the date of Japanese surrender, a new government was proclaimed with Son Ngoc Thanh as prime minister. When the Allies occupied Phnom Penh in October, Son Ngoc Thanh was arrested for collaborating with the Japanese and was exiled to France. Some of his supporters went to northwestern Cambodia, which had been under Thai control since the French - Thai War of 1940, where they banded together as one faction in the Khmer Issarak movement, originally formed with Thai encouragement in the 1940s. During the Second Sino - Japanese War, Japan advanced from its bases in Manchuria to occupy much of East and Central China. Several Japanese puppet states were organized in areas occupied by the Japanese Army, including the Provisional Government of the Republic of China at Beijing, which was formed in 1937, and the Reformed Government of the Republic of China at Nanjing, which was formed in 1938. These governments were merged into the Reorganized National Government of China at Nanjing on 29 March 1940. Wang Jingwei became head of state. The government was to be run along the same lines as the Nationalist regime and adopted its symbols. The Nanjing Government had no real power; its main role was to act as a propaganda tool for the Japanese. 
The Nanjing Government concluded agreements with Japan and Manchukuo, authorising Japanese occupation of China and recognising the independence of Manchukuo under Japanese protection. The Nanjing Government signed the Anti-Comintern Pact of 1941 and declared war on the United States and the United Kingdom on 9 January 1943. The government had a strained relationship with the Japanese from the beginning. Wang's insistence on his regime being the true Nationalist government of China and on replicating all the symbols of the Kuomintang led to frequent conflicts with the Japanese, the most prominent being the issue of the regime's flag, which was identical to that of the Republic of China. The worsening situation for Japan from 1943 onwards meant that the Nanking Army was given a more substantial role in the defence of occupied China than the Japanese had initially envisaged. The army was almost continuously employed against the communist New Fourth Army. Wang Jingwei died on 10 November 1944, and was succeeded by his deputy, Chen Gongbo. Chen had little influence; the real power behind the regime was Zhou Fohai, the mayor of Shanghai. Wang's death dispelled what little legitimacy the regime had. The state stuttered on for another year while keeping up the outward display of a fascist regime. On 9 September 1945, following the defeat of Japan, the area was surrendered to General He Yingqin, a nationalist general loyal to Chiang Kai-shek. The Nanking Army generals quickly declared their allegiance to the Generalissimo, and were subsequently ordered to resist Communist attempts to fill the vacuum left by the Japanese surrender. Chen Gongbo was tried and executed in 1946. The Arzi Hukumat-e-Azad Hind, the Provisional Government of Free India, was a state that was recognized by nine Axis governments. It was led by Subhas Chandra Bose, an Indian nationalist who rejected Mohandas K. Gandhi's nonviolent methods for achieving independence.
The First Indian National Army (INA) faltered after its leadership objected to being a propaganda tool for Japanese war aims and to the role of I Kikan. It was revived by the Indian Independence League with Japanese support in 1942, after the ex-PoWs and Indian civilians in South-east Asia agreed to participate in the INA venture on the condition it was led by Subhas Chandra Bose. Bose declared India's independence on October 21, 1943. The Indian National Army was committed as a part of the U Go Offensive. It played a largely marginal role in the battle, suffered serious casualties, and had to withdraw with the rest of the Japanese forces after the siege of Imphal was broken. It was later committed to the defence of Burma against the Allied offensive, during which it suffered a large number of desertions. The remaining troops of the INA maintained order in Rangoon after the withdrawal of Ba Maw's government. The provisional government was given nominal control of the Andaman and Nicobar Islands from November 1943 to August 1945. Mengjiang was a Japanese puppet state in Inner Mongolia. It was nominally ruled by Prince Demchugdongrub, a Mongol nobleman descended from Genghis Khan, but was in fact controlled by the Japanese military. Mengjiang's independence was proclaimed on 18 February 1936, following the Japanese occupation of the region. The Inner Mongolians had several grievances against the central Chinese government in Nanking, including its policy of allowing unlimited migration of Han Chinese to the region. Several of the young princes of Inner Mongolia began to agitate for greater freedom from the central government, and it was through these men that the Japanese saw their best chance of exploiting Pan-Mongol nationalism and eventually seizing control of Outer Mongolia from the Soviet Union. Japan created Mengjiang to exploit tensions between ethnic Mongolians and the central government of China, which in theory ruled Inner Mongolia.
When the various puppet governments of China were unified under the Wang Jingwei government in March 1940, Mengjiang retained its separate identity as an autonomous federation. Although under the firm control of the Japanese Imperial Army, which occupied its territory, Prince Demchugdongrub had his own independent army. Mengjiang vanished in 1945 following Japan 's defeat in World War II. As Soviet forces advanced into Inner Mongolia, they met limited resistance from small detachments of Mongolian cavalry, which, like the rest of the army, were quickly overwhelmed. Fears of Thai irredentism led to the formation of the first Lao nationalist organization, the Movement for National Renovation, in January 1941. The group was led by Prince Phetxarāt and supported by local French officials, though not by the Vichy authorities in Hanoi. This group wrote the current Lao national anthem and designed the current Lao flag, while paradoxically pledging support for France. The country declared its independence in 1945. The liberation of France in 1944, bringing Charles de Gaulle to power, meant the end of the alliance between Japan and the Vichy French administration in Indochina. The Japanese had no intention of allowing the Gaullists to take over, and in March 1945 they staged a military coup in Hanoi. Some French units fled over the mountains to Laos, pursued by the Japanese, who occupied Viang Chan in March 1945 and Luang Phrabāng in April. King Sīsavāngvong was detained by the Japanese, but his son Crown Prince Savāngvatthanā called on all Lao to assist the French, and many Lao died fighting against the Japanese occupiers. Prince Phetxarāt opposed this position. He thought that Lao independence could be gained by siding with the Japanese, who made him Prime Minister of Luang Phrabāng, though not of Laos as a whole. The country was in chaos, and Phetxarāt 's government had no real authority. 
Another Lao group, the Lao Sēri (Free Lao), received unofficial support from the Free Thai movement in the Isan region. Manchukuo, in the northeast region of China, had been a Japanese puppet state in Manchuria since the 1930s. It was nominally ruled by Puyi, the last emperor of the Qing Dynasty, but was in fact controlled by the Japanese military, in particular the Kwantung Army. While Manchukuo ostensibly was a state for ethnic Manchus, the region had a Han Chinese majority. Following the Japanese invasion of Manchuria in 1931, the independence of Manchukuo was proclaimed on 18 February 1932, with Puyi as head of state. He was proclaimed the Emperor of Manchukuo a year later. The new Manchu nation was recognized by 23 of the League of Nations ' 80 members. Germany, Italy, and the Soviet Union were among the major powers who recognised Manchukuo. Other countries who recognized the State were the Dominican Republic, Costa Rica, El Salvador, and Vatican City. Manchukuo was also recognised by the other Japanese allies and puppet states, including Mengjiang, the Burmese government of Ba Maw, Thailand, the Wang Jingwei regime, and the Indian government of Subhas Chandra Bose. The League of Nations later declared in 1934 that Manchuria lawfully remained a part of China. This precipitated Japanese withdrawal from the League. The Manchukuoan state ceased to exist after the Soviet invasion of Manchuria in 1945. After the surrender of the Filipino and American forces in Bataan Peninsula and Corregidor Island, the Japanese established a puppet state in the Philippines in 1942. The following year, the Philippine National Assembly declared the Philippines an independent Republic and elected José Laurel as its President. There was never widespread civilian support for the state, largely because of the general anti-Japanese sentiment stemming from atrocities committed by the Imperial Japanese Army. 
The Second Philippine Republic ended with the Japanese surrender in 1945, and Laurel was arrested and charged with treason by the US government. He was granted amnesty by President Manuel Roxas, and remained active in politics, ultimately winning a seat in the post-war Senate. The Empire of Vietnam was a short-lived Japanese puppet state that lasted from 11 March to 23 August 1945. When the Japanese seized control of French Indochina, they allowed Vichy French administrators to remain in nominal control. This French rule ended on 9 March 1945, when the Japanese officially took control of the government. Soon after, Emperor Bảo Đại voided the 1884 treaty with France, and Trần Trọng Kim, a historian, became prime minister. The state suffered through the Vietnamese Famine of 1945 and replaced French-speaking schools with Vietnamese-language schools taught by Vietnamese scholars. States listed in this section were not officially members of the Axis, but at some point during the war engaged in cooperation with one or more Axis members on a level that makes their neutrality disputable. Denmark was occupied by Germany after April 1940 but never joined the Axis. On 31 May 1939, Denmark and Germany signed a treaty of non-aggression, which did not contain any military obligations for either party. On 9 April 1940, Germany attacked Scandinavia, and the speed of the German invasion of Denmark prevented King Christian X and the Danish government from going into exile. They had to accept "protection by the Reich" and the stationing of German forces in exchange for nominal independence. Denmark coordinated its foreign policy with Germany, extending diplomatic recognition to Axis collaborator and puppet regimes, and breaking diplomatic relations with the Allied governments-in-exile. Denmark broke diplomatic relations with the Soviet Union and signed the Anti-Comintern Pact in 1941.
However, the United States and Britain ignored the protectorate government and worked with Denmark's ambassadors when it came to dealings about using Iceland, Greenland, and the Danish merchant fleet against Germany. In 1941 Danish Nazis set up the Frikorps Danmark. Thousands of volunteers fought, and many died, as part of the German Army on the Eastern Front. Denmark sold agricultural and industrial products to Germany and made loans for armaments and fortifications. Denmark also paid for the German presence, including the construction of the Danish part of the Atlantic Wall fortifications, and was never reimbursed. The Danish protectorate government lasted until 29 August 1943, when the cabinet resigned after the regularly scheduled and largely free election concluding the Folketing's current term. The Germans imposed martial law, and Danish collaboration continued on an administrative level, with the Danish bureaucracy functioning under German command. The Danish navy scuttled 32 of its larger ships; Germany seized 64 ships and later raised and refitted 15 of the sunken vessels. Thirteen warships escaped to Sweden and formed a Danish naval flotilla in exile. Sweden allowed the formation of a Danish military brigade in exile; it did not see combat. The resistance movement was active in sabotage and in issuing underground newspapers and blacklists of collaborators. Relations between the Soviet Union and the major Axis powers were generally hostile before 1938. In the Spanish Civil War, the Soviet Union gave military aid to the Second Spanish Republic, against Spanish Nationalist forces, which were assisted by Germany and Italy. However, the Nationalist forces were victorious. The Soviets suffered another political defeat when their ally Czechoslovakia was partitioned and taken over by Germany in 1938 -- 39. In 1938 and 1939, the USSR fought and defeated Japan in two separate border conflicts, at Lake Khasan and Khalkhin Gol.
The latter was a major Soviet victory that led the Japanese Army to avoid war with the Soviets and instead to call for expansion southward. In 1939 the Soviet Union considered forming an alliance with either Britain and France or with Germany. When negotiations with Britain and France failed, it turned to Germany and signed the Molotov -- Ribbentrop Pact in August 1939. Germany was now freed from the risk of war with the Soviets and was assured a supply of oil. The pact included a secret protocol whereby the independent countries of Finland, Estonia, Latvia, Lithuania, Poland, and Romania were divided into spheres of interest of the parties. The Soviet Union had been forced to cede Western Belarus and Western Ukraine to Poland after losing the Soviet-Polish War of 1919 -- 1921, and the Soviet Union sought to regain those territories. On 1 September, barely a week after the pact had been signed, Germany invaded Poland. The Soviet Union invaded Poland from the east on 17 September and on 28 September signed a secret treaty with Nazi Germany to arrange coordination of fighting against Polish resistance. The Soviets targeted the intelligentsia, entrepreneurs, and officers, committing a string of atrocities that culminated in the Katyn massacre and mass relocation to the Gulag in Siberia. Soon thereafter, the Soviet Union occupied the Baltic countries of Estonia, Latvia, and Lithuania, and annexed Bessarabia and Northern Bukovina from Romania. The Soviet Union attacked Finland on 30 November 1939, which started the Winter War. Finnish defences prevented an all-out invasion, resulting in an interim peace, but Finland was forced to cede strategically important border areas near Leningrad. The Soviet Union provided material support to Germany in the war effort against Western Europe through a pair of commercial agreements, the first in 1939 and the second in 1940, which involved exports of raw materials (phosphates, chromium and iron ore, mineral oil, grain, cotton, and rubber).
These and other export goods transported through Soviet and occupied Polish territories allowed Germany to circumvent the British naval blockade. In October and November 1940, German - Soviet talks about the potential of joining the Axis took place in Berlin. Joseph Stalin later personally countered with a separate proposal in a letter on 25 November that contained several secret protocols, including that "the area south of Batum and Baku in the general direction of the Persian Gulf is recognized as the center of aspirations of the Soviet Union '', referring to an area approximating present day Iraq and Iran, and a Soviet claim to Bulgaria. Hitler never responded to Stalin 's letter. Shortly thereafter, Hitler issued a secret directive on the invasion of the Soviet Union. Germany ended the Molotov -- Ribbentrop Pact by invading the Soviet Union in Operation Barbarossa on 22 June 1941. That resulted in the Soviet Union becoming one of the main members of the Allies. Germany then revived its Anti-Comintern Pact, enlisting many European and Asian countries in opposition to the Soviet Union. The Soviet Union and Japan remained neutral towards each other for most of the war by the Soviet - Japanese Neutrality Pact. The Soviet Union ended the Soviet - Japanese Neutrality Pact by invading Manchukuo on 9 August 1945, due to agreements reached at the Yalta Conference with Roosevelt and Churchill. Caudillo Francisco Franco 's Spanish State gave moral, economic, and military assistance to the Axis powers, while nominally maintaining neutrality. Franco described Spain as a member of the Axis and signed the Anti-Comintern Pact in 1941 with Hitler and Mussolini. Members of the ruling Falange party in Spain held irredentist designs on Gibraltar. Falangists also supported Spanish colonial acquisition of Tangier, French Morocco and northwestern French Algeria. In addition, Spain held ambitions on former Spanish colonies in Latin America. 
In June 1940 the Spanish government approached Germany to propose an alliance in exchange for Germany recognizing Spain 's territorial aims: the annexation of the Oran province of Algeria, the incorporation of all Morocco, the extension of Spanish Sahara southward to the twentieth parallel, and the incorporation of French Cameroons into Spanish Guinea. Spain invaded and occupied the Tangier International Zone, maintaining its occupation until 1945. The occupation caused a dispute between Britain and Spain in November 1940; Spain conceded to protect British rights in the area and promised not to fortify the area. The Spanish government secretly held expansionist plans towards Portugal that it made known to the German government. In a communiqué with Germany on 26 May 1942, Franco declared that Portugal should be annexed into Spain. Franco had previously won the Spanish Civil War with the help of Nazi Germany and Fascist Italy. Both were eager to establish another fascist state in Europe. Spain owed Germany over $212 million for supplies of matériel during the Spanish Civil War, and Italian combat troops had actually fought in Spain on the side of Franco 's Nationalists. From 1940 to 1941, Franco endorsed a Latin Bloc of Italy, Vichy France, Spain, and Portugal, with support from the Vatican in order to balance the countries ' powers to that of Germany. Franco discussed the Latin Bloc alliance with Pétain of Vichy France in Montpellier, France in 1940, and with Mussolini in Bordighera, Italy. When Germany invaded the Soviet Union in 1941, Franco immediately offered to form a unit of military volunteers to join the invasion. This was accepted by Hitler and, within two weeks, there were more than enough volunteers to form a division -- the Blue Division (División Azul) under General Agustín Muñoz Grandes. 
The possibility of Spanish intervention in World War II was of concern to the United States, which investigated the activities of Spain's ruling Falange party in Latin America, especially Puerto Rico, where pro-Falange and pro-Franco sentiment was high, even amongst the ruling upper classes. The Falangists promoted the idea of supporting Spain's former colonies in fighting against American domination. Prior to the outbreak of war, support for Franco and the Falange was high in the Philippines. The Falange Exterior, the international department of the Falange, collaborated with Japanese forces against U.S. and Filipino forces in the Philippines through the Philippine Falange. Although officially neutral, Marshal Philippe Pétain's "Vichy regime" collaborated with the Axis from its creation on 10 July 1940. It retained full control of the non-occupied part of France until November 1942 -- when the whole of France was occupied by Germany -- and of a large part of France's colonial empire, until the colonies gradually fell under Free French control. The German invasion army entered Paris on 14 June 1940, following the Battle of France. Pétain became the last Prime Minister of the French Third Republic on 16 June 1940. He sued for peace with Germany, and on 22 June 1940 the French government concluded an armistice with Hitler. Under the terms of the agreement, Germany occupied two-thirds of France, including Paris. Pétain was permitted to keep an "armistice army" of 100,000 men within the unoccupied southern zone. This number included neither the army based in the French colonial empire nor the French fleet. In Africa the Vichy regime was permitted to maintain 127,000 troops. The French also maintained substantial garrisons at the French-mandated territory of Syria and Greater Lebanon, the French colony of Madagascar, and in French Somaliland. Some members of the Vichy government pushed for closer cooperation with Germany, but they were rebuffed by Pétain.
Neither did Hitler accept that France could ever become a full military partner, and constantly prevented the buildup of Vichy 's military strength. After the armistice, relations between the Vichy French and the British quickly worsened. Although the French had told Churchill they would not allow their fleet to be taken by the Germans, the British launched several naval attacks, the most notable of which was against the Algerian harbour of Mers el - Kebir on 3 July 1940. Though Churchill defended his controversial decision to attack the French fleet, the action greatly damaged relations between France and Britain. German propaganda trumpeted these attacks as an absolute betrayal of the French people by their former allies. On 10 July 1940, Pétain was given emergency "full powers '' by a majority vote of the French National Assembly. The following day approval of the new constitution by the Assembly effectively created the French State (l'État Français), replacing the French Republic with the government unofficially called "Vichy France, '' after the resort town of Vichy, where Pétain maintained his seat of government. This continued to be recognised as the lawful government of France by the neutral United States until 1942, while the United Kingdom had recognised de Gaulle 's government - in - exile in London. Racial laws were introduced in France and its colonies and many French Jews were deported to Germany. Albert Lebrun, last President of the Republic, did not resign from the presidential office when he moved to Vizille on 10 July 1940. By 25 April 1945, during Pétain 's trial, Lebrun argued that he thought he would be able to return to power after the fall of Germany, since he had not resigned. In September 1940, Vichy France was forced to allow Japan to occupy French Indochina, a federation of French colonial possessions and protectorates encompassing modern day Vietnam, Laos, and Cambodia. 
The Vichy regime continued to administer them under Japanese military occupation. French Indochina was the base for the Japanese invasions of Thailand, Malaya, and the Dutch East Indies. In 1945, under Japanese sponsorship, the Empire of Vietnam and the Kingdom of Kampuchea were proclaimed as Japanese puppet states. On 26 September 1940, de Gaulle led an attack by Allied forces on the Vichy port of Dakar in French West Africa. Forces loyal to Pétain fired on de Gaulle and repulsed the attack after two days of heavy fighting, drawing Vichy France closer to Germany. During the Anglo -- Iraqi War of May 1941, Vichy France allowed Germany and Italy to use air bases in the French mandate of Syria to support the Iraqi revolt. British and Free French forces later attacked Syria and Lebanon in June -- July 1941, and in 1942 Allied forces took over French Madagascar. More and more colonies abandoned Vichy, joining the Free French territories of French Equatorial Africa, Polynesia, New Caledonia and others who had sided with de Gaulle from the start. In November 1942 Vichy French troops resisted the landing of Allied troops in French North Africa for a couple of days, until Admiral François Darlan negotiated a local ceasefire with the Allies. In response to the landings, Axis troops invaded the non-occupied zone in southern France and ended Vichy France as an entity with any kind of autonomy; it then became a puppet government for the occupied territories. In June 1943, the formerly Vichy - loyal colonial authorities in French North Africa led by Henri Giraud came to an agreement with the Free French to merge their own interim regime with the French National Committee (Comité Français National, CFN), forming a provisional government in Algiers, known as the French Committee of National Liberation (Comité Français de Libération Nationale, CFLN), initially led by Darlan. After Darlan 's assassination, de Gaulle emerged as the uncontested French leader. 
The CFLN raised more troops and re-organised, re-trained and re-equipped the Free French military, in cooperation with Allied forces in preparation of future operations against Italy and the German Atlantic wall. In 1943 the Milice, a paramilitary force which had been founded by Vichy, was subordinated to the Germans and assisted them in rounding up opponents and Jews, as well as fighting the French Resistance. The Germans recruited volunteers in units independent of Vichy. Partly as a result of the great animosity of many right - wingers against the pre-war Front Populaire, volunteers joined the German forces in their anti-communist crusade against the USSR. Almost 7,000 joined the Légion des Volontaires Français (LVF) from 1941 to 1944. The LVF then formed the cadre of the Waffen - SS Division Charlemagne in 1944 -- 1945, with a maximum strength of some 7,500. Both the LVF and the Division Charlemagne fought on the eastern front. Deprived of any military assets, territory or resources, the members of the Vichy government continued to fulfil their role as German puppets, remaining quasi-prisoners in the so - called "Sigmaringen enclave '' in a castle in Baden - Württemberg until the end of the war in May 1945. On 7 December 1941, Japan attacked the US naval bases in Pearl Harbor, Hawaii. According to the stipulation of the Tripartite Pact, Nazi Germany and Fascist Italy were required to come to the defense of their allies only if they were attacked. Since Japan had made the first move, Germany and Italy were not obliged to aid her until the United States counterattacked. Nevertheless, expecting the US to declare war on Germany in any event, Hitler ordered the Reichstag to formally declare war on the United States. Italy also declared war on the U.S. 
Historian Ian Kershaw suggests that this declaration of war against the United States was a serious blunder made by Germany and Italy, as it allowed the United States to join the war in Europe and North Africa without any limitation. On the other hand, American destroyers escorting convoys had been effectively intervening in the Battle of the Atlantic against German and Italian ships and submarines, and the immediate war declaration made the Second Happy Time possible for U-boats. The US had effectively abandoned its strictly neutral stance in March 1941 with the beginning of Lend - Lease. The US destroyer Reuben James was torpedoed and sunk by the submarine U-552 on 31 October 1941. Franklin D. Roosevelt had said in his Fireside Chat on 9 December 1941 that Germany and Italy considered themselves to be in a state of war with the United States. Plans for Rainbow Five had been published by the press early in December 1941, and Hitler could no longer ignore the amount of economic and military aid the US was giving Britain and the USSR. Americans played key roles in financing and supplying the Allies, in the strategic bombardment of Germany, and in the final invasion of the European continent. [Image: Hitler declaring war on the United States on 11 December 1941.] [Image: Italian pilots of a Savoia - Marchetti SM. 75 long - range cargo aircraft meeting with Japanese officials upon arriving in East Asia in 1942.] [Map: German and Japanese direct spheres of influence at their greatest extents in Autumn 1942; arrows show planned movements to an agreed demarcation line at 70 ° E, which was never approached.]
The United States Virgin Islands (USVI; also called the American Virgin Islands), officially the Virgin Islands of the United States, is a group of islands in the Caribbean that is an insular area of the United States located 40 miles (64 km) east of Puerto Rico. The islands are geographically part of the Virgin Islands archipelago and are located in the Leeward Islands of the Lesser Antilles. The U.S. Virgin Islands consist of the main islands of Saint Croix, Saint John, and Saint Thomas, and many other surrounding minor islands. The total land area of the territory is 133.73 square miles (346.36 km²). The territory 's capital is Charlotte Amalie on the island of Saint Thomas. Previously the Danish West Indies of the Kingdom of Denmark -- Norway, they were sold to the United States by Denmark in the Treaty of the Danish West Indies of 1916. They are classified by the U.N. as a Non-Self - Governing Territory, and are currently an organized, unincorporated United States territory. The U.S. Virgin Islands are organized under the 1954 Revised Organic Act of the Virgin Islands and have since held five constitutional conventions. The last and only proposed Constitution, adopted by the Fifth Constitutional Convention of the U.S. Virgin Islands in 2009, was rejected by the U.S. Congress in 2010, which urged the convention to reconvene to address the concerns Congress and the Obama Administration had with the proposed document. The Fifth Constitutional Convention of the U.S. Virgin Islands met in October 2012 to address these concerns, but was not able to produce a revised Constitution before its October 31 deadline. In 2010 the population was 106,405, mostly Afro - Caribbean. Tourism and related categories are the primary economic activity, employing a high percentage of the civilian non-farm labor force, which totalled 42,752 persons in 2016. (The total non-farm labor force was 48,278 persons.) 
Private sector jobs made up 71 percent of the total workforce. The average private sector salary was $34,088 and the average public sector salary was $52,572. In a May 2016 report, some 11,000 people were categorized as being involved in some aspect of agriculture in the first half of 2016, but this category makes up a small part of the total economy. (The islands have a significant rum manufacturing sector.) At that time, there were approximately 607 manufacturing jobs and 1,487 natural resource and construction jobs. The single largest employer was the government. In mid February 2017, the USVI was facing a financial crisis due to a very high debt level of $2 billion and a structural budget deficit of $110 million. The U.S. Virgin Islands were originally inhabited by the Ciboney, Carib, and Arawaks. The islands were named by Christopher Columbus on his second voyage in 1493 for Saint Ursula and her virgin followers. Over the next two hundred years, the islands were held by many European powers, including Spain, Great Britain, the Netherlands, France, and Denmark - Norway. The Danish West India Company settled on Saint Thomas in 1672, settled on Saint John in 1694, and purchased Saint Croix from France in 1733. The islands became royal Danish colonies in 1754, named the Danish West Indian Islands (Danish: De dansk - vestindiske øer). Sugarcane, produced by slave labor, drove the islands ' economy during the 18th and early 19th centuries, until the abolition of slavery by Governor Peter von Scholten on July 3, 1848. The Danish West India and Guinea Company is also credited with naming the island St. John (Danish: Sankt Jan). The Danish crown took full control of Saint John in 1754 along with St. Thomas and St. Croix. Sugarcane plantations such as the famous Annaberg Sugar Plantation were established in great numbers on St. John because of the intense heat and fertile terrain that provided ideal growing conditions. 
The establishment of sugarcane plantations also led to the buying of more slaves from Africa. In 1733, St. John was the site of one of the first significant slave rebellions in the New World, when Akwamu slaves from the Gold Coast took over the island for six months. The Danish were able to defeat the enslaved Africans with help from the French in Martinique. Instead of allowing themselves to be recaptured, more than a dozen of the ringleaders shot themselves before the French forces could capture them and call them to account for their activities during the period of rebel control. It is estimated that by 1775, slaves outnumbered the Danish settlers by a ratio of 5: 1. The indigenous Caribs and Arawaks were also used as slave labor, to the point of the entire native population being absorbed into the larger groups. Slavery was abolished in the Virgin Islands on July 3, 1848. Although some plantation owners refused to accept the abolition, some 5,000 Black people were freed while another 17,000 remained enslaved. In that era, slaves labored mainly on sugar plantations. Other crops included cotton and indigo. Over the following years, strict labor laws were implemented several times, leading planters to abandon their estates, causing a significant drop in population and the overall economy. In the late 1800s, numerous natural disasters combined to worsen the situation. For the remainder of the period of Danish rule the islands were not economically viable, and significant transfers were made from the Danish state budgets to the authorities in the islands. In 1867 a treaty to sell Saint Thomas and Saint John to the United States was agreed, but the sale was never effected. A number of reforms aimed at reviving the islands ' economy were attempted, but none had great success. 
A second draft treaty to sell the islands to the United States was negotiated in 1902 but was defeated in the upper house of the Danish parliament in a tied vote (because the opposition carried a 97 - year - old life member into the chamber). The onset of World War I brought the reforms to a close and again left the islands isolated and exposed. During the submarine warfare phases of the war, the United States, fearing that the islands might be seized by Germany as a submarine base, again approached Denmark about buying them. After a few months of negotiations, a selling price of $25 million in United States gold coin was agreed (equivalent to $562.23 million in 2017 dollars). At the same time the economics of continued possession weighed heavily on the minds of Danish decision makers, and a consensus in favor of selling emerged in the Danish parliament. The Treaty of the Danish West Indies was signed in August 1916, with a Danish referendum held in December 1916 to confirm the decision. The deal was finalized on January 17, 1917, when the United States and Denmark exchanged their respective treaty ratifications. The United States took possession of the islands on March 31, 1917, and the territory was renamed the Virgin Islands of the United States. Every year Transfer Day is recognized as a holiday, to commemorate the acquisition of the islands by the United States. U.S. citizenship was granted to the inhabitants of the islands in 1927. The U.S. dollar was adopted in the territory in 1934, and from 1935 to 1939 the islands were a part of the United States customs area. Water Island, a small island to the south of St. Thomas, was initially administered by the U.S. federal government and did not become a part of the U.S. Virgin Islands territory until 1996, when 50 acres (200,000 m²) of land was transferred to the territorial government. The remaining 200 acres (81 ha) of the island were purchased from the U.S. 
Department of the Interior in May 2005 for $10, a transaction that marked the official change in jurisdiction. Hurricane Hugo struck the U.S. Virgin Islands in 1989, causing catastrophic physical and economic damage, particularly on the island of St. Croix. The territory was again struck by Hurricane Marilyn in 1995, killing eight people and causing more than $2 billion in damage. The islands were again struck by Hurricanes Bertha, Georges, Lenny, and Omar in 1996, 1998, 1999, and 2008, respectively, but damage was not as severe in those storms. In 2017, Hurricane Irma caused catastrophic damage to St. John and St. Thomas; just days later, Hurricane Maria 's eyewall crossed over St. Croix. Until February 2012, the Hovensa plant located on St. Croix was one of the world 's largest petroleum refineries and contributed about 20 % of the territory 's GDP. The facility stopped exporting petroleum products in 2014. In the final year of full refinery operations, the value of exported petroleum products was $12.7 billion (2011 fiscal year). After being shut down, it has operated as no more than an oil storage facility; the closure had provoked a local economic crisis. The U.S. Virgin Islands are in the Atlantic Ocean, about 40 miles (60 km) east of Puerto Rico and immediately west of the British Virgin Islands. They share the Virgin Islands Archipelago with the Puerto Rican Virgin Islands of Vieques and Culebra (administered by Puerto Rico), and the British Virgin Islands. The territory consists of three main islands: Saint Thomas, Saint John, and Saint Croix, as well as several dozen smaller islands. The main islands have nicknames often used by locals: "Twin City '' (St. Croix), "Rock City '' (St. Thomas) and "Love City '' (St. John). The combined land area of the islands is roughly twice the size of Washington, D.C. The U.S. 
Virgin Islands are known for their white sand beaches, including Magens Bay and Trunk Bay, and strategic harbors, including Charlotte Amalie and Christiansted. Like most Caribbean islands, the islands of the Virgin Islands, including Saint Thomas, are volcanic in origin and hilly. The highest point is Crown Mountain, Saint Thomas (1,555 ft or 474 m). Saint Croix, the largest of the U.S. Virgin Islands, lies to the south and has a flatter terrain due to being coral in origin. The National Park Service manages more than half of Saint John, nearly all of Hassel Island, and many acres of coral reef. (See also Virgin Islands National Park, Virgin Islands Coral Reef National Monument, Buck Island Reef National Monument, Christiansted National Historic Site, and Salt River Bay National Historical Park and Ecological Preserve.) The U.S. Virgin Islands lie on the boundary of the North American plate and the Caribbean Plate. Natural hazards include earthquakes and hurricanes. The United States Virgin Islands enjoy a tropical climate, with little seasonal change throughout the year. Rainfall is concentrated in the high sun period (May through October), while in the winter the northeast trade winds prevail. Summer and winter high temperatures differ by 5 ° F (3 ° C) or less on average. The U.S. Virgin Islands are an organized, unincorporated United States territory. Although they are U.S. citizens, U.S. Virgin Islanders residing in the territory are ineligible to vote for the President of the United States. They are, however, eligible to vote if they become residents of mainland U.S. states. The U.S. Democratic and Republican parties allow U.S. Virgin Islands citizens to vote in their presidential primary elections for delegates to the respective national conventions. People born in the U.S. Virgin Islands derive their U.S. citizenship from Congressional statute. The main political parties in the U.S. 
Virgin Islands are the Democratic Party of the Virgin Islands, the Independent Citizens Movement, and the Republican Party of the Virgin Islands. Additional candidates run as independents. At the national level, the U.S. Virgin Islands elect a delegate to Congress from their at - large congressional district. The elected delegate, while able to vote in committee, can not participate in floor votes. The current House of Representatives delegate is Stacey Plaskett. At the territorial level, fifteen senators -- seven from the district of Saint Croix, seven from the district of Saint Thomas and Saint John, and one senator at - large who must be a resident of Saint John -- are elected for two - year terms to the unicameral Virgin Islands Legislature. There is no limit as to the number of terms they can serve. The U.S. Virgin Islands have elected a territorial governor every four years since 1970. Previous governors were appointed by the President of the United States. The U.S. Virgin Islands have a District Court, Superior Court and the Supreme Court. The District Court is responsible for federal law, while the Superior Court is responsible for U.S. Virgin Islands law at the trial level and the Supreme Court is responsible for appeals from the Superior Court for all appeals filed on or after January 29, 2007. Appeals filed prior to that date are heard by the Appellate Division of the District Court. Appeals from the federal District Court are heard by the United States Court of Appeals for the Third Circuit, located in Philadelphia, Pennsylvania. District Court judges are appointed by the U.S. president, while Superior Court and Supreme Court judges are appointed by the governor. On October 21, 1976, President Gerald Ford signed Pub. L. 94 -- 584 authorizing the people of the United States Virgin Islands to organize a government pursuant to a constitution, which would be automatically approved if Congress did not act within 60 days. On May 26, 2009 the U.S. 
Virgin Islands Fifth Constitutional Convention adopted a proposed Constitution of the Virgin Islands, which was submitted by President Barack Obama to Congress on March 1, 2010. On June 30, 2010, President Obama signed Pub. L. 111 -- 194 in which Congress rejected the proposed constitution and urged the constitutional convention to reconvene. As of early 2017, the territory still did not have its own constitution. Little has been achieved on this front since 2009 when a proposed constitution was contested by the U.S. Justice Department on the grounds that the powers sought exceeded what would be considered allowable under territorial status. In September 2012, the Fifth Constitutional Convention of the U.S. Virgin Islands was unable to come to a decision on the contents of a proposed constitution by the October 31 deadline. Administratively, the U.S. Virgin Islands are divided into three (3) districts and twenty (20) sub-districts. While a Danish possession, the Islands were divided into "quarters '' (five on St. John and nine on St. Croix) which were further divided into many dozens of "estates ''. Estate names are still used to write addresses; estates and quarters are used in describing real estate, especially on St. John and St. Croix. More densely populated towns such as Frederiksted and Christiansted on St. Croix were historically referred to as "districts '', in contrast to the surrounding plantation land. A 1993 referendum on status attracted only 31.4 % turnout, and so its results (in favor of the status quo) were considered void. No further referenda have been scheduled since. In 2004, the 25th Legislature of the Virgin Islands established the Fifth Constitutional Convention, a constitutional convention gathered in order to draft a new constitution. In June 2009, Governor John de Jongh, Jr. 
rejected the resulting constitutional draft, saying the terms of the document would "violate federal law, fail to defer to federal sovereignty and disregard basic civil rights. '' A lawsuit filed by members of the Convention to force Governor de Jongh to forward the document to President Barack Obama was ultimately successful. The President of the United States forwarded the proposal to Congress -- which then had 60 days to approve or reject the document -- in May 2010, along with a report noting concerns raised by the U.S. Department of Justice and restating the issues noted by Governor de Jongh. A U.S. Congressional resolution disapproving of the proposed constitution and requesting that the Fifth Constitutional Convention reconvene to consider changes to address these issues was signed into law by President Obama on June 30, 2010. Months later, a federal lawsuit was filed in the Federal District Court of the Virgin Islands in 2011. The lawsuit claimed that the United States had to provide U.S. Virgin Islanders with the ability to be represented in Congress and vote for U.S. President. The case is Civil No. 3: 11 - cv - 110, Charles v. U.S. Federal Elections Commission et al. (3: 11 - cv - 00110 - AET - RM). It alleged that racial discrimination present in an all - white and segregated U.S. Congress of 1917 was the impetus to deny the right to vote to a majority non-white constituency. The case was ultimately dismissed and closed on August 16, 2012 by District Judge Anne E. Thompson from the Federal District Court of the Virgin Islands, Division of St. Croix. The Fifth Constitutional Convention of the U.S. Virgin Islands met in October 2012 but was not able to produce a revised Constitution before its October 31 deadline. 
In 2016, the United Nations 's Special Committee on Decolonisation recommended to the UN 's General Assembly that this larger body should assist in "decolonization '' and help the people of the territory to "determine freely their future political status ''. Specifically, the Special Committee recommended that the "views of the people of the United States Virgin Islands in respect of their right to self - determination should be ascertained '' and that the UN should "actively pursue a public awareness campaign aimed at assisting the people of the United States Virgin Islands with their inalienable right to self - determination and in gaining a better understanding of the options for self - determination ''. A 2012 Economic report from the US Census Bureau indicated a total of 2,414 business establishments generating $6.8 billion in sales, employing 32,465 people and paying $1.1 billion in payroll per year. Between 2007 and 2012, sales declined by $12.6 billion, or 64.9 percent. (In 2007, total sales were $19.5 billion and the number employed was 35,300.) According to a report on the first half of 2016 by the VI Bureau of Economic Research, the unemployment rate was 11.5 percent. In May 2016 the islands ' Bureau of Economic Research indicated that there were 37,613 non-agricultural wage and salary jobs in the islands. This report states that the "leisure and hospitality sector '' employed an average of 7,333 people. The retail trade sector, which also serves many tourists, averaged another 5,913 jobs. Other categories which also include some tourism jobs include Arts and Entertainment (792 jobs), Accommodation and Food (6,541 jobs), Accommodation (3755 jobs), Food Services and Drink (2,766 jobs). A large percentage of the 37,613 non-farm workers are employed in dealing with tourists. Serving the local population is also part of the role of these sectors. 
The median income for a household in the territory was $24,704, and the median income for a family was $28,553 according to the 2010 Census. Males had a median income of $28,309 versus $22,601 for females. The per capita income for the territory was $13,139. About 28.7 % of families and 32.5 % of the population were below the poverty line, including 41.7 % of those less than 18 years old and 29.8 % of those 65 or more years old. Nearly 70 % of adults had at least a high school diploma and 19.2 % had a bachelor 's degree or higher. Analysts reviewing the economy often point to the closure of the HOVENSA oil refinery, the islands ' largest private sector employer, in early 2012 as having a major negative impact on the territory 's economy. In late 2013, the Reserve Bank of New York 's Research and Statistics Group pointed out that manufacturing employment dropped by 50 percent in May 2012, and by another 4 percent by November 2012, and that the GDP fell by 13 percent, "mainly due to an 80 percent drop - off in exports (mostly refined petroleum) ''. On the other hand, tourism and some other service industries were growing. As well, the 2010 census indicated that a relatively high share of the adult population is in the labor force: 66 percent, versus 65 percent on the mainland and well below 50 percent in Puerto Rico. The bottom line in this report however is that "it may also be worthwhile to look at the physical infrastructure and human capital built up over the years, with an eye toward using it for other types of productive economic activity ''. A May 2016 report by Bloomberg expressed concern about the islands ' tax - supported debt load. By January 23, 2017 this had increased to $2 billion. That translated to a per capita debt of $19,000, which was higher than the per capita debt in Puerto Rico which was undergoing a severe financial crisis at the time. 
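The per - capita debt figure quoted above follows from simple division of the two numbers given in the text (a $2 billion tax - supported debt and the 2010 census population of 106,405); a quick illustrative check:

```python
# Illustrative check of the per-capita debt figure quoted above.
# Inputs are the totals stated in the text, not independent data.
total_debt = 2_000_000_000   # tax-supported debt, USD, as of January 2017
population = 106_405         # 2010 census population

per_capita = total_debt / population
print(round(per_capita))     # → 18796, i.e. roughly the $19,000 reported
```

The rounded result of about $18,800 per resident is consistent with the "$19,000 '' per - capita figure cited in comparison with Puerto Rico.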
A Debtwire analyst writing in Forbes indicated that nothing short of a miracle would prevent a financial collapse. Another area of concern was the structural budget deficit which was at $110 million in mid February 2017. The government instituted a new law in March 2017 with new or increased taxes on rum, beer, tobacco products and sugary drinks, as well as internet purchases and timeshare unit owners. Tourism, trade, and other service - oriented industries are the primary economic activities, accounting for nearly 60 % of the GDP. Approximately 2.5 million tourists per year visit, most arriving on cruise ships. Such visitors do not spend large amounts of money ($146.70 each on average) but as a group, they contributed $339.8 million to the economy in 2012. However, the travel industry warned in late 2014 that work needs to be done for USVI tourism practices to meet 21st century demands. "The needs of the community and the tourists may be diametrically opposed; however, for tourism to flourish cooperation is a necessity. From reduced energy costs to increased educational opportunities, from improved healthcare to a continued reduction in crime, these and many other challenges must be tackled. There is only now. '' Additionally, the islands frequently are a starting point for private yacht charters to the neighboring British Virgin Islands. Euromonitor indicates that over 50 percent of the workforce is employed in some tourism - related work. The manufacturing sector consists of mainly rum distilling. The agricultural sector is small, with most food being imported. International business and financial services are a small but growing component of the economy. Most energy is also generated from imported oil, leading to electricity costs four to five times higher than the U.S. mainland. The Virgin Islands Water and Power Authority also uses imported energy to operate its desalination facilities to provide fresh water. 
The CIA 's World Factbook lists the value of federal programs and grants -- $241.4 million in 2013, 19.7 % of the territory 's total revenues -- and that "the economy remains relatively diversified. Along with the tourist industry, it appears that rum exports, trade, and services will be major income sources in future years ''. There are some military facilities and personnel on the islands, supported by the US government: Although a public airport, Henry E. Rohlsen Airport has serviced aircraft from the United States Air Force as well as the United States Army. The U.S. Virgin Islands are located in the Atlantic Standard Time zone and do not participate in daylight saving time. When the mainland United States is on Standard Time, the U.S. Virgin Islands are one hour ahead of Eastern Standard Time. When the mainland United States is on daylight saving time, Eastern Daylight Time is the same as Atlantic Standard Time. The U.S. Virgin Islands are an independent customs territory from the mainland United States and operate largely as a free port. U.S. citizens thus do not have to clear customs when arriving in the U.S. Virgin Islands, but do when traveling to the mainland. Local residents are not subject to U.S. federal income taxes on U.S. Virgin Islands source income; they pay taxes to the territory equal to what their federal taxes would be if they lived in a state. The Henry E. Rohlsen International Airport serves St. Croix and the Cyril E. King International Airport serves St. Thomas and St. John. The U.S. Virgin Islands is the only U.S. jurisdiction that drives on the left. This was inherited from what was then - current practice on the islands at the time of the 1917 transfer of the territory to the United States from Denmark. However, because most cars in the territory are imported from the mainland United States, the cars in the territory are left - hand drive. As in other U.S. territories, U.S. 
Virgin Islands mail service is handled by the United States Postal Service, using the two - character state code "VI '' for domestic mail delivery. ZIP codes are in the 008xx range. As of January 2010, specifically assigned codes include 00801 -- 00805 (St Thomas), 00820 -- 00824 (Christiansted), 00830 -- 00831 (St John), 00840 -- 00841 (Frederiksted), and 00850 -- 00851 (Kingshill). The islands are part of the North American Numbering Plan, using area code 340, and island residents and visitors are able to call most toll - free U.S. numbers. In 2010 the U.S. Virgin Islands had a population of 106,405, with 40,648 households and 26,636 families. Of the 40,648 households, 34.7 % had children under the age of 18 living with them, 33.2 % were married couples living together, 24.9 % had a female householder with no husband present, and 34.5 % were non-families. 30.2 % of all households were made up of individuals and 6.3 % had someone living alone who was 65 years of age or older. The average household size was 2.64 and the average family size was 3.34. In the territory, the population in 2010 was distributed with 31.6 % under the age of 18, 8.0 % from 18 to 24, 27.1 % from 25 to 44, 24.9 % from 45 to 64, and 8.4 % who were 65 years of age or older. The median age was 33 years. For every 100 females, there were 91.4 males. For every 100 females ages 18 and up, there were 87.7 males. The annual population growth is − 0.12 %. The literacy rate for the adult population was 94.9 % in 2010. Many residents can trace their ancestry to other Caribbean islands, especially Puerto Rico and the Lesser Antilles. The territory is largely Afro - Caribbean in origin. English is currently the dominant language and Spanish is spoken by about 17 % of the population. Other languages are spoken by 11 % of the population.
English has been the predominant language since 1917, when the islands were transferred from Denmark to the United States. Under Danish rule, the official language was Danish, but it was solely the language of administration, spoken by Danes, a tiny minority of the overall population that primarily occupied administrative roles in colonial Danish West Indian society. However, place names and surnames of Danish and Norwegian origin still remain among natives. Although the U.S. Virgin Islands was a Danish possession during most of its colonial history, Danish was never a spoken language among the populace, black or white, as the majority of plantation and slave owners were of Dutch, English, Scottish or Irish descent. Even during Danish ownership, Dutch was more common during at least part of those 245 years, specifically on St. Thomas and St. John, while on St. Croix English was the dominant language. St. Croix was owned by the French until 1733, when the island was sold to the Danish West Indian and Guinea Company. By 1741 there were five times as many English on the island as Danes. English Creole emerged on St. Croix more so than Dutch Creole, which was more popular on St. Thomas and St. John. Other languages spoken in the Danish West Indies included Irish, Scots, Spanish, and French, as well as Virgin Islands English Creole. Virgin Islands Creole English, an English - based creole locally known as "dialect '', is spoken in informal situations. The form of Virgin Islands Creole spoken on St. Croix, known as Crucian, is slightly different from that spoken on St. Thomas and St. John. Because the U.S. Virgin Islands are home to thousands of immigrants from across the Caribbean, Spanish and various French creole languages are also widely spoken. As of the 2000 census, 25.3 % of persons over the age of five speak a language other than English at home. Spanish is spoken by 16.8 % of the population and French is spoken by 6.6 %.
Christianity is the dominant religion in the U.S. Virgin Islands. According to the Pew Research Center, 94.8 % of the population was Christian in 2010. Baptist, Roman Catholic and Episcopalian were the largest denominations in the 2010 Census. Protestantism is the most widespread of the religious categories, reflecting the territory 's Danish and Norwegian colonial heritage and, more recently, its status as a part of the United States. There is also a strong Roman Catholic presence. Rastafari is also prevalent. Saint Thomas is home to one of the oldest Jewish communities in the Western Hemisphere, as Sephardic Jews began to settle the island in the 18th century as traders and merchants. The St. Thomas Synagogue in Charlotte Amalie is the second oldest synagogue on American soil and the oldest in continuous use. In 2010, the national average life expectancy was 79.61 years. It was 76.57 years for men and 82.83 for women. The U.S. Virgin Islands Department of Education serves as the territory 's education agency, and has two school districts: St. Thomas - St. John School District and St. Croix School District. The University of the Virgin Islands provides higher education leading to associate 's, bachelor 's, and master 's degrees, with campuses on St. Thomas and St. Croix. The culture of the Virgin Islands reflects the various people that have inhabited the present - day U.S. Virgin Islands and British Virgin Islands, both of which, despite their political separation, have kept close cultural ties. The culture derives chiefly from West African, European and American cultures, in addition to influences from immigrants from the Arab world, India and other Caribbean islands. The islands were strongly influenced by the Dutch, French and Danish during the periods when they were under these powers ' control. The islands have a number of AM and FM radio stations (mostly on St. Thomas and St.
Croix) broadcasting music, religious, and news programming. (See List of radio stations in U.S. Territories.) Full and low - power television stations are split between St. Thomas and St. Croix. (See List of television stations in the U.S. Virgin Islands.) Virgin Islands government employees are also given administrative leave for St. Croix carnival events in January and St. Thomas carnival events in April / May. Coordinates: 18 ° 21 ′ N 64 ° 56 ′ W
when does criminal minds season 13 finale air
Criminal Minds (season 13) - wikipedia The thirteenth season of Criminal Minds was ordered on April 7, 2017, by CBS with an order of 22 episodes. The season premiered on September 27, 2017 in a new time slot at 10:00 PM on Wednesday, when it had previously aired at 9:00 PM on Wednesday since its inception. The season concluded on April 18, 2018 with a two - part season finale. The entire main cast from the previous season returned, except Damon Gupton (Stephen Walker), who was fired from the show; his character was killed off off - screen in the season premiere. Following the cancellation of Criminal Minds: Beyond Borders, it was announced that Daniel Henney (Matt Simmons) would join the cast this season as a series regular. Matthew Gray Gubler directed the seventeenth episode of the season, which was said to be "the spookiest episode of Season 13 '' and to involve clowns. On August 10, 2017, it was revealed that Aisha Tyler would make her television directing debut with the sixth episode of the season. On August 12, 2017, it was revealed that Erica Messer and Kirsten Vangsness would co-write the eleventh episode of the season, the fourth episode they have co-written together. On October 31, 2017, it was announced that Adam Rodriguez would make his Criminal Minds directing debut with the sixteenth episode of the season. On June 11, 2017, it was announced that Damon Gupton had been let go from the show after one season. CBS said his departure was "part of a creative change on the show ''.
On June 20, 2017, CBS announced that Daniel Henney, who was a series regular on Criminal Minds: Beyond Borders as Matt Simmons, would join the main show as a series regular for the thirteenth season. On October 12, 2017, it was announced that Shemar Moore would reprise his role as Derek Morgan in the fifth episode of the season ("Lucky Strikes '') as his character returns to help Penelope Garcia get through a tough time.
where do your sympathetic nerves emerge from in your cns
Sympathetic nervous system - wikipedia The sympathetic nervous system (SNS) is one of the two main divisions of the autonomic nervous system, the other being the parasympathetic nervous system. (The enteric nervous system (ENS) is now usually referred to as separate from the autonomic nervous system, since it has its own independent reflex activity.) The autonomic nervous system functions to regulate the body 's unconscious actions. The sympathetic nervous system 's primary process is to stimulate the body 's fight - or - flight response. It is, however, constantly active at a basic level to maintain homeostasis. The sympathetic nervous system is described as being complementary to the parasympathetic nervous system, which stimulates the body to "feed and breed '' and to (then) "rest - and - digest ''. There are two kinds of neurons involved in the transmission of any signal through the sympathetic system: pre-ganglionic and post-ganglionic. The shorter preganglionic neurons originate from the thoracolumbar region of the spinal cord, specifically from T1 to L2 -- L3, and travel to a ganglion, often one of the paravertebral ganglia, where they synapse with a postganglionic neuron. From there, the long postganglionic neurons extend across most of the body. At the synapses within the ganglia, preganglionic neurons release acetylcholine, a neurotransmitter that activates nicotinic acetylcholine receptors on postganglionic neurons. In response to this stimulus, the postganglionic neurons release norepinephrine, which activates adrenergic receptors that are present on the peripheral target tissues. The activation of target tissue receptors causes the effects associated with the sympathetic system.
However, there are three important exceptions to this pattern. Sympathetic nerves arise from near the middle of the spinal cord in the intermediolateral nucleus of the lateral grey column, beginning at the first thoracic vertebra of the vertebral column and thought to extend to the second or third lumbar vertebra. Because its cells begin in the thoracic and lumbar regions of the spinal cord, the sympathetic nervous system is said to have a thoracolumbar outflow. Axons of these nerves leave the spinal cord through the anterior root. They pass near the spinal (sensory) ganglion, where they enter the anterior rami of the spinal nerves. However, unlike somatic innervation, they quickly separate out through white rami communicantes (so called from the shiny white sheaths of myelin around each axon), which connect to either the paravertebral ganglia (which lie near the vertebral column) or the prevertebral ganglia (which lie near the aortic bifurcation), extending alongside the spinal column. To reach target organs and glands, the axons must travel long distances in the body, and, to accomplish this, many axons relay their message to a second cell through synaptic transmission. The ends of the axons link across a space, the synapse, to the dendrites of the second cell. The first cell (the presynaptic cell) sends a neurotransmitter across the synaptic cleft, where it activates the second cell (the postsynaptic cell). The message is then carried to the final destination. Presynaptic nerves ' axons terminate in either the paravertebral ganglia or prevertebral ganglia. There are four different paths an axon can take before reaching its terminal. In all cases, the axon enters the paravertebral ganglion at the level of its originating spinal nerve. After this, it can synapse in this ganglion; ascend to a more superior, or descend to a more inferior, paravertebral ganglion and synapse there; or descend to a prevertebral ganglion and synapse there with the postsynaptic cell.
The postsynaptic cell then goes on to innervate the targeted end effector (i.e. gland, smooth muscle, etc.). Because paravertebral and prevertebral ganglia are relatively close to the spinal cord, presynaptic neurons are generally much shorter than their postsynaptic counterparts, which must extend throughout the body to reach their destinations. A notable exception to the routes mentioned above is the sympathetic innervation of the suprarenal (adrenal) medulla. In this case, presynaptic neurons pass through paravertebral ganglia, on through prevertebral ganglia and then synapse directly with suprarenal tissue. This tissue consists of cells that have pseudo - neuronal qualities in that, when activated by the presynaptic neuron, they release their neurotransmitter (epinephrine) directly into the bloodstream. In the sympathetic nervous system and other components of the peripheral nervous system, these synapses are made at sites called ganglia. The cell that sends its fiber is called a preganglionic cell, while the cell whose fiber leaves the ganglion is called a postganglionic cell. As mentioned previously, the preganglionic cells of the sympathetic nervous system are located between the first thoracic and third lumbar segments of the spinal cord. Postganglionic cells have their cell bodies in the ganglia and send their axons to target organs or glands. The ganglia include not just the sympathetic trunks but also the cervical ganglia (superior, middle and inferior), which send sympathetic nerve fibers to the head and thorax organs, and the celiac and mesenteric ganglia, which send sympathetic fibers to the gut. Messages travel through the sympathetic nervous system in a bi-directional flow. Efferent messages can trigger changes in different parts of the body simultaneously.
For example, the sympathetic nervous system can accelerate heart rate; widen bronchial passages; decrease motility (movement) of the large intestine; constrict blood vessels; increase peristalsis in the oesophagus; cause pupillary dilation, piloerection (goose bumps) and perspiration (sweating); and raise blood pressure. One exception is with certain blood vessels such as those in the cerebral and coronary arteries, which dilate (rather than constrict) with an increase in sympathetic tone. This is because of a proportional increase in the presence of β adrenergic receptors rather than α receptors; β receptors promote vessel dilation, whereas α1 receptors promote constriction. An alternative explanation is that the primary (and direct) effect of sympathetic stimulation on coronary arteries is vasoconstriction, followed by a secondary vasodilation caused by the release of vasodilatory metabolites due to the sympathetically increased cardiac inotropy and heart rate. This secondary vasodilation caused by the primary vasoconstriction is termed functional sympatholysis, the overall effect of which on coronary arteries is dilation. The target synapse of the postganglionic neuron is mediated by adrenergic receptors and is activated by either norepinephrine (noradrenaline) or epinephrine (adrenaline). The sympathetic nervous system is responsible for up - and down - regulating many homeostatic mechanisms in living organisms. Fibers from the SNS innervate tissues in almost every organ system, providing at least some regulation of functions as diverse as pupil diameter, gut motility, and urinary system output and function. It is perhaps best known for mediating the neuronal and hormonal stress response commonly known as the fight - or - flight response.
This response is also known as the sympatho - adrenal response of the body, as the preganglionic sympathetic fibers that end in the adrenal medulla (but also all other sympathetic fibers) secrete acetylcholine, which stimulates a large secretion of adrenaline (epinephrine) and, to a lesser extent, noradrenaline (norepinephrine) from it. Therefore, this response, which acts primarily on the cardiovascular system, is mediated directly via impulses transmitted through the sympathetic nervous system and indirectly via catecholamines secreted from the adrenal medulla. The sympathetic nervous system is responsible for priming the body for action, particularly in situations threatening survival. One example of this priming is in the moments before waking, in which sympathetic outflow spontaneously increases in preparation for action. Sympathetic nervous system stimulation causes vasoconstriction of most blood vessels, including many of those in the skin, the digestive tract, and the kidneys. This occurs as a result of activation of alpha - 1 adrenergic receptors by norepinephrine released by post-ganglionic sympathetic neurons. These receptors exist throughout the vasculature of the body but are inhibited and counterbalanced by beta - 2 adrenergic receptors (stimulated by epinephrine release from the adrenal glands) in the skeletal muscles, the heart, the lungs, and the brain during a sympathoadrenal response. The net effect of this is a shunting of blood away from the organs not necessary to the immediate survival of the organism and an increase in blood flow to those organs involved in intense physical activity. The afferent fibers of the autonomic nervous system, which transmit sensory information from the internal organs of the body back to the central nervous system (CNS), are not divided into parasympathetic and sympathetic fibers as the efferent fibers are. Instead, autonomic sensory information is conducted by general visceral afferent fibers.
General visceral afferent sensations are mostly unconscious visceral motor reflex sensations from hollow organs and glands that are transmitted to the CNS. While the unconscious reflex arcs normally are undetectable, in certain instances they may send pain sensations to the CNS masked as referred pain. If the peritoneal cavity becomes inflamed or if the bowel is suddenly distended, the body will interpret the afferent pain stimulus as somatic in origin. This pain is usually non-localized. The pain is also usually referred to dermatomes that are at the same spinal nerve level as the visceral afferent synapse. Together with the other component of the autonomic nervous system, the parasympathetic nervous system, the sympathetic nervous system aids in the control of most of the body 's internal organs. Reaction to stress -- as in the fight - or - flight response -- is thought to counteract the parasympathetic system, which generally works to promote maintenance of the body at rest. The comprehensive functions of both the parasympathetic and sympathetic nervous systems are not so straightforward, but this is a useful rule of thumb. In heart failure, the sympathetic nervous system increases its activity, leading to increased force of muscular contractions that in turn increases the stroke volume, as well as peripheral vasoconstriction to maintain blood pressure. However, these effects accelerate disease progression, eventually increasing mortality in heart failure. Sympathicotonia is a stimulated condition of the sympathetic nervous system, marked by vascular spasm, elevated blood pressure, and goose bumps. A recent study has shown expansion of Foxp3+ natural Treg cells in the bone marrow of mice after brain ischemia, and that this Treg expansion is related to sympathetic stress signaling after brain ischemia. The name of this system can be traced to the concept of sympathy, in the sense of "connection between parts '', first used medically by Galen.
In the 18th century, Jacob B. Winslow applied the term specifically to nerves.
explain how the principle of intervention is an idea based on conservatism
Conservatism - Wikipedia Conservatism is a political and social philosophy that promotes retaining traditional social institutions in the context of culture and civilization. The central tenets of conservatism include tradition, human imperfection, organic society, hierarchy and authority, and property rights. Conservatives seek to preserve institutions, emphasizing stability and continuity, while the more extreme elements, called reactionaries, oppose modernism and seek a return to "the way things were ''. The first established use of the term in a political context originated with François - René de Chateaubriand in 1818, during the period of Bourbon restoration that sought to roll back the policies of the French Revolution. The term, historically associated with right - wing politics, has since been used to describe a wide range of views. There is no single set of policies that are universally regarded as conservative, because the meaning of conservatism depends on what is considered traditional in a given place and time. Thus conservatives from different parts of the world -- each upholding their respective traditions -- may disagree on a wide range of issues. Edmund Burke, an 18th - century politician who opposed the French Revolution but supported the American Revolution, is credited as one of the main theorists of conservatism in Great Britain in the 1790s. According to Quintin Hogg, the chairman of the British Conservative Party in 1959: "Conservatism is not so much a philosophy as an attitude, a constant force, performing a timeless function in the development of a free society, and corresponding to a deep and permanent requirement of human nature itself ''. In contrast to the tradition - based definition of conservatism, political theorists such as Corey Robin define conservatism primarily in terms of a general defense of social and economic inequality.
From this perspective, conservatism is less an attempt to uphold traditional institutions and more "a meditation on -- and theoretical rendition of -- the felt experience of having power, seeing it threatened, and trying to win it back ''. Liberal conservatism incorporates the classical liberal view of minimal government intervention in the economy; individuals should be free to participate in the market and generate wealth without government interference. Individuals, however, cannot be thoroughly depended on to act responsibly in other spheres of life; therefore, liberal conservatives believe that a strong state is necessary to ensure law and order, and that social institutions are needed to nurture a sense of duty and responsibility to the nation. Liberal conservatism is a variant of conservatism that is strongly influenced by liberal stances. As both of these terms have had different meanings over time and across countries, liberal conservatism also has a wide variety of meanings. Historically, the term often referred to the combination of economic liberalism, which champions laissez - faire markets, with the classical conservative concern for established tradition, respect for authority and religious values. It contrasted itself with classical liberalism, which supported freedom for the individual in both the economic and social spheres. Over time, the general conservative ideology in many countries adopted economic liberal arguments, and the term liberal conservatism was replaced with conservatism. This is also the case in countries where liberal economic ideas have been the tradition, such as the United States, and are thus considered conservative. In other countries where liberal conservative movements have entered the political mainstream, such as Italy and Spain, the terms liberal and conservative may be synonymous.
The liberal conservative tradition in the United States combines the economic individualism of the classical liberals with a Burkean form of conservatism (which has also become part of the American conservative tradition, such as in the writings of Russell Kirk). A secondary meaning for the term liberal conservatism that has developed in Europe is a combination of more modern conservative (less traditionalist) views with those of social liberalism. This has developed in opposition to the more collectivist views of socialism. Often this involves stressing what are now conservative views of free - market economics and belief in individual responsibility, together with social liberal views on defence of civil rights, environmentalism and support for a limited welfare state. In continental Europe, this is sometimes also translated into English as social conservatism. Conservative liberalism is a variant of liberalism that combines liberal values and policies with conservative stances, or, more simply, the right wing of the liberal movement. The roots of conservative liberalism are found at the beginning of the history of liberalism. Until the two World Wars, in most European countries -- from Germany to Italy -- the political class was formed by conservative liberals. Events after World War I moved the more radical version of classical liberalism toward a more conservative (i.e. more moderate) type of liberalism. Libertarian conservatism describes certain political ideologies within the United States and Canada which combine libertarian economic issues with aspects of conservatism. Its four main branches are constitutionalism, paleolibertarianism, small government conservatism and Christian libertarianism. They generally differ from paleoconservatives in that they are in favor of more personal and economic freedom. Agorists such as Samuel Edward Konkin III labeled libertarian conservatism right - libertarianism.
In contrast to paleoconservatives, libertarian conservatives support strict laissez - faire policies such as free trade, opposition to any national bank and opposition to business regulations. They are vehemently opposed to environmental regulations, corporate welfare, subsidies and other areas of economic intervention. Many conservatives, especially in the United States, believe that the government should not play a major role in regulating business and managing the economy. They typically oppose efforts to charge high tax rates and to redistribute income to assist the poor. Such efforts, they argue, do not properly reward people who have earned their money through hard work. Fiscal conservatism is the economic philosophy of prudence in government spending and debt. In his Reflections on the Revolution in France, Edmund Burke argued that a government does not have the right to run up large debts and then throw the burden on the taxpayer: ... (I)t is to the property of the citizen, and not to the demands of the creditor of the state, that the first and original faith of civil society is pledged. The claim of the citizen is prior in time, paramount in title, superior in equity. The fortunes of individuals, whether possessed by acquisition or by descent or in virtue of a participation in the goods of some community, were no part of the creditor 's security, expressed or implied... (T)he public, whether represented by a monarch or by a senate, can pledge nothing but the public estate; and it can have no public estate except in what it derives from a just and proportioned imposition upon the citizens at large. National conservatism is a political term used primarily in Europe to describe a variant of conservatism which concentrates more on national interests than standard conservatism as well as upholding cultural and ethnic identity, while not being outspokenly nationalist or supporting a far - right approach.
In Europe, national conservatives are usually eurosceptics. National conservatism is heavily oriented towards the traditional family and social stability, as well as being in favour of limiting immigration. As such, national conservatives can be distinguished from economic conservatives, for whom free market economic policies, deregulation and fiscal conservatism are the main priorities. Some commentators have identified a growing gap between national and economic conservatism: "(M)ost parties of the Right (today) are run by economic conservatives who, in varying degrees, have marginalized social, cultural, and national conservatives ''. National conservatism is also related to traditionalist conservatism. Traditionalist conservatism is a political philosophy emphasizing the need for the principles of natural law and transcendent moral order, tradition, hierarchy and organic unity, agrarianism, classicism and high culture, as well as the intersecting spheres of loyalty. Some traditionalists have embraced the labels "reactionary '' and "counterrevolutionary '', defying the stigma that has attached to these terms since the Enlightenment. Having a hierarchical view of society, many traditionalist conservatives, including a few Americans, defend the monarchical political structure as the most natural and beneficial social arrangement. Cultural conservatives support the preservation of the heritage of one nation, or of a shared culture that is not defined by national boundaries. The shared culture may be as divergent as Western culture or Chinese culture. In the United States, the term cultural conservative may imply a conservative position in the culture war. Cultural conservatives hold fast to traditional ways of thinking even in the face of monumental change. They believe strongly in traditional values and traditional politics and often have an urgent sense of nationalism. Social conservatism is distinct from cultural conservatism, although there are some overlaps.
Social conservatives may believe that society is built upon a fragile network of relationships which need to be upheld through duty, traditional values and established institutions, and that the government has a role in encouraging or enforcing traditional values or behaviours. A social conservative wants to preserve traditional morality and social mores, often by opposing what they consider radical policies or social engineering. Social change is generally regarded as suspect. A second meaning of the term social conservatism developed in the Nordic countries and continental Europe, where it refers to liberal conservatives supporting modern European welfare states. Social conservatives (in the first meaning of the word) in many countries generally favour the pro-life position in the abortion controversy and oppose human embryonic stem cell research (particularly if publicly funded); oppose both eugenics and human enhancement (transhumanism) while supporting bioconservatism; support a traditional definition of marriage as being one man and one woman; view the nuclear family model as society 's foundational unit; oppose expansion of civil marriage and child adoption rights to couples in same - sex relationships; promote public morality and traditional family values; oppose atheism, especially militant atheism, secularism and the separation of church and state; support the prohibition of drugs, prostitution and euthanasia; and support the censorship of pornography and what they consider to be obscenity or indecency. Most conservatives in the United States support the death penalty. Religious conservatives principally apply the teachings of particular religions to politics, sometimes by merely proclaiming the value of those teachings, at other times by having those teachings influence laws. In most democracies, political conservatism seeks to uphold traditional family structures and social values.
Religious conservatives typically oppose abortion, homosexual behavior, drug use and sexual activity outside of marriage. In some cases, conservative values are grounded in religious beliefs, and some conservatives seek to increase the role of religion in public life. Progressive conservatism incorporates progressive policies alongside conservative policies. It stresses the importance of a social safety net to deal with poverty, support for limited redistribution of wealth, and government regulation of markets in the interests of both consumers and producers. Progressive conservatism first arose as a distinct ideology in the United Kingdom under Prime Minister Benjamin Disraeli 's "One Nation '' Toryism. There have been a variety of progressive conservative governments. In the United Kingdom, the Prime Ministers Disraeli, Stanley Baldwin, Neville Chamberlain, Winston Churchill, Harold Macmillan and David Cameron were progressive conservatives. In the United States, President William Howard Taft was a progressive conservative who described himself as "a believer in progressive conservatism '', and President Dwight D. Eisenhower declared himself an advocate of "progressive conservatism ''. In Germany, Chancellor Leo von Caprivi promoted a progressive conservative agenda called the "New Course ''. In Canada, a variety of conservative governments have been progressive conservative, with Canada 's major conservative movement being officially named the Progressive Conservative Party of Canada from 1942 to 2003; the Prime Ministers Arthur Meighen, R.B. Bennett, John Diefenbaker, Joe Clark, Brian Mulroney, Kim Campbell and Stephen Harper led progressive conservative federal governments. Authoritarian conservatism refers to autocratic regimes that center their ideology around conservative nationalism rather than ethnic nationalism, though certain racial components such as antisemitism may exist.
Authoritarian conservative movements show strong devotion towards religion, tradition and culture, while also expressing fervent nationalism akin to other far-right nationalist movements. Examples of authoritarian conservative leaders include António de Oliveira Salazar and Engelbert Dollfuss. Authoritarian conservative movements were prominent in the same era as fascism, with which they sometimes clashed. Although both ideologies shared core values such as nationalism and had common enemies such as communism and materialism, there was nonetheless a contrast between the traditionalist nature of authoritarian conservatism and the revolutionary, palingenetic and populist nature of fascism; thus it was common for authoritarian conservative regimes to suppress rising fascist and National Socialist movements. The hostility between the two ideologies is highlighted by the National Socialists' struggle for power in Austria, which was marked by the assassination of Dollfuss. Sociologist Seymour Martin Lipset examined the class basis of right-wing extremist politics in the 1920–1960 era. In Great Britain, conservative ideas (though not yet called that) emerged in the Tory movement during the Restoration period (1660–1688). Toryism supported a hierarchical society with a monarch who ruled by divine right. Tories opposed the idea that sovereignty derived from the people, and rejected the authority of parliament and freedom of religion. Robert Filmer's Patriarcha: or the Natural Power of Kings (published posthumously in 1680, but written before the English Civil War of 1642–1651) became accepted as the statement of their doctrine. However, the Glorious Revolution of 1688 destroyed this principle to some degree by establishing a constitutional government in England, leading to the hegemony of the Tory-opposed Whig ideology.
Faced with defeat, the Tories reformed their movement, now holding that sovereignty was vested in the three estates of Crown, Lords and Commons rather than solely in the Crown. Toryism became marginalized during the long period of Whig ascendancy in the 18th century. Conservatives typically see Richard Hooker (1554–1600) as the founding father of conservatism, along with the Marquess of Halifax (1633–1695), David Hume (1711–1776) and Edmund Burke (1729–1797). Halifax promoted pragmatism in government, whilst Hume argued against political rationalism and utopianism. Burke served as the private secretary to the Marquess of Rockingham and as official pamphleteer to the Rockingham branch of the Whig party. Together with the Tories, they were the conservatives in the late-18th-century United Kingdom. Burke's views were a mixture of liberal and conservative. He supported the American Revolution of 1765–1783, but abhorred the violence of the French Revolution (1789–1799). He accepted the liberal ideals of private property and the economics of Adam Smith (1723–1790), but thought that economics should remain subordinate to the conservative social ethic, that capitalism should be subordinate to the medieval social tradition and that the business class should be subordinate to the aristocracy. He insisted on standards of honor derived from the medieval aristocratic tradition and saw the aristocracy as the nation's natural leaders. That meant limits on the powers of the Crown, since he found the institutions of Parliament to be better informed than commissions appointed by the executive. He favored an established church, but allowed for a degree of religious toleration. Burke justified the social order on the basis of tradition: tradition represented the wisdom of the species, and he valued community and social harmony over social reforms.
Burke was a leading theorist in his day, finding extreme idealism (either Tory or Whig) an endangerment to broader liberties, and (like Hume) rejecting abstract reason as an unsound guide for political theory. Despite their influence on future conservative thought, none of these early contributors were explicitly involved in Tory politics. Hooker lived in the 16th century, long before the advent of Toryism, whilst Hume was an apolitical philosopher and Halifax was similarly politically independent. Burke described himself as a Whig. Shortly after Burke's death in 1797, conservatism revived as a mainstream political force as the Whigs suffered a series of internal divisions. This new generation of conservatives derived their politics not from Burke but from his predecessor, the Viscount Bolingbroke (1678–1751), who was a Jacobite and traditional Tory, lacking Burke's sympathies for Whiggish policies such as Catholic emancipation and American independence (famously attacked by Samuel Johnson in "Taxation No Tyranny"). In the first half of the 19th century many newspapers, magazines and journals promoted loyalist or right-wing attitudes in religion, politics and international affairs. Burke was seldom mentioned, but William Pitt the Younger (1759–1806) became a conspicuous hero. The most prominent journals included The Quarterly Review, founded in 1809 as a counterweight to the Whigs' Edinburgh Review, and the even more conservative Blackwood's Edinburgh Magazine. Sack finds that the Quarterly Review promoted a balanced Canningite Toryism; it was neutral on Catholic emancipation and only mildly critical of Nonconformist Dissent; it opposed slavery and supported the current poor laws. It was "aggressively imperialist". The high-church clergy of the Church of England read the Orthodox Churchman's Magazine, which was equally hostile to Jewish, Catholic, Jacobin, Methodist and Unitarian spokesmen.
Anchoring the ultra-Tories, Blackwood's Edinburgh Magazine stood firmly against Catholic emancipation and favoured slavery, cheap money, mercantilism, the Navigation Acts and the Holy Alliance. Conservatism evolved after 1820, embracing free trade in 1846 and a commitment to democracy, especially under Disraeli. The effect was to significantly strengthen conservatism as a grassroots political force. Conservatism was no longer the philosophical defense of the landed aristocracy, but had been refreshed into redefining its commitment to the ideals of order, both secular and religious, expanding imperialism, strengthened monarchy, and a more generous vision of the welfare state as opposed to the punitive vision of the Whigs and Liberals. As early as 1835, Disraeli attacked the Whigs and utilitarians as slavishly devoted to an industrial oligarchy, while he described his fellow Tories as the only "really democratic party of England", devoted to the interests of the whole people. Nevertheless, inside the party there was a tension between the growing numbers of wealthy businessmen on the one side and the aristocracy and rural gentry on the other. The aristocracy gained strength as businessmen discovered they could use their wealth to buy a peerage and a country estate. Although conservatives opposed attempts to allow greater representation of the middle class in parliament, in 1834 they conceded that electoral reform could not be reversed and promised to support further reforms so long as they did not erode the institutions of church and state. These new principles were presented in the Tamworth Manifesto of 1834, which historians regard as the basic statement of the beliefs of the new Conservative Party. Some conservatives lamented the passing of a pastoral world where the ethos of noblesse oblige had promoted respect from the lower classes. They saw the Anglican Church and the aristocracy as balances against commercial wealth.
They worked toward legislation for improved working conditions and urban housing. This viewpoint would later be called Tory Democracy. However, since Burke there has always been tension between traditional aristocratic conservatism and the wealthy business class. In 1834, Tory Prime Minister Robert Peel issued the Tamworth Manifesto, in which he pledged to endorse moderate political reform. This marked the beginning of the transformation of British conservatism from High Tory reactionism towards a more modern form based on "conservation". The party became known as the Conservative Party as a result, a name it has retained to this day. Peel, however, would also be the root of a split in the party between the traditional Tories (led by the Earl of Derby and Benjamin Disraeli) and the "Peelites" (led first by Peel himself, then by the Earl of Aberdeen). The split occurred in 1846 over the issue of free trade, which Peel supported, versus protectionism, supported by Derby. The majority of the party sided with Derby, whilst about a third split away, eventually merging with the Whigs and the radicals to form the Liberal Party. Despite the split, the mainstream Conservative Party accepted the doctrine of free trade in 1852. In the second half of the 19th century, the Liberal Party faced political schisms, especially over Irish Home Rule. Its leader William Gladstone (himself a former Peelite) sought to give Ireland a degree of autonomy, a move that elements in both the left and right wings of his party opposed. These split off to become the Liberal Unionists (led by Joseph Chamberlain), forming a coalition with the Conservatives before merging with them in 1912. The Liberal Unionist influence dragged the Conservative Party towards the left, with Conservative governments passing a number of progressive reforms at the turn of the 20th century.
By the late 19th century the traditional business supporters of the UK Liberal Party had joined the Conservatives, making them the party of business and commerce. After a period of Liberal dominance before the First World War, the Conservatives gradually became more influential in government, regaining full control of the cabinet in 1922. In the interwar period conservatism was the major ideology in Britain, as the Liberal Party vied with the Labour Party for control of the left. After the Second World War, the first Labour government (1945–1951) under Clement Attlee embarked on a program of nationalization of industry and the promotion of social welfare. The Conservatives generally accepted those policies until the 1980s, when the Conservative government of Margaret Thatcher, guided by neoliberal economics, reversed many of Labour's programmes. Other conservative political parties, such as the United Kingdom Independence Party (founded in 1993) and the Democratic Unionist Party (founded in 1971), began to appear, although they have yet to make any significant impact at Westminster (as of 2014 the DUP was the largest political party in the ruling coalition in the Northern Ireland Assembly). Conservative thought developed alongside nationalism in Germany, culminating in Germany's victory over France in the Franco-Prussian War, the creation of the unified German Empire in 1871 and the simultaneous rise of Otto von Bismarck on the European political stage. Bismarck's "balance of power" model maintained peace in Europe for decades at the end of the 19th century. His "revolutionary conservatism" was a conservative state-building strategy designed to make ordinary Germans, not just the Junker elite, more loyal to state and emperor; he created the modern welfare state in Germany in the 1880s.
According to Kees van Kersbergen and Barbara Vis, his strategy was: granting social rights to enhance the integration of a hierarchical society, to forge a bond between workers and the state so as to strengthen the latter, to maintain traditional relations of authority between social and status groups, and to provide a countervailing power against the modernist forces of liberalism and socialism. Bismarck also enacted universal male suffrage in the new German Empire in 1871. He became a great hero to German conservatives, who erected many monuments to his memory after he left office in 1890. With the rise of Nazism in 1933, agrarian movements faded and were supplanted by a more command-based economy and forced social integration. Though Adolf Hitler succeeded in garnering the support of many German industrialists, prominent traditionalists openly and secretly opposed his policies of euthanasia, genocide and attacks on organized religion, including Claus von Stauffenberg, Dietrich Bonhoeffer, Henning von Tresckow, Bishop Clemens August Graf von Galen and the monarchist Carl Friedrich Goerdeler. More recently, the work of conservative CDU leader and Chancellor Helmut Kohl helped bring about German reunification, along with the closer integration of Europe in the form of the Maastricht Treaty. Today, German conservatism is often associated with politicians such as Chancellor Angela Merkel, whose tenure has been marked by attempts to save the common European currency (the euro) from demise. The German conservatives are divided under Merkel due to the refugee crisis in Germany, with many conservatives opposing her refugee policies. American conservatism is a broad system of political beliefs in the United States that is characterized by respect for American traditions, support for Judeo-Christian values, economic liberalism, anti-communism and a defense of Western culture.
Liberty is a core value, with a particular emphasis on strengthening the free market, limiting the size and scope of government, and opposition to high taxes and government or labor union encroachment on the entrepreneur. American conservatives consider individual liberty, within the bounds of conformity to American values, as the fundamental trait of democracy, which contrasts with modern American liberals, who generally place a greater value on social equity and social justice. Another form of conservatism developed in France in parallel to conservatism in Britain. It was influenced by Counter-Enlightenment works by men such as Joseph de Maistre and Louis de Bonald. Latin conservatism was less pragmatic and more reactionary than the conservatism of Burke. Many continental or traditionalist conservatives do not support separation of church and state, with most supporting state recognition of and cooperation with the Catholic Church, such as had existed in France before the Revolution. Eventually conservatives added patriotism and nationalism to the list of traditional values they support. Conservatives were the first to embrace nationalism, which was previously associated with liberalism and the Revolution in France. Conservative political parties vary widely from country to country in the goals they wish to achieve. Both conservative and liberal parties tend to favor private ownership of property, in opposition to communist, socialist and green parties, which favor communal ownership or laws requiring social responsibility on the part of property owners. Where conservatives and liberals differ is primarily on social issues. Conservatives tend to reject behavior that does not conform to some social norm. Modern conservative parties often define themselves by their opposition to liberal or labor parties. The United States usage of the term conservative is unique to that country.
According to Alan Ware, Belgium, Denmark, Finland, France, Greece, Iceland, Luxembourg, the Netherlands, Norway, Sweden, Switzerland and the UK retained viable conservative parties into the 1980s. Ware said that Australia, Germany, Israel, Italy, Japan, Malta, New Zealand, Spain and the US had no conservative parties, although they had either Christian democrats or liberals as major right-wing parties. Canada, Ireland and Portugal had right-wing political parties that defied categorization: the Progressive Conservative Party of Canada; Fianna Fáil, Fine Gael and the Progressive Democrats in Ireland; and the Social Democratic Party of Portugal. Since then, the Swiss People's Party has moved to the extreme right and is no longer considered to be conservative. Klaus von Beyme, who developed the method of party categorization, found that no modern Eastern European parties could be considered conservative, although the communist and communist-successor parties had strong similarities. In Italy, which was united by liberals and radicals (the Risorgimento), liberals, not conservatives, emerged as the party of the right. In the Netherlands, conservatives merged into a new Christian democratic party in 1980. In Austria, Germany, Portugal and Spain, conservatism was transformed into and incorporated into fascism or the far right. In 1940, all Japanese parties were merged into a single fascist party. Following the war, Japanese conservatives briefly returned to politics but were largely purged from public office. Louis Hartz explained the absence of conservatism in Australia and the United States as a result of their settlement as radical or liberal fragments of Great Britain. Although he said English Canada had a negligible conservative influence, subsequent writers claimed that loyalists opposed to the American Revolution brought a Tory ideology into Canada. Hartz explained conservatism in Quebec and Latin America as a result of their settlement as feudal societies.
The American conservative writer Russell Kirk held that conservatism had been brought to the US, and he interpreted the American Revolution as a "conservative revolution". Conservative elites have long dominated Latin American nations. Mostly this has been achieved through control of and support for civil institutions, the church and the armed forces, rather than through party politics. Typically the church was exempt from taxes and its employees immune from civil prosecution. Where national conservative parties were weak or non-existent, conservatives were more likely to rely on military dictatorship as a preferred form of government. However, in some nations where the elites were able to mobilize popular support for conservative parties, longer periods of political stability were achieved. Chile, Colombia and Venezuela are examples of nations that developed strong conservative parties. Argentina, Brazil, El Salvador and Peru are examples of nations where this did not occur. The Conservative Party of Venezuela disappeared following the Federal Wars of 1858–1863. Chile's conservative party, the National Party, disbanded in 1973 following a military coup and did not re-emerge as a political force after the subsequent return to democracy. In Belgium, having its roots in the conservative Catholic Party, the Christian People's Party retained a conservative edge through the twentieth century, supporting the king in the Royal Question, supporting the nuclear family as the cornerstone of society, defending Christian education and opposing euthanasia. The Christian People's Party dominated politics in post-war Belgium. In 1999, the party's support collapsed and it became the country's fifth-largest party. Currently the N-VA (Nieuw-Vlaamse Alliantie, New Flemish Alliance) is the largest party in Belgium. Canada's conservatives had their roots in the Loyalists (Tories) who left America after the American Revolution.
They developed in the socio-economic and political cleavages that existed during the first three decades of the 19th century, and had the support of the business, professional and established-church (Anglican) elites in Ontario and, to a lesser extent, in Quebec. Holding a monopoly over administrative and judicial offices, they were called the "Family Compact" in Ontario and the "Chateau Clique" in Quebec. John A. Macdonald's successful leadership of the movement to confederate the provinces, and his subsequent tenure as prime minister for most of the late 19th century, rested on his ability to bring together the English-speaking Protestant oligarchy and the ultramontane Catholic hierarchy of Quebec and to keep them united in a conservative coalition. The Conservatives combined pro-market liberalism and Toryism. They generally supported an activist government and state intervention in the marketplace, and their policies were marked by noblesse oblige, a paternalistic responsibility of the elites for the less well-off. From 1942 the party was known as the Progressive Conservatives, until 2003, when the national party merged with the Canadian Alliance to form the Conservative Party of Canada. The conservative Union Nationale governed the province of Quebec in periods from 1936 to 1960, in a close alliance with English Canadian business elites and the Catholic Church. This period, known as the Great Darkness, ended with the Quiet Revolution, and the party went into terminal decline. The Colombian Conservative Party, founded in 1849, traces its origins to opponents of General Francisco de Paula Santander's 1833–37 administration. While the term "liberal" had been used to describe all political forces in Colombia, the conservatives began describing themselves as "conservative liberals" and their opponents as "red liberals".
From the 1860s until the present, the party has supported strong central government and the Catholic Church, especially its role as protector of the sanctity of the family, and has opposed separation of church and state. Its policies include the legal equality of all men, the citizen's right to own property and opposition to dictatorship. It has usually been Colombia's second-largest party, with the Colombian Liberal Party being the largest. Founded in 1915, the Conservative People's Party of Denmark was the successor of Højre (literally "Right"). The conservative party led the government coalition from 1982 to 1993, and was a junior partner in coalition with the Liberals from 2001 to 2011. The party was preceded by 11 years by the Young Conservatives (KU), today the youth movement of the party. The party suffered a major defeat in the parliamentary elections of September 2011, in which it lost more than half of its seats as well as governmental power. A liberal cultural policy dominated during the postwar period; however, by the 1990s disagreements regarding immigrants from entirely different cultures ignited a conservative backlash. The conservative party in Finland is the National Coalition Party (in Finnish, Kansallinen Kokoomus, Kok). The party was founded in 1918, when several monarchist parties united. Although in the past the party was right-wing, today it is a moderate liberal conservative party. While the party advocates economic liberalism, it is committed to the social market economy. Conservatism in France focused on the rejection of the French Revolution, support for the Catholic Church and the restoration of the monarchy. The monarchist cause was on the verge of victory in the 1870s, but then collapsed because of disagreements over who would be king and what the national flag should be. Religious tensions heightened in the 1890–1910 era, but moderated after the spirit of unity in fighting the First World War.
An extreme form of conservatism characterized the Vichy regime of 1940–1944, with heightened anti-Semitism, opposition to individualism, emphasis on family life and national direction of the economy. Following the Second World War, conservatives in France supported Gaullist groups, have been nationalistic, and have emphasized tradition, order and the regeneration of France. Gaullists held divergent views on social issues. The number of conservative groups, their lack of stability and their tendency to be identified with local issues defy simple categorization. Conservatism has been the major political force in France since the Second World War. Unusually, post-war French conservatism was formed around the personality of a leader, Charles de Gaulle, and did not draw on traditional French conservatism, but on the Bonapartist tradition. Gaullism in France continues under Les Républicains (formerly the Union for a Popular Movement). The word "conservative" itself is a term of abuse in France. In Greece, the main interwar conservative party was the People's Party (PP), which supported constitutional monarchy and opposed the republican Liberal Party. Both it and the Liberal Party were suppressed by the authoritarian, arch-conservative and royalist 4th of August Regime of Ioannis Metaxas in 1936–41. The PP was able to regroup after the Second World War as part of a United Nationalist Front, which achieved power campaigning on a simple anticommunist, ultranationalist platform during the Greek Civil War (1946–49). However, the vote received by the PP declined during the so-called "Centrist Interlude" of 1950–52. In 1952, Marshal Alexandros Papagos created the Greek Rally as an umbrella for the right-wing forces. The Greek Rally came to power in 1952 and remained the leading party in Greece until 1963; after Papagos' death in 1955 it was reformed as the National Radical Union under Konstantinos Karamanlis.
Right-wing governments backed by the palace and the army overthrew the Centre Union government in 1965 and governed the country until the establishment of the far-right Regime of the Colonels (1967–74). After the regime's collapse in August 1974, Karamanlis returned from exile to lead the government and founded the New Democracy party. The new conservative party had four objectives: to confront Turkish expansionism in Cyprus, to reestablish and solidify democratic rule, to give the country a strong government and to make a powerful moderate party a force in Greek politics. The Independent Greeks, a newly formed political party in Greece, has also supported conservatism, particularly national and religious conservatism. The Founding Declaration of the Independent Greeks strongly emphasises the preservation of the Greek state and its sovereignty, the Greek people and the Greek Orthodox Church. Founded in 1924 as the Conservative Party, Iceland's Independence Party adopted its current name in 1929 after the merger with the Liberal Party. From the beginning it has been the largest vote-winning party, averaging around 40%. It combined liberalism and conservatism, supported nationalization of infrastructure and opposed class conflict. While mostly in opposition during the 1930s, it embraced economic liberalism, but accepted the welfare state after the war and participated in governments supportive of state intervention and protectionism. Unlike other Scandinavian conservative (and liberal) parties, it has always had a large working-class following. Since the financial crisis of 2008, the party's support has sunk to around 20–25%. In Italy after the Second World War, conservative ideas were mainly represented by Christian Democracy (DC), whose governments formed the foundation of the Republic until the party's dissolution in 1994.
Officially, DC rejected the ideology of conservatism, but in many respects, for example on family values, it was a typical social conservative party. In 1994 the media tycoon and entrepreneur Silvio Berlusconi founded the liberal conservative party Forza Italia. Berlusconi won three elections, in 1994, 2001 and 2008, governing the country for almost ten years as Prime Minister. While in government, Forza Italia formed a coalition with the right-wing regional party Lega Nord. Besides Forza Italia, conservative ideas are now mainly expressed by the New Centre-Right party led by Angelino Alfano, while Berlusconi has formed a new party, the reborn Forza Italia, founding a new conservative movement. Alfano is the current Minister of Foreign Affairs. Luxembourg's major Christian democratic conservative party, the Christian Social People's Party (CSV or PCS), was formed as the Party of the Right in 1914 and adopted its present name in 1945. It was consistently the largest political party in Luxembourg and dominated politics throughout the 20th century. The Conservative Party of Norway (Norwegian: Høyre, literally "right") was formed by the old upper class of state officials and wealthy merchants to fight the populist democracy of the Liberal Party, but lost power in 1884, when parliamentary government was first practised. It formed its first government under parliamentarism in 1889 and continued to alternate in power with the Liberals until the 1930s, when Labour became the dominant political party. It has elements both of paternalism, stressing the responsibilities of the state, and of economic liberalism. It first returned to power in the 1960s.
During Kåre Willoch's premiership in the 1980s, much emphasis was placed on liberalizing the credit and housing markets and abolishing the NRK TV and radio monopoly, while supporting law and order in criminal justice and traditional norms in education. Sweden's conservative party, the Moderate Party, was formed in 1904, two years after the founding of the liberal party. The party emphasizes tax reductions, deregulation of private enterprise, and privatization of schools, hospitals and kindergartens. There are a number of conservative parties in Switzerland's parliament, the Federal Assembly. These include the largest, the Swiss People's Party (SVP), the Christian Democratic People's Party (CVP) and the Conservative Democratic Party of Switzerland (BDP), a splinter of the SVP created in the aftermath of the election of Eveline Widmer-Schlumpf to the Federal Council. The right-wing parties have a majority in the Federal Assembly. The Swiss People's Party (SVP or UDC) was formed from the 1971 merger of the Party of Farmers, Traders, and Citizens, formed in 1917, and the smaller Swiss Democratic Party, formed in 1942. The SVP emphasized agricultural policy and was strong among farmers in German-speaking Protestant areas. As Switzerland considered closer relations with the European Union in the 1990s, the SVP adopted a more militant protectionist and isolationist stance. This stance has allowed it to expand into German-speaking Catholic mountainous areas. The Anti-Defamation League, a non-Swiss lobby group based in the US, has accused the SVP of manipulating issues such as immigration, Swiss neutrality and welfare benefits, awakening anti-Semitism and racism. The Council of Europe has called the SVP "extreme right", although some scholars dispute this classification; Hans-Georg Betz, for example, describes it as "populist radical right". According to historian James Sack, English conservatives celebrate Edmund Burke as their intellectual father.
Burke was affiliated with the Whig Party, which eventually became the Liberal Party, but the modern Conservative Party is generally thought to derive from the Tory party, and the MPs of the modern Conservative Party are still frequently referred to as Tories. While conservatism has been seen as an appeal to traditional, hierarchical society, some writers, such as Samuel P. Huntington, see it as situational. Under this definition, conservatives are seen as defending the established institutions of their time. The Liberal Party of Australia adheres to the principles of social conservatism and liberal conservatism; it is liberal in the economic sense. Other conservative parties are the National Party of Australia, a sister party of the Liberals, the Family First Party, the Democratic Labor Party, the Shooters Party, the Australian Conservatives and Katter's Australian Party. The dominant faction of the second-largest party in the country, the Australian Labor Party, is Labor Right, a socially conservative element. Australia undertook significant economic reform under the Labor Party in the mid-1980s; consequently, issues like protectionism, welfare reform, privatization and deregulation are no longer debated in the political space as they are in Europe or North America. Moser and Catley explain: "In America, 'liberal' means left-of-center, and it is a pejorative term when used by conservatives in adversarial political debate. In Australia, of course, the conservatives are in the Liberal Party." Jupp points out that "(the) decline in English influences on Australian reformism and radicalism, and appropriation of the symbols of Empire by conservatives continued under the Liberal Party leadership of Sir Robert Menzies, which lasted until 1966." Conservatism in Brazil originates from the cultural and historical tradition of Brazil, whose cultural roots are Luso-Iberian and Roman Catholic.
Brazilian conservatism from the 20th century on includes names such as Gerardo Melo Mourão and Otto Maria Carpeaux in literature; Oliveira Lima and Oliveira Torres in historiography; Sobral Pinto and Miguel Reale in law; Plinio Corrêa de Oliveira and Father Paulo Ricardo in the Catholic Church; Roberto Campos and Mario Henrique Simonsen in economics; Carlos Lacerda in the political arena; and Olavo de Carvalho in philosophy. In India, the Bharatiya Janata Party (BJP) represents conservative politics nationally and is the largest right-wing conservative party. Under Vladimir Putin, the dominant leader since 1999, Russia has promoted explicitly conservative policies in social, cultural and political matters, both at home and abroad. Putin has attacked globalism and economic liberalism, as well as scientific and technological progress. Putin has promoted new think tanks that bring together like-minded intellectuals and writers. For example, the Izborsky Club, founded in 2012 by Aleksandr Prokhanov, stresses Russian nationalism, the restoration of Russia's historical greatness, and systematic opposition to liberal ideas and policies. Vladislav Surkov, a senior government official, has been one of the key ideologists during Putin's presidency. In cultural and social affairs Putin has collaborated closely with the Russian Orthodox Church. Mark Woods provides specific examples of how the Church under Patriarch Kirill of Moscow has backed the expansion of Russian power into Crimea and eastern Ukraine. More broadly, the New York Times reported in September 2016 on how that Church's policy prescriptions support the Kremlin's appeal to social conservatives. South Korea's major conservative party, the Saenuri Party (새누리당), has changed its form throughout its history.
First it was the Democratic-Republican Party (1963–1980); its head was Park Chung-hee, who seized power in a 1961 military coup d'état and ruled as an unelected military strongman until his formal election as president in 1963. He was president for 16 years, until his assassination on October 26, 1979. The Democratic Justice Party inherited the same ideology as the Democratic-Republican Party. Its head, Chun Doo-hwan, also gained power through a coup. His followers called themselves the Hanahoe. The Democratic Justice Party changed its form and acted to suppress the opposition party and to follow the people's demand for direct elections. The party's Roh Tae-woo became the first president elected through direct election. The next form of the major conservative party was the Democratic Liberal Party. Again through election, its second leader, Kim Young-sam, became the fourteenth president of Korea. When the conservative party was beaten by the opposition party in the general election, it changed its form again to follow the party members' demand for reforms, becoming the New Korea Party. It changed again one year later, after President Kim Young-sam was blamed by citizens for the IMF crisis, taking the name Grand National Party (Hannara-dang). From the time the late Kim Dae-jung assumed the presidency in 1998, the GNP remained the opposition party until Lee Myung-bak won the presidential election of 2007. The meaning of "conservatism" in the United States has little in common with the way the word is used elsewhere. As Ribuffo (2011) notes, "what Americans now call conservatism much of the world calls liberalism or neoliberalism." Since the 1950s conservatism in the United States has been chiefly associated with the Republican Party.
However, during the era of segregation many Southern Democrats were conservatives, and they played a key role in the Conservative Coalition that largely controlled domestic policy in Congress from 1937 to 1963. Major priorities within American conservatism include support for tradition, law and order, Christianity, anti-communism, and a defense of "Western civilization from the challenges of modernist culture and totalitarian governments." Economic conservatives and libertarians favor small government, low taxes, limited regulation, and free enterprise. Some social conservatives see traditional social values threatened by secularism, so they support school prayer and oppose abortion and homosexuality. Neoconservatives want to expand American ideals throughout the world and show strong support for Israel. Paleoconservatives, in opposition to multiculturalism, press for restrictions on immigration. Most US conservatives prefer Republicans over Democrats, and most factions favor a strong foreign policy and a strong military. The conservative movement of the 1950s attempted to bring together these divergent strands, stressing the need for unity to prevent the spread of "Godless Communism," which Reagan later labeled an "evil empire." During the Reagan administration, conservatives also supported the so-called "Reagan Doctrine," under which the US, as part of a Cold War strategy, provided military and other support to guerrilla insurgencies that were fighting governments identified as socialist or communist. Other modern conservative positions include opposition to world government and opposition to environmentalism. On average, American conservatives desire tougher foreign policies than liberals do. Most recently, the Tea Party movement, founded in 2009, has proven a large outlet for populist American conservative ideas.
Their stated goals include rigorous adherence to the US Constitution, lower taxes, and opposition to a growing role for the federal government in health care. Electorally, it was considered a key force in Republicans reclaiming control of the US House of Representatives in 2010. This is a broad checklist of modern conservatism in seven countries. Following the Second World War, psychologists conducted research into the different motives and tendencies that account for ideological differences between left and right. The early studies focused on conservatives, beginning with Theodor W. Adorno's The Authoritarian Personality (1950), based on the F-scale personality test. This book has been heavily criticized on theoretical and methodological grounds, but some of its findings have been confirmed by further empirical research. In 1973, British psychologist Glenn Wilson published an influential book providing evidence that a general factor underlying conservative beliefs is "fear of uncertainty." A meta-analysis of the research literature by Jost, Glaser, Kruglanski, and Sulloway in 2003 found that many factors, such as intolerance of ambiguity and need for cognitive closure, contribute to the degree of one's political conservatism. A study by Kathleen Maclay stated these traits "might be associated with such generally valued characteristics as personal commitment and unwavering loyalty." The research also suggested that while most people are resistant to change, liberals are more tolerant of it. According to psychologist Bob Altemeyer, individuals who are politically conservative tend to rank high in right-wing authoritarianism on his RWA scale. This finding was echoed by Theodor Adorno. A study done on Israeli and Palestinian students in Israel found that RWA scores of right-wing party supporters were significantly higher than those of left-wing party supporters. However, a 2005 study by H.
Michael Crowson and colleagues suggested a moderate gap between RWA and other conservative positions: "The results indicated that conservatism is not synonymous with RWA." Psychologist Felicia Pratto and her colleagues have found evidence to support the idea that a high social dominance orientation (SDO) is strongly correlated with conservative political views and opposition to social engineering to promote equality, though Pratto's findings have been highly controversial. Pratto and her colleagues found that high SDO scores were highly correlated with measures of prejudice. David J. Schneider, however, argued for a more complex relationship between the three factors, writing that "correlations between prejudice and political conservatism are reduced virtually to zero when controls for SDO are instituted, suggesting that the conservatism–prejudice link is caused by SDO." Kenneth Minogue criticized Pratto's work, saying: "It is characteristic of the conservative temperament to value established identities, to praise habit and to respect prejudice, not because it is irrational, but because such things anchor the darting impulses of human beings in solidities of custom which we do not often begin to value until we are already losing them. Radicalism often generates youth movements, while conservatism is a condition found among the mature, who have discovered what it is in life they most value." A 1996 study on the relationship between racism and conservatism found that the correlation was stronger among more educated individuals, though "anti-Black affect had essentially no relationship with political conservatism at any level of educational or intellectual sophistication." It also found that the correlation between racism and conservatism could be entirely accounted for by their mutual relationship with social dominance orientation.
A 2008 research report found that conservatives are happier than liberals, and that as income inequality increases, this difference in relative happiness increases, because conservatives (more than liberals) possess an ideological buffer against the negative hedonic effects of economic inequality.
how many times teams won fifa world cup
FIFA World Cup - Wikipedia The FIFA World Cup, often simply called the World Cup, is an international association football competition contested by the senior men's national teams of the members of the Fédération Internationale de Football Association (FIFA), the sport's global governing body. The championship has been awarded every four years since the inaugural tournament in 1930, except in 1942 and 1946, when it was not held because of the Second World War. The current champion is France, which won its second title at the 2018 tournament in Russia. The current format of the competition involves a qualification phase, which currently takes place over the preceding three years, to determine which teams qualify for the tournament phase, often called the World Cup Finals. After this, 32 teams, including the automatically qualifying host nation(s), compete in the tournament phase for the title at venues within the host nation(s) over a period of about a month. The 21 World Cup tournaments have been won by eight national teams. Brazil have won five times, and they are the only team to have played in every tournament. The other World Cup winners are Germany and Italy, with four titles each; Argentina, France and inaugural winner Uruguay, with two titles each; and England and Spain, with one title each. The World Cup is the most prestigious association football tournament in the world, as well as the most widely viewed and followed sporting event in the world, exceeding even the Olympic Games; the cumulative viewership of all matches of the 2006 World Cup was estimated at 26.29 billion, with an estimated 715.1 million people watching the final match, a ninth of the entire population of the planet. 17 countries have hosted the World Cup.
Brazil, France, Italy, Germany and Mexico have each hosted twice, while Uruguay, Switzerland, Sweden, Chile, England, Argentina, Spain, the United States, Japan and South Korea (jointly), South Africa and Russia have each hosted once. Qatar is planned to host the 2022 finals, and the 2026 finals will be jointly hosted by Canada, the United States and Mexico, which will give Mexico the distinction of being the first country to have hosted games in three different finals. The world's first international football match was a challenge match played in Glasgow in 1872 between Scotland and England, which ended in a 0–0 draw. The first international tournament, the inaugural British Home Championship, took place in 1884. As football grew in popularity in other parts of the world at the start of the 20th century, it was held as a demonstration sport with no medals awarded at the 1900 and 1904 Summer Olympics (however, the IOC has retroactively upgraded their status to official events), and at the 1906 Intercalated Games. After FIFA was founded in 1904, it tried to arrange an international football tournament between nations outside the Olympic framework in Switzerland in 1906. These were very early days for international football, and the official history of FIFA describes the competition as having been a failure. At the 1908 Summer Olympics in London, football became an official competition. Planned by The Football Association (FA), England's football governing body, the event was for amateur players only and was regarded suspiciously as a show rather than a competition. Great Britain (represented by the England national amateur football team) won the gold medals. They repeated the feat at the 1912 Summer Olympics in Stockholm. With the Olympic event continuing to be contested only between amateur teams, Sir Thomas Lipton organised the Sir Thomas Lipton Trophy tournament in Turin in 1909.
The Lipton tournament was a championship between individual clubs (not national teams) from different nations, each of which represented an entire nation. The competition is sometimes described as the first World Cup, and featured the most prestigious professional club sides from Italy, Germany and Switzerland, but the FA of England refused to be associated with the competition and declined the offer to send a professional team. Lipton invited West Auckland, an amateur side from County Durham, to represent England instead. West Auckland won the tournament and returned in 1911 to successfully defend their title. In 1914, FIFA agreed to recognise the Olympic tournament as a "world football championship for amateurs", and took responsibility for managing the event. This paved the way for the world's first intercontinental football competition, at the 1920 Summer Olympics, contested by Egypt and 13 European teams, and won by Belgium. Uruguay won the next two Olympic football tournaments, in 1924 and 1928. Those were also the first two open world championships, as 1924 marked the start of FIFA's professional era. Due to the success of the Olympic football tournaments, FIFA, with President Jules Rimet as the driving force, again started looking at staging its own international tournament outside of the Olympics. On 28 May 1928, the FIFA Congress in Amsterdam decided to stage a world championship itself. With Uruguay now two-time official football world champions, and to celebrate their centenary of independence in 1930, FIFA named Uruguay as the host country of the inaugural World Cup tournament. The national associations of selected nations were invited to send a team, but the choice of Uruguay as a venue for the competition meant a long and costly trip across the Atlantic Ocean for European sides. Indeed, no European country pledged to send a team until two months before the start of the competition.
Rimet eventually persuaded teams from Belgium, France, Romania, and Yugoslavia to make the trip. In total, 13 nations took part: seven from South America, four from Europe and two from North America. The first two World Cup matches took place simultaneously on 13 July 1930, and were won by France and the USA, who defeated Mexico 4–1 and Belgium 3–0 respectively. The first goal in World Cup history was scored by Lucien Laurent of France. In the final, Uruguay defeated Argentina 4–2 in front of 93,000 people in Montevideo, and became the first nation to win the World Cup. After the creation of the World Cup, FIFA and the IOC disagreed over the status of amateur players, and so football was dropped from the 1932 Summer Olympics. Olympic football returned at the 1936 Summer Olympics, but was now overshadowed by the more prestigious World Cup. The issues facing the early World Cup tournaments were the difficulties of intercontinental travel, and war. Few South American teams were willing to travel to Europe for the 1934 World Cup, and all North and South American nations except Brazil and Cuba boycotted the 1938 tournament. Brazil was the only South American team to compete in both. The 1942 and 1946 competitions, which Germany and Brazil sought to host, were cancelled due to World War II and its aftermath. The 1950 World Cup, held in Brazil, was the first to include British participants. British teams had withdrawn from FIFA in 1920, partly out of unwillingness to play against the countries they had been at war with, and partly as a protest against foreign influence on football, but rejoined in 1946 following FIFA's invitation. The tournament also saw the return of 1930 champions Uruguay, who had boycotted the previous two World Cups. Uruguay won the tournament again after defeating the host nation Brazil, in the match known as the "Maracanazo" (Portuguese: Maracanaço).
In the tournaments between 1934 and 1978, 16 teams competed in each tournament, except in 1938, when Austria was absorbed into Germany after qualifying, leaving the tournament with 15 teams, and in 1950, when India, Scotland, and Turkey withdrew, leaving the tournament with 13 teams. Most of the participating nations were from Europe and South America, with a small minority from North America, Africa, Asia, and Oceania. These teams were usually defeated easily by the European and South American teams. Until 1982, the only teams from outside Europe and South America to advance out of the first round were: the USA, semi-finalists in 1930; Cuba, quarter-finalists in 1938; North Korea, quarter-finalists in 1966; and Mexico, quarter-finalists in 1970. The tournament was expanded to 24 teams in 1982, and then to 32 in 1998, also allowing more teams from Africa, Asia and North America to take part. Since then, teams from these regions have enjoyed more success, with several having reached the quarter-finals: Mexico, quarter-finalists in 1986; Cameroon, quarter-finalists in 1990; South Korea, finishing in fourth place in 2002; Senegal, along with the USA, both quarter-finalists in 2002; Ghana, quarter-finalists in 2010; and Costa Rica, quarter-finalists in 2014. Nevertheless, European and South American teams continue to dominate; for example, the quarter-finalists in 1994, 1998, 2006 and 2018 were all from Europe or South America, as were the finalists of all tournaments so far. Two hundred teams entered the 2002 FIFA World Cup qualification rounds; 198 nations attempted to qualify for the 2006 FIFA World Cup, while a record 204 countries entered qualification for the 2010 FIFA World Cup. In October 2013, Sepp Blatter spoke of guaranteeing the Caribbean Football Union's region a position in the World Cup.
In the 25 October 2013 edition of the FIFA Weekly, Blatter wrote: "From a purely sporting perspective, I would like to see globalisation finally taken seriously, and the African and Asian national associations accorded the status they deserve at the FIFA World Cup. It cannot be that the European and South American confederations lay claim to the majority of the berths at the World Cup." Those two remarks suggested to commentators that Blatter could be putting himself forward for re-election to the FIFA Presidency. Following the magazine's publication, Blatter's would-be opponent for the FIFA Presidency, UEFA President Michel Platini, responded that he intended to extend the World Cup to 40 national associations, increasing the number of participants by eight. Platini said that he would allocate an additional berth to UEFA, two each to the Asian Football Confederation and the Confederation of African Football, two shared between CONCACAF and CONMEBOL, and a guaranteed place for the Oceania Football Confederation. Platini was clear about why he wanted to expand the World Cup. He said: "(The World Cup is) not based on the quality of the teams because you don't have the best 32 at the World Cup... but it's a good compromise... It's a political matter so why not have more Africans? The competition is to bring all the people of all the world. If you don't give the possibility to participate, they don't improve." In October 2016, FIFA president Gianni Infantino stated his support for a 48-team World Cup in 2026. On 10 January 2017, FIFA confirmed that the 2026 World Cup will have 48 finalist teams. By May 2015, the games were under a particularly dark cloud because of the 2015 FIFA corruption case: allegations and criminal charges of bribery, fraud and money laundering to corrupt the issuing of media and marketing rights (rigged bids) for FIFA games, with FIFA officials accused of taking bribes totaling more than $150 million over 24 years.
In late May, the U.S. Justice Department announced a 47-count indictment with charges of racketeering, wire fraud and money laundering conspiracy against 14 people. Arrests of over a dozen FIFA officials have been made since that time, particularly on 29 May and 3 December. By the end of May 2015, a total of nine FIFA officials and five executives of sports and broadcasting markets had already been charged with corruption. At the time, FIFA president Sepp Blatter announced he would relinquish his position in February 2016. On 4 June 2015, Chuck Blazer, while co-operating with the FBI and the Swiss authorities, admitted that he and the other members of FIFA's then-executive committee were bribed in order to promote the 1998 and 2010 World Cups. On 10 June 2015, Swiss authorities seized computer data from the offices of Sepp Blatter. The same day, FIFA postponed the bidding process for the 2026 FIFA World Cup in light of the allegations surrounding bribery in the awarding of the 2018 and 2022 tournaments. Then-secretary general Jérôme Valcke stated, "Due to the situation, I think it's nonsense to start any bidding process for the time being." On 28 October 2015, Blatter and FIFA VP Michel Platini, a potential candidate for the presidency, were suspended for 90 days; both maintained their innocence in statements made to the news media. On 3 December 2015, two FIFA vice-presidents were arrested on suspicion of bribery in the same Zurich hotel where seven FIFA officials had been arrested in May. An additional 16 indictments by the U.S. Department of Justice were announced on the same day. An equivalent tournament for women's football, the FIFA Women's World Cup, was first held in 1991 in China. The women's tournament is smaller in scale and profile than the men's, but is growing; the number of entrants for the 2007 tournament was 120, more than double that of 1991. Men's football has been included in every Summer Olympic Games except 1896 and 1932.
Unlike many other sports, the men's football tournament at the Olympics is not a top-level tournament; since 1992 it has been an under-23 tournament, with each team allowed three over-age players. Women's football made its Olympic debut in 1996. The FIFA Confederations Cup is a tournament held one year before the World Cup at the World Cup host nation(s) as a dress rehearsal for the upcoming World Cup. It is contested by the winners of each of the six FIFA confederation championships, along with the FIFA World Cup champion and the host country. FIFA also organises international tournaments for youth football (FIFA U-20 World Cup, FIFA U-17 World Cup, FIFA U-20 Women's World Cup, FIFA U-17 Women's World Cup), club football (FIFA Club World Cup), and football variants such as futsal (FIFA Futsal World Cup) and beach soccer (FIFA Beach Soccer World Cup). The latter three do not have a women's version, although a FIFA Women's Club World Cup has been proposed. The FIFA U-20 Women's World Cup is held the year before each Women's World Cup, and both tournaments are awarded in a single bidding process. The U-20 tournament serves as a dress rehearsal for the larger competition. From 1930 to 1970, the Jules Rimet Trophy was awarded to the World Cup winning team. It was originally simply known as the World Cup or Coupe du Monde, but in 1946 it was renamed after FIFA president Jules Rimet, who set up the first tournament. In 1970, Brazil's third victory in the tournament entitled them to keep the trophy permanently. However, the trophy was stolen in 1983 and has never been recovered, apparently melted down by the thieves. After 1970, a new trophy, known as the FIFA World Cup Trophy, was designed. The experts of FIFA, coming from seven countries, evaluated the 53 presented models, finally opting for the work of the Italian designer Silvio Gazzaniga. The new trophy is 36 cm (14.2 in) high, made of solid 18-carat (75%) gold, and weighs 6.175 kg (13.6 lb).
The base contains two layers of semi-precious malachite, while the bottom side of the trophy bears the engraved year and name of each FIFA World Cup winner since 1974. Gazzaniga described the trophy as follows: "The lines spring out from the base, rising in spirals, stretching out to receive the world. From the remarkable dynamic tensions of the compact body of the sculpture rise the figures of two athletes at the stirring moment of victory." This new trophy is not awarded to the winning nation permanently. World Cup winners retain the trophy only until the post-match celebration is finished; they are awarded a gold-plated replica rather than the solid gold original immediately afterwards. Currently, all members (players, coaches, and managers) of the top three teams receive medals with an insignia of the World Cup Trophy: gold for the winners, silver for the runners-up, and bronze for the third-placed team. In the 2002 edition, fourth-place medals were awarded to hosts South Korea. Before the 1978 tournament, medals were only awarded to the eleven players on the pitch at the end of the final and the third-place match. In November 2007, FIFA announced that all members of World Cup-winning squads between 1930 and 1974 were to be retroactively awarded winners' medals. Since the second World Cup in 1934, qualifying tournaments have been held to thin the field for the final tournament. They are held within the six FIFA continental zones (Africa, Asia, North and Central America and the Caribbean, South America, Oceania, and Europe), overseen by their respective confederations. For each tournament, FIFA decides the number of places awarded to each of the continental zones beforehand, generally based on the relative strength of the confederations' teams. The qualification process can start almost three years before the final tournament and last over a two-year period. The formats of the qualification tournaments differ between confederations.
Usually, one or two places are awarded to winners of intercontinental play-offs. For example, the winner of the Oceanian zone and the fifth-placed team from the Asian zone entered a play-off for a spot in the 2010 World Cup. From the 1938 World Cup onwards, host nations have received automatic qualification to the final tournament. This right was also granted to the defending champions between 1938 and 2002, but was withdrawn from the 2006 FIFA World Cup onward, requiring the champions to qualify. Brazil, winners in 2002, were the first defending champions to play qualifying matches. The current final tournament format has been used since 1998 and features 32 national teams competing over the course of a month in the host nation(s). There are two stages: the group stage followed by the knockout stage. In the group stage, teams compete within eight groups of four teams each. Eight teams are seeded, including the hosts, with the other seeded teams selected using a formula based on the FIFA World Rankings and/or performances in recent World Cups, and drawn to separate groups. The other teams are assigned to different "pots", usually based on geographical criteria, and teams in each pot are drawn at random to the eight groups. Since 1998, constraints have been applied to the draw to ensure that no group contains more than two European teams or more than one team from any other confederation. Each group plays a round-robin tournament, in which each team is scheduled for three matches against the other teams in the same group, so a total of six matches are played within a group. The last round of matches of each group is scheduled at the same time to preserve fairness among all four teams. The top two teams from each group advance to the knockout stage. Points are used to rank the teams within a group. Since 1994, three points have been awarded for a win, one for a draw and none for a loss (previously, winners received two points).
If one considers all possible outcomes (win, draw, loss) for all six matches in a group, there are 729 (= 3^6) possible outcome combinations. However, 207 of these combinations lead to ties between the second and third places. In such cases, the ranking among these teams is determined by further tiebreaking criteria. The knockout stage is a single-elimination tournament in which teams play each other in one-off matches, with extra time and penalty shootouts used to decide the winner if necessary. It begins with the round of 16 (or the second round), in which the winner of each group plays against the runner-up of another group. This is followed by the quarter-finals, the semi-finals, the third-place match (contested by the losing semi-finalists), and the final. On 10 January 2017, FIFA approved a new format, the 48-team World Cup (to accommodate more teams), which consists of 16 groups of three teams each, with two teams qualifying from each group to form a round-of-32 knockout stage, to be implemented by 2026. Early World Cups were given to countries at meetings of FIFA's congress. The locations were controversial because South America and Europe were by far the two centres of strength in football and travel between them required three weeks by boat. The decision to hold the first World Cup in Uruguay, for example, led to only four European nations competing. The next two World Cups were both held in Europe. The decision to hold the second of these in France was disputed, as the South American countries understood that the location would alternate between the two continents. Both Argentina and Uruguay thus boycotted the 1938 FIFA World Cup. Since the 1958 FIFA World Cup, to avoid future boycotts or controversy, FIFA began a pattern of alternating the hosts between the Americas and Europe, which continued until the 1998 FIFA World Cup.
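The combination counts quoted above (3^6 = 729 total outcomes, 207 of which produce a tie between second and third place) can be checked by brute-force enumeration. The sketch below assumes the post-1994 points system (3/1/0) and that a "tie" means equal points between the second- and third-ranked teams, before any tiebreakers are applied:

```python
from itertools import product

# Four teams play a round-robin of six matches; each match has three
# possible results: home win, draw, or away win.
teams = "ABCD"
matches = [("A", "B"), ("A", "C"), ("A", "D"),
           ("B", "C"), ("B", "D"), ("C", "D")]

total = 0
second_third_ties = 0
for results in product(("home", "draw", "away"), repeat=len(matches)):
    points = {t: 0 for t in teams}
    for (home, away), result in zip(matches, results):
        if result == "home":
            points[home] += 3      # three points for a win
        elif result == "away":
            points[away] += 3
        else:
            points[home] += 1      # one point each for a draw
            points[away] += 1
    total += 1
    ranked = sorted(points.values(), reverse=True)
    if ranked[1] == ranked[2]:     # 2nd and 3rd level on points
        second_third_ties += 1

print(total, second_third_ties)
```

Enumerating all results rather than reasoning case by case is the simplest way to see why the tiebreaking rules matter in more than a quarter of possible group outcomes.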
The 2002 FIFA World Cup, hosted jointly by South Korea and Japan, was the first one held in Asia and the first tournament with multiple hosts. South Africa became the first African nation to host the World Cup in 2010. The 2014 FIFA World Cup was hosted by Brazil, the first held in South America since Argentina 1978, and was the first occasion on which consecutive World Cups were held outside Europe. The host country is now chosen in a vote by FIFA's Council. This is done under an exhaustive ballot system. The national football association of a country desiring to host the event receives a "Hosting Agreement" from FIFA, which explains the steps and requirements that are expected from a strong bid. The bidding association also receives a form, the submission of which represents the official confirmation of the candidacy. After this, a FIFA-designated group of inspectors visits the country to verify that it meets the requirements needed to host the event, and a report on the country is produced. The decision on who will host the World Cup is usually made six or seven years in advance of the tournament. However, there have been occasions when the hosts of multiple future tournaments were announced at the same time, as was the case for the 2018 and 2022 World Cups, which were awarded to Russia and Qatar, with Qatar becoming the first Middle Eastern country to host the tournament. For the 2010 and 2014 World Cups, the final tournament was rotated between confederations, allowing only countries from the chosen confederation (Africa in 2010, South America in 2014) to bid to host the tournament. The rotation policy was introduced after the controversy surrounding Germany's victory over South Africa in the vote to host the 2006 tournament.
However, the policy of continental rotation will not continue beyond 2014, so any country, except those belonging to confederations that hosted the two preceding tournaments, can apply as hosts for World Cups starting from 2018. This is partly to avoid a similar scenario to the bidding process for the 2014 tournament, where Brazil was the only official bidder. The 2026 FIFA World Cup was chosen to be held in the United States, Canada and Mexico, marking the first time a World Cup has been shared by three host nations. The 2026 tournament will be the biggest World Cup ever held, with 48 teams playing 80 matches. Sixty matches will take place in the US, including all matches from the quarter - finals onward, while Canada and Mexico will host 10 games each. Six of the eight champions have won one of their titles while playing in their own homeland, the exceptions being Brazil, who finished as runners - up after losing the deciding match on home soil in 1950 and lost their semi-final against Germany in 2014, and Spain, which reached the second round on home soil in 1982. England (1966) won its only title while playing as a host nation. Uruguay (1930), Italy (1934), Argentina (1978) and France (1998) won their first titles as host nations but have gone on to win again, while Germany (1974) won their second title on home soil. Other nations have also been successful when hosting the tournament. Switzerland (quarter - finals 1954), Sweden (runners - up in 1958), Chile (third place in 1962), South Korea (fourth place in 2002), and Mexico (quarter - finals in 1970 and 1986) all have their best results when serving as hosts. So far, South Africa (2010) has been the only host nation to fail to advance beyond the first round. The best - attended single match, shown in the last three columns, has been the final in half of the 20 World Cups as of 2014. Another match or matches drew more attendance than the final in 1930, 1938, 1958, 1962, 1970 -- 1982, 1990 and 2006. 
The World Cup was first televised in 1954 and is now the most widely viewed and followed sporting event in the world. The cumulative viewership of all matches of the 2006 World Cup is estimated to be 26.29 billion. 715.1 million individuals watched the final match of this tournament (a ninth of the entire population of the planet). The 2006 World Cup draw, which decided the distribution of teams into groups, was watched by 300 million viewers. The World Cup attracts many sponsors such as Coca - Cola, McDonald 's and Adidas. For these companies and many more, being a sponsor strongly impacts their global brands. Host countries typically experience a multimillion - dollar revenue increase from the month - long event. The governing body of the sport, FIFA, generated $4.8 billion in revenue from the 2014 tournament. Each FIFA World Cup since 1966 has its own mascot or logo. World Cup Willie, the mascot for the 1966 competition, was the first World Cup mascot. World Cups feature official match balls specially designed for each tournament. Each World Cup also has an official song, with performers ranging from Shakira to Will Smith. Other songs, such as "Nessun dorma '', performed by The Three Tenors at four World Cup concerts, have also become identified with the tournament. The World Cup even has a statistically significant effect on birth rates, the male / female sex ratio of newborns, and heart attacks in nations whose national teams are competing. In all, 79 nations have played in at least one World Cup. Of these, eight national teams have won the World Cup, and they have added stars to their badges, with each star representing a World Cup victory. (Uruguay, however, choose to display four stars on their badge, representing their two gold medals at the 1924 and 1928 Summer Olympics and their two World Cup titles in 1930 and 1950).
With five titles, Brazil are the most successful World Cup team and also the only nation to have played in every World Cup (21) to date. Brazil were also the first team to win the World Cup for the third (1970), fourth (1994) and fifth (2002) time. Italy (1934 and 1938) and Brazil (1958 and 1962) are the only nations to have won consecutive titles. West Germany (1982 -- 1990) and Brazil (1994 -- 2002) are the only nations to appear in three consecutive World Cup finals. Germany has made the most top - four finishes (13) and won the most medals (12), as well as appearing in the most finals (8). To date, the final of the World Cup has only been contested by teams from the UEFA (Europe) and CONMEBOL (South America) confederations. European nations have won twelve titles, while South American nations have won nine. Only two teams from outside these two continents have ever reached the semi-finals of the competition: the United States (North, Central America and Caribbean) in 1930 and South Korea (Asia) in 2002. The best result of an African team is reaching the quarter - finals: Cameroon in 1990, Senegal in 2002 and Ghana in 2010. Only one Oceanian qualifier, Australia in 2006, has advanced to the second round. Brazil, Argentina, Spain and Germany are the only teams to win a World Cup outside their continental confederation; Brazil came out victorious in Europe (1958), North America (1970 and 1994) and Asia (2002). Argentina won a World Cup in North America in 1986, while Spain won in Africa in 2010. In 2014, Germany became the first European team to win in the Americas. Only on five occasions have consecutive World Cups been won by teams from the same continent, and the current run of four consecutive champions from the same continental confederation is the first of its kind. Italy and Brazil successfully defended their titles in 1938 and 1962 respectively, while Italy 's triumph in 2006 has been followed by wins for Spain in 2010, Germany in 2014 and France in 2018.
It is also the first time that the leading continent (Europe) is ahead of the other (South America) by more than one championship. At the end of each World Cup, awards are presented to the players and teams for accomplishments other than their final team positions in the tournament. There are currently six awards. An All - Star Team consisting of the best players of the tournament has also been announced for each tournament since 1998. Three players share the record for playing in the most World Cups: Mexico 's Antonio Carbajal (1950 -- 1966) and Rafael Márquez (2002 -- 2018), and Germany 's Lothar Matthäus (1982 -- 1998) all played in five tournaments. Matthäus has played the most World Cup matches overall, with 25 appearances. Brazil 's Djalma Santos (1954 -- 1962), West Germany 's Franz Beckenbauer (1966 -- 1974) and Germany 's Philipp Lahm (2006 -- 2014) are the only players to be named to three Finals All - Star Teams. Miroslav Klose of Germany (2002 -- 2014) is the all - time top scorer at the finals, with 16 goals. He broke Ronaldo of Brazil 's record of 15 goals (1998 -- 2006) during the 2014 semi-final match against Brazil. West Germany 's Gerd Müller (1970 -- 1974) is third, with 14 goals. The fourth - placed goalscorer, France 's Just Fontaine, holds the record for the most goals scored in a single World Cup; all his 13 goals were scored in the 1958 tournament. In November 2007, FIFA announced that all members of World Cup - winning squads between 1930 and 1974 were to be retroactively awarded winners ' medals. This made Brazil 's Pelé the only player to have won three World Cup winners ' medals (1958, 1962, and 1970, although he did not play in the 1962 final due to injury), with 20 other players who have won two winners ' medals.
Seven players have collected all three types of World Cup medals (winners ', runners - up ', and third - place); five were from West Germany 's squad of 1966 -- 1974 (Franz Beckenbauer, Jürgen Grabowski, Horst - Dieter Höttges, Sepp Maier and Wolfgang Overath), while the others are Italy 's Franco Baresi (1982, 1990, 1994) and, most recently, Miroslav Klose of Germany (2002 -- 2014), with four consecutive medals. Brazil 's Mário Zagallo, West Germany 's Franz Beckenbauer and France 's Didier Deschamps are the only people to date to win the World Cup as both player and head coach. Zagallo won in 1958 and 1962 as a player and in 1970 as head coach. Beckenbauer won in 1974 as captain and in 1990 as head coach, and Deschamps repeated the feat in 2018, after having won in 1998 as captain. Italy 's Vittorio Pozzo is the only head coach to ever win two World Cups (1934 and 1938). All World Cup - winning head coaches were natives of the country they coached to victory. Among the national teams, Germany and Brazil have played the most World Cup matches (109); Germany has appeared in the most finals (8), semi-finals (13) and quarter - finals (16), while Brazil has appeared in the most World Cups (21), has the most wins (73) and has scored the most goals (229). The two teams have played each other twice in the World Cup, in the 2002 final and in the 2014 semi-final.
when did the self balancing scooter come out
Self - balancing scooter - wikipedia A self - balancing scooter (also "hoverboard '', self - balancing board) is a self - balancing personal transporter consisting of two motorized wheels connected to a pair of articulated pads on which the rider places their feet. The rider controls the speed by leaning forwards or backwards, and the direction of travel by twisting the pads. Invented in its current form in early 2013, the device is the subject of complex patent disputes. Volume manufacture started in China in 2014, and early units were prone to catch fire due to an overheating battery, which resulted in product recalls in 2016, including one of 500,000 units sold in the United States from 8 manufacturers. Shane Chen, an American businessman and founder of Inventist, filed a patent for a device of this type in February 2013 and launched a Kickstarter fund - raising campaign in May 2013. The device 's increasing popularity in Western countries has been attributed, initially, to endorsement by a wide array of celebrities (including Justin Bieber, Jamie Foxx, Kendall Jenner, Chris Brown, Soulja Boy and Wiz Khalifa). The founders of the American company PhunkeeTree encountered the board at the Hong Kong Electronics Show in 2014 and became involved in its distribution shortly thereafter. By June 2015, the board was being made by several manufacturers, mainly in the Shenzhen region of China. In January 2015, through Inventist, Chen announced his intention to pursue litigation. In April 2015, Ninebot, a significant manufacturer of the devices, acquired Segway Inc. (which separately asserted that it holds patents for self - balancing scooters) in order to resolve the dispute. In May, Chen voiced his frustrations regarding patent rights in China. In August 2015, Mark Cuban announced plans to purchase the Hovertrax patents from Chen.
Many of the units provided in the first year of manufacture were defective and likely to catch fire, resulting in a major product recall from multiple manufacturers during 2016 (more details below). In June 2016 the U.S. International Trade Commission issued an injunction for patent infringement against UPTECH, U.P. Technology, U.P. Robotics, FreeGo China, EcoBoomer, and Roboscooters. Robstep, INMOTION, Tech in the City, and FreeGo settled with Segway. The use of the term "hoverboard '' to describe these devices, which don't actually hover, has led to considerable discussion in the media. The first use of the term can be traced back to a 1967 science fiction novel by M.K. Joseph, and it was subsequently popularized in the 1989 film Back to the Future Part II, in which Marty McFly uses one in a fictional 2015. While the first trademark use of hoverboard was registered in 1996 for a collecting and trading game, its first use as a commercial name representing a wheeled scooter was in 1999, and Guinness World Records lists a farthest hoverboard flight entry. In September 2015 the Oxford English Dictionary stated that in their view the term had not been in use in this context for long enough for inclusion and that for the time being they would restrict their description to boards that Marty McFly would recognize. The term "self - balancing electric scooter '' remains popular. The device has two 6.5 inches (170 mm) - 8 inches (200 mm) diameter wheels connected to a self - balancing control mechanism using built - in gyroscopic sensors and sensor pads. By tilting the pads the rider can control the speed and direction of travel, achieving speeds of between 6 miles per hour (9.7 km / h) and 15 miles per hour (24 km / h) with a range of up to 12 miles (19 km) dependent on model, terrain, rider weight and other factors. As with most wheeled vehicles where the rider is exposed, Consumer Reports has recommended that users wear appropriate safety gear while using them.
There were many instances of units catching fire, with claims that they were responsible for numerous residential fires from late 2015 into 2016. In the United Kingdom, authorities expressed concerns with the boards, regarding possible faulty wiring. Many airlines banned the transportation of the boards, both as stored and carry - on luggage. The U.S. Consumer Product Safety Commission (CPSC) launched an investigation into the safety of the device in late 2015 and determined that the lithium - ion battery packs in the self - balancing scooters / hoverboards could overheat and posed a risk of catching fire / exploding, and that defects had led to 60 fires in over 20 states. In July 2016 the commission ordered the recall of over 500,000 units from 8 manufacturers. The Swagway model X1 constituted the majority of the recalled "hoverboards, '' at 267,000 units. In January 2016, the Philippine Departments of Health (DOH) and Trade and Industry (DTI) issued a joint advisory cautioning the public against buying them, due to reports of injuries and "potential electrocution connected with its usage ''. The advisory also stated "as a precautionary measure, the DOH and DTI - Consumer Protection Group therefore advise parents against buying hoverboards for children under 14 years of age. '' In May 2016, the miniPRO produced by Segway Inc. received UL certification, as did a company in Shenzhen, China. In June 2016, after safety improvements in design, the UL - approved Swagtron was launched in the United States. However, the danger of self - balancing boards has continued, as several houses caught fire due to these devices in 2017.
what characteristics of the skin tend to limit infectious disease
Skin flora - wikipedia The term skin flora (also commonly referred to as skin microbiome) refers to the microorganisms which reside on the skin; typically human skin. Many of them are bacteria, of which there are around 1000 species upon human skin from 19 phyla. Most are found in the superficial layers of the epidermis and the upper parts of hair follicles. Skin flora is usually non-pathogenic, and either commensal (not harmful to their host) or mutualistic (offering a benefit). The benefits bacteria can offer include preventing transient pathogenic organisms from colonizing the skin surface, either by competing for nutrients, secreting chemicals against them, or stimulating the skin 's immune system. However, resident microbes can cause skin diseases and enter the blood system, creating life - threatening diseases, particularly in immunosuppressed people. A major component of non-human skin flora is Batrachochytrium dendrobatidis, a chytrid and non-hyphal zoosporic fungus that causes chytridiomycosis, an infectious disease thought to be responsible for the decline in amphibian populations. The estimate of the number of bacterial species present on skin has been radically changed by the use of 16S ribosomal RNA to identify bacterial species present on skin samples directly from their genetic material. Previously such identification had depended upon microbiological culture, in which many varieties of bacteria did not grow and so were hidden from science. Staphylococcus epidermidis and Staphylococcus aureus were thought from culture - based research to be dominant. However, 16S ribosomal RNA research finds that while common, these species make up only 5 % of skin bacteria. Nevertheless, skin variety provides a rich and diverse habitat for bacteria. Most come from four phyla: Actinobacteria (51.8 %), Firmicutes (24.4 %), Proteobacteria (16.5 %), and Bacteroidetes (6.3 %). There are three main ecological areas: sebaceous, moist, and dry.
Propionibacteria and Staphylococci species were the main species in sebaceous areas. In moist places on the body, Corynebacteria together with Staphylococci dominate. In dry areas, there is a mixture of species, but β - Proteobacteria and Flavobacteriales are dominant. Ecologically, sebaceous areas had greater species richness than moist and dry ones. The areas with the least similarity between people in species composition were the spaces between fingers, the spaces between toes, axillae, and the umbilical cord stump. Most similar were beside the nostril, the nares (inside the nostril), and the back. A study of the area between toes in 100 young adults found 14 different genera of fungi. These include yeasts such as Candida albicans, Rhodotorula rubra, Torulopsis and Trichosporon cutaneum; dermatophytes (skin living fungi) such as Microsporum gypseum and Trichophyton rubrum; and nondermatophyte fungi (opportunistic fungi that can live in skin) such as Rhizopus stolonifer, Trichosporon cutaneum, Fusarium, Scopulariopsis brevicaulis, Curvularia, Alternaria alternata, Paecilomyces, Aspergillus flavus and Penicillium species. A study by the National Human Genome Research Institute in Bethesda, Maryland, researched the DNA of human skin fungi at 14 different locations on the body. These were the ear canal, between the eyebrows, the back of the head, behind the ear, the heel, toenails, between the toes, forearm, back, groin, nostrils, chest, palm, and the crook of the elbow. The study showed a large fungal diversity across the body, the richest habitat being the heel, which hosts about 80 species of fungi. By way of contrast, there are some 60 species in toenail clippings and 40 between the toes. Other rich areas are the palm, forearm and inside the elbow, with from 18 to 32 species. The head and the trunk hosted between 2 and 10 each.
The umbilicus, or navel, is an area of the body that is rarely exposed to UV light, soaps, or bodily secretions (the navel does not produce any secretions or oils) and, because it is an almost undisturbed community of bacteria, it is an excellent part of the skin microbiome to study. The navel is a moist microbiome of the body (with high humidity and temperatures) that contains a large amount of bacteria, especially bacteria that favor moist conditions, such as Corynebacterium and Staphylococcus. The Belly Button Biodiversity Project began at North Carolina State University in early 2011 with two initial groups of 35 and 25 volunteers. Volunteers were given sterile cotton swabs and were asked to insert the cotton swabs into their navels, to turn the cotton swab around three times and then return the cotton swab to the researchers in a vial that contained 0.5 ml of a 10 % phosphate saline buffer. Researchers at North Carolina State University, led by Jiri Hulcr, then grew the samples in a culture until the bacterial colonies were large enough to be photographed, and then these pictures were posted on the Belly Button Biodiversity Project 's website (volunteers were given sample numbers so that they could view their own samples online). These samples were then analyzed using 16S rDNA libraries so that strains that did not grow well in cultures could be identified. The researchers at North Carolina State University discovered that while it was difficult to predict every strain of bacteria in the microbiome of the navel, they could predict which strains would be prevalent and which strains of bacteria would be quite rare in the microbiome. It was found that the navel microbiomes only contained a few prevalent types of bacteria (Staphylococcus, Corynebacterium, Actinobacteria, Clostridiales, and Bacilli) and many different types of rare bacteria.
Other types of rare organisms were discovered inside the navels of the volunteers, including three types of Archaea (organisms that usually live only in extreme environments); two of the three types of Archaea were found in one volunteer who claimed not to have bathed or showered for many years. Staphylococcus and Corynebacterium were among the most common types of bacteria found in the navels of this project 's volunteers, and these have been found to be the most common types of bacteria on the human skin in larger studies of the skin microbiome (of which the Belly Button Biodiversity Project is a part). (In these larger studies it has been found that females generally have more Staphylococcus living in their skin microbiomes (usually Staphylococcus epidermidis) and that men have more Corynebacterium living in their skin microbiomes.) According to the Belly Button Biodiversity Project at North Carolina State University, there are two types of microorganisms found in the navel and surrounding areas. Transient bacteria (bacteria that do not reproduce) form the majority of the organisms found in the navel, and an estimated 1400 various strains were found in 95 % of participants of the study. The Belly Button Biodiversity Project is ongoing and has now taken swabs from over 500 people. The project was designed with the aim of countering the misconception that bacteria are always harmful to humans and that humans are at war with bacteria. In actuality, most strains of bacteria are harmless, if not beneficial, for the human body. Another of the project 's goals is to foster public interest in microbiology. Working in concert with the Human Microbiome Project, the Belly Button Biodiversity Project also studies the connections between human microbiomes and the factors of age, sex, ethnicity, location and overall health. Skin microflora can be commensal, mutualistic or pathogenic.
Often they can be all three, depending upon the strength of the person 's immune system. Research upon the immune system in the gut and lungs has shown that microflora aids immunity development; however, such research has only just begun to address whether this is the case with the skin. Pseudomonas aeruginosa is an example of a mutualistic bacterium that can turn into a pathogen and cause disease: if it gains entry into the blood system it can result in infections in bone, joint, gastrointestinal, and respiratory systems. It can also cause dermatitis. However, Pseudomonas aeruginosa produces antimicrobial substances such as pseudomonic acid (exploited commercially, as in Mupirocin). This works against staphylococcal and streptococcal infections. Pseudomonas aeruginosa also produces substances that inhibit the growth of fungus species such as Candida krusei, Candida albicans, Torulopsis glabrata, Saccharomyces cerevisiae and Aspergillus fumigatus. It can also inhibit the growth of Helicobacter pylori. So important are its antimicrobial actions that it has been noted that "removing P. aeruginosa from the skin, through use of oral or topical antibiotics, may inversely allow for aberrant yeast colonization and infection. '' Another aspect of bacteria is the generation of body odor. Sweat is odorless; however, several bacteria may consume it and create byproducts that humans may consider putrid (in contrast to flies, for example, which may find them attractive or appealing). The skin creates antimicrobial peptides such as cathelicidins that control the proliferation of skin microbes. Cathelicidins not only reduce microbe numbers directly but also trigger cytokine release, which induces inflammation, angiogenesis, and reepithelialization. Conditions such as atopic dermatitis have been linked to the suppression of cathelicidin production. In rosacea, abnormal processing of cathelicidin causes inflammation.
Psoriasis has been linked to self - DNA created from cathelicidin peptides that causes autoinflammation. A major factor controlling cathelicidin is vitamin D. The superficial layers of the skin are naturally acidic (pH 4 - 4.5) due to lactic acid in sweat and acids produced by skin bacteria. At this pH, mutualistic flora such as Staphylococci, Micrococci, Corynebacterium and Propionibacteria grow, but not transient bacteria such as Gram - negative bacteria like Escherichia and Pseudomonas or Gram - positive ones such as Staphylococcus aureus. Another factor affecting the growth of pathological bacteria is that the antimicrobial substances secreted by the skin are enhanced in acidic conditions. In alkaline conditions, bacteria cease to be attached to the skin and are more readily shed. It has been observed that the skin also swells under alkaline conditions and opens up, allowing microbes to move to the surface. If activated, the immune system in the skin produces cell - mediated immunity against microbes such as dermatophytes (skin fungi). One reaction is to increase stratum corneum turnover and so shed the fungus from the skin surface. Skin fungi such as Trichophyton rubrum have evolved to create substances that limit the immune response to them. The shedding of skin is a general means to control the buildup of flora upon the skin surface. Microorganisms play a role in noninfectious skin diseases such as atopic dermatitis, rosacea, psoriasis, and acne. Damaged skin can cause nonpathogenic bacteria to become pathogenic. The skin microbiome is an active field of research in which new relationships between commensal skin microbiota and skin diseases continue to be found. Acne vulgaris is a common skin condition characterised by excessive sebum production by the pilosebaceous unit and inflammation of the skin. Affected areas are typically colonised by Propionibacterium acnes, a member of the commensal microbiota even in those without acne. High populations of P.
acnes are linked to acne vulgaris, although only certain strains are strongly associated with acne while others are associated with healthy skin. The relative population of P. acnes is similar between those with acne and those without. Current treatment includes topical and systemic antibacterial drugs which result in decreased P. acnes colonisation and / or activity. Potential probiotic treatment includes the use of Staphylococcus epidermidis to inhibit P. acnes growth. S. epidermidis produces succinic acid, which has been shown to inhibit P. acnes growth. Lactobacillus plantarum has also been shown to act as an anti-inflammatory and improve the antimicrobial properties of the skin when applied topically. It was also shown to be effective in reducing acne lesion size. Individuals with atopic dermatitis have shown an increase in populations of Staphylococcus aureus in both lesional and nonlesional skin. Atopic dermatitis flares are associated with low bacterial diversity due to colonisation by S. aureus, and following standard treatment, bacterial diversity has been seen to increase. Current treatments include combinations of topical or systemic antibiotics, corticosteroids, and diluted bleach baths. Potential probiotic treatments include using the commensal skin bacterium S. epidermidis to inhibit S. aureus growth. During atopic dermatitis flares, population levels of S. epidermidis have been shown to increase as an attempt to control S. aureus populations. Low gut microbial diversity in babies has been associated with an increased risk of atopic dermatitis. Infants with atopic eczema have low levels of Bacteroides and high levels of Firmicutes. Bacteroides have anti-inflammatory properties which are essential against dermatitis. (See gut microbiota.) Psoriasis vulgaris typically affects drier skin sites such as elbows and knees. Dry areas of the skin tend to have high microbial diversity but smaller populations than sebaceous sites.
A study using swab sampling techniques showed that areas rich in Firmicutes (mainly Streptococcus and Staphylococcus) and Actinobacteria (mainly Corynebacterium and Propionibacterium) are associated with psoriasis, while another study using biopsies associated increased levels of Firmicutes and Actinobacteria with healthy skin. However, most studies show that individuals affected by psoriasis have a lower microbial diversity in the affected areas. Treatments for psoriasis include topical agents, phototherapy, and systemic agents. Current research on the skin microbiota 's role in psoriasis is inconsistent; therefore, there are no potential probiotic treatments. Rosacea is typically connected to sebaceous sites of the skin. The skin mite Demodex folliculorum produces lipases that allow it to use sebum as a source of food, so it has a high affinity for sebaceous skin sites. Although it is a part of the commensal skin microbiota, patients affected with rosacea show an increase in D. folliculorum compared to healthy individuals, suggesting pathogenicity. Bacillus oleronius, a Demodex - associated microbe, is not typically found in the commensal skin microbiota but initiates inflammatory pathways whose starting mechanism is similar to that seen in rosacea patients. Populations of S. epidermidis have also been isolated from pustules of rosacea patients. However, it is possible that they were moved by Demodex to areas that favour growth, as Demodex has been shown to transport bacteria around the face. Current treatments include topical and oral antibiotics and laser therapy. As current research has yet to show a clear mechanism for Demodex influence in rosacea, there are no potential probiotic treatments. Skin microbes are a potential source of infection for medical devices such as catheters. It is important to note that the human skin is host to numerous bacterial and fungal species, some of which are known to be harmful, some known to be beneficial and the vast majority unresearched.
The use of bactericidal and fungicidal soaps will inevitably lead to bacterial and fungal populations which are resistant to the chemicals employed. (See drug resistance.) Skin flora do not readily pass between people: 30 seconds of moderate friction and dry hand contact results in a transfer of only 0.07 % of natural hand flora from naked hands, with a greater percentage transferred from gloves. The most effective (60 to 80 % reduction) antimicrobial washing is with ethanol, isopropanol, and n - propanol. Viruses are most affected by high (95 %) concentrations of ethanol, while bacteria are more affected by n - propanol. Unmedicated soaps are not very effective, as illustrated by one comparison in which health care workers washed their hands once in nonmedicated liquid soap for 30 seconds, while students / technicians washed 20 times. An important use of hand washing is to prevent the transmission of antibiotic - resistant skin flora that cause hospital - acquired infections such as Methicillin - resistant Staphylococcus aureus. While such flora have become antibiotic resistant due to antibiotics, there is no evidence that recommended antiseptics or disinfectants select for antibiotic - resistant organisms when used in hand washing. However, many strains of organisms are resistant to some of the substances used in antibacterial soaps, such as Triclosan. One survey of bar soaps in dentist clinics found they all had their own flora and, on average, two to five different genera of microorganisms, with those used most often more likely to have a greater variety of species. Another survey of bar soaps in public toilets found even more flora. Another study found that very dry soaps are not infected, while all those that rest in pools of water are. However, research upon soap that was deliberately infected found that soap flora do not transmit to the hands. Washing skin repeatedly can damage the protective external layer and cause transepidermal loss of water.
This can be seen in roughness characterized by scaling and dryness, itchiness, redness, and dermatitis provoked by microorganisms and allergens penetrating the corneal layer. Wearing gloves can cause further problems, since it produces a humid environment favoring the growth of microbes and also exposes the skin to irritants such as latex and talcum powder. Hand washing can damage skin because the stratum corneum, the top layer of skin, consists of 15 to 20 layers of keratin disks (corneocytes), each of which is surrounded by a thin film of skin lipids that can be removed by alcohols and detergents. Damaged skin, defined by extensive cracking of the skin surface, widespread reddening or occasional bleeding, has also been found to be more frequently colonized by Staphylococcus hominis, and these colonies were more likely to be methicillin resistant. Though not related to greater antibiotic resistance, damaged skin was also more likely to be colonized by Staphylococcus aureus, gram - negative bacteria, Enterococci and Candida. The skin flora differ from those of the gut, which are predominantly Firmicutes and Bacteroidetes. There is also a low level of variation between people that is not found in gut studies. Both gut and skin flora, however, lack the diversity found in soil flora.
who bought jim beam and maker's mark
Maker 's Mark - wikipedia Maker 's Mark is a small - batch bourbon whiskey produced in Loretto, Kentucky, by Beam Suntory. It is bottled at 90 U.S. proof (45 % alcohol by volume) and sold in distinctively squarish bottles sealed with red wax. The distillery offers tours, and is part of the American Whiskey Trail and the Kentucky Bourbon Trail. Maker 's Mark originated when T. William "Bill '' Samuels Sr. purchased the "Burks ' Distillery '' in Loretto, Kentucky, for $35,000 on October 1, 1953. Production began in 1954, and the first run was bottled in 1958 under the brand 's distinctive dipped red wax seal (U.S. trademark serial number 73526578). In the 1960s and 1970s, Maker 's Mark was widely marketed with the tag line, "It tastes expensive... and is. '' The distillery was listed on the National Register of Historic Places on December 31, 1974, and designated a National Historic Landmark on December 16, 1980, listed as "Burks ' Distillery '' -- the first distillery in America to be so recognized while its landmark buildings were in active production. Maker 's Mark was sold to Hiram Walker & Sons in 1981, which was acquired by the distillery giant Allied Domecq in 1987. When Allied Domecq was bought by Pernod Ricard in 2005, the Maker 's Mark brand was sold to Deerfield, Illinois - based Fortune Brands. Fortune Brands split in 2011, with its alcoholic beverage business becoming Beam Inc. After the brand 's creation by Bill Samuels, Sr., its production was overseen by his son Bill Samuels, Jr. until 2011, when he announced his retirement as president and CEO of Maker 's Mark at the age of 70. His son Rob Samuels succeeded him in April 2011. On February 9, 2013, the company sent a mass e-mail announcing a plan to reduce the alcohol strength of the whiskey, citing supply issues as the reason for the change. The result of this change would have been to reduce the product from 90 U.S. proof (45 % alcohol by volume) to 84 U.S. 
proof (42 % abv), which would have stretched inventory by about 6 %. Maker 's Mark said that its own tasting panel of distillery employees reported no taste difference in the lower proof, while industry analysts said that the difference would be subtle and, since most drinkers mix the bourbon or serve it on ice, few would be able to notice it. According to Neil Irwin for The Washington Post 's Wonkblog, the decision can be explained by Beam 's desire to keep Maker 's Mark competitive as a premium bourbon at mid-range bars, and a well drink among high - end bars. On February 17, the company said that it had reconsidered its decision after receiving a strong negative reaction from customers, and that it would continue to bottle at the original strength. Some overseas markets, such as Australia, will continue to sell the whiskey at 40 %. In January 2014, Beam Inc announced its sale to Suntory Holdings, creating the third largest distilled spirits maker in the world. News of the proposed sale included bourbon executives vowing "the product taste wo n't change -- and neither will the company 's historic purity standards. '' In 2014, Maker 's Mark released a Maker 's Mark Cask Strength Bourbon in limited quantities, initially available to consumers only at the distillery gift shop. The ABV fluctuates from batch to batch between 53 % and 58 %. The product was released on the global market in July 2016. In November 2015, Suntory announced a major expansion of its distillery. Maker 's Mark is unusual in that no rye is used as part of the mash. Instead, red winter wheat is used, along with corn (the predominant grain) and malted barley. During the planning phase of Maker 's Mark, Samuels allegedly developed seven candidate mash bills for the new bourbon. As he did not have time to distill and age each one for tasting, he instead made a loaf of bread from each recipe, and the one with no rye was judged the best tasting. 
Samuels also received considerable assistance and recipes from Stitzel - Weller owner Pappy Van Winkle, whose distillery produced the wheated Old Fitzgerald and W.L. Weller bourbons. Maker 's Mark is aged for around six years and is bottled and marketed when the company 's tasters agree that it is ready. Maker 's Mark is one of the few distillers to rotate the barrels from the upper to the lower levels of the aging warehouses during the aging process to even out differences in temperature. The upper floors are exposed to the greatest temperature variations during the year, so rotating the barrels ensures that the bourbon in all the barrels has the same quality and taste. Maker 's Mark is marketed as a small batch Bourbon. Most producers of so - called small batch Bourbons do not clarify exactly what they mean by the term. The producer of Maker 's Mark says that the traditional definition is "A bourbon that is produced / distilled in small quantities of approximately 1,000 gallons or less (20 barrels) from a mash bill of around 200 bushels of grain ''. Maker 's Mark is sold in squarish bottles that are sealed with red wax. T. William Samuels ' wife, Marjorie "Margie '' Samuels, gave the whiskey its name, drew its label, and thought up the wax dipping that gives the bottle its distinctive look. It was introduced to the market in 1959. Three varieties are marketed: the original; a mint julep flavor with green wax on the neck, released seasonally in limited amounts; and Maker 's 46 (47 % alcohol by volume), a variety flavored by introducing seared French oak staves into the traditional charred white oak barrel toward the end of its aging. The original has been bottled at 90 U.S. proof (45 % alcohol by volume). Maker 's Mark is, along with George Dickel and Old Forester, one of a handful of American - made whiskies that use the Scottish spelling "whisky '' rather than the predominant American "whiskey ''. 
Maker 's Mark began creating branded restaurants with the October 2004 opening of Maker 's Mark Bourbon House & Lounge in the Fourth Street Live! entertainment complex in downtown Louisville, Kentucky. In addition to serving Maker 's Mark, it features bourbons from each of Kentucky 's distilleries. The menu was designed by Chef Al Paris of the famous Zanzibar Blue restaurant in Philadelphia. A second such establishment opened in Kansas City, Missouri 's downtown Power & Light District in 2008, and a third at the Indiana Live Casino in Shelbyville, Indiana, just outside Indianapolis, in March 2009. Maker 's Mark bourbon has earned solid marks at international spirit ratings competitions. Its primary bourbon earned a gold medal at the 2010 San Francisco World Spirit Ratings Competition and a score of 90 -- 95 from Wine Enthusiast in 2007. The Maker 's Mark 46 -- which benefits from longer aging and exposure to toasted French oak staves -- has earned similar ratings. Jane MacQuitty, spirits writer for the London Times, said of Maker 's Mark that "What separates this bourbon from the rest is the softness and smoothness of its rich oak, vanilla and raisiny - like flavours. '' Food critic Morgan Murphy said "Dark as its red wax seal, this beautiful whiskey packs apple spice, vanilla, and a front - of - the - mouth crispness that is admired the world over. '' Maker 's Mark began releasing special edition Keeneland bottles featuring horses in 1997. The label was white with a dark green horse and green wax. Other Keeneland bottles include famous Derby winners such as Secretariat (2003), Seattle Slew (2004), Affirmed (2005), and American Pharoah (2016). On July 20, 2012, Maker 's Mark started selling a limited edition bottle featuring University of Louisville 's Head Football Coach Charlie Strong. The bottles were created to raise money for a new Academic Center of Excellence on UofL 's campus. 
Maker 's Mark has also marketed special label bottles with the images of Hall of Fame Basketball Coach Rick Pitino and Athletic Director Tom Jurich for the same purpose. Maker 's Mark has featured several University of Kentucky sports personalities on its University of Kentucky (UK) line of limited release bottles. A limited quantity of bottles can be signed for free by the personality that was selected for the bottle and by a member of the Samuels family. The signing party is held at Keeneland horse track in the university 's home city of Lexington. The first UK special edition bottle was produced in 1993. In celebration of the 1996 NCAA Men 's Basketball Champions, Maker 's Mark printed a bottle that had a denim background with white type. The team 's coach at the time, Rick Pitino, signed the bottle. Other bottles include: Wildcat Bottle (2001), Bill Keightley (2002), Rupp 's Runts (2006), The Unforgettables (2007), Joe B. Hall (first in 2008 and again in 2016), Rich Brooks (2009), John Calipari (2010), Tim Couch (2012), Dan Issel (2013), Mark Stoops (2014), and Adolph Rupp (2015). The 2015 bottle was the first in a series honoring the five basketball coaches who won NCAA titles at UK; future entries in the series will feature Pitino in 2017, Tubby Smith in 2018, and Calipari for a second time in 2019. Coordinates: 37 ° 38 ′ 52 '' N 85 ° 20 ′ 56 '' W  /  37.64778 ° N 85.34889 ° W  / 37.64778; - 85.34889
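As a quick check, the decimal coordinates in the line above follow from the degrees - minutes - seconds values by the standard conversion (a minute is 1/60 of a degree, a second is 1/3600). A minimal Python sketch:

```python
def dms_to_decimal(degrees, minutes, seconds, negative=False):
    """Convert degrees/minutes/seconds to decimal degrees.
    South latitudes and west longitudes are returned as negative values."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if negative else value

# 37° 38′ 52″ N and 85° 20′ 56″ W, as given for the distillery:
lat = dms_to_decimal(37, 38, 52)
lon = dms_to_decimal(85, 20, 56, negative=True)

print(round(lat, 5), round(lon, 5))  # 37.64778 -85.34889
```

Rounded to five decimal places, this reproduces the 37.64778; -85.34889 pair shown above.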
what is the book of jonah about in the bible
Book of Jonah - wikipedia The Book of Jonah is one of the Prophets in the Bible. It tells of a Hebrew prophet named Jonah son of Amittai who is sent by God to prophesy the destruction of Nineveh but tries to escape the divine mission. Set in the reign of Jeroboam II (786 -- 746 BC), it was probably written in the post-exilic period, some time between the late 5th and early 4th centuries BC. The story has a long interpretive history and has become well - known through popular children 's stories. In Judaism it is the Haftarah read during the afternoon of Yom Kippur in order to instill reflection on God 's willingness to forgive those who repent; it remains a popular story among Christians. It is also retold in the Koran. Unlike the other Prophets, the book of Jonah is almost entirely narrative, with the exception of the psalm in chapter 2. The actual prophetic word against Nineveh is given only in passing through the narrative. As with any good narrative, the story of Jonah has a setting, characters, a plot, and themes. It also relies heavily on such literary devices as irony. Nineveh, where Jonah preached, was the capital of the ancient Assyrian empire, which fell to the Babylonians and the Medes in 612 BC. The book calls Nineveh a "great city, '' referring to its size (Jonah 3: 3 and 4: 11) and perhaps to its affluence as well. (The story of the city 's deliverance from judgment may reflect an older tradition dating back to the 8th -- 7th century BC.) Assyria often opposed Israel and eventually took the Israelites captive in 722 -- 721 BC (see History of ancient Israel and Judah). The Assyrian oppression against the Israelites can be seen in the bitter prophecies of Nahum. The story of Jonah is a drama between a passive man and an active God. Jonah, whose name literally means "dove, '' is introduced to the reader in the very first verse. The name is decisive. 
While many other prophets had heroic names (e.g., Isaiah means "God has saved ''), Jonah 's name carries with it an element of passivity. Jonah 's passive character is contrasted with the other main character: Yahweh. God 's character is altogether active. While Jonah flees, God pursues. While Jonah falls, God lifts up. The character of God in the story is progressively revealed through the use of irony. In the first part of the book, God is depicted as relentless and wrathful; in the second part of the book, He is revealed to be truly loving and merciful. The other characters of the story include the sailors in chapter 1 and the people of Nineveh in chapter 3. These characters are also contrasted to Jonah 's passivity. While Jonah sleeps in the hull, the sailors pray and try to save the ship from the storm (1: 4 -- 6). While Jonah passively finds himself forced to act under the Divine Will, the people of Nineveh actively petition God to change his mind. The plot centers on a conflict between Jonah and God. God calls Jonah to proclaim judgment to Nineveh, but Jonah resists and attempts to flee. He goes to Joppa and boards a ship bound for Tarshish. God calls up a great storm at sea, and, at Jonah 's insistence, the ship 's crew reluctantly cast Jonah overboard in an attempt to appease God. A great sea creature, sent by God, swallows Jonah. For three days and three nights Jonah languishes inside the fish 's belly. He says a prayer in which he repents for his disobedience and thanks God for His mercy. God speaks to the fish, which vomits out Jonah safely on dry land. After his rescue, Jonah obeys the call to prophesy against Nineveh, causing the people of the city to repent and God to forgive them. Jonah is furious, however, and angrily tells God that this is the reason he tried to flee from Him, as he knew Him to be a just and merciful God. He then beseeches God to kill him, a request which is denied when God causes a tree to grow over him, giving him shade. 
Initially grateful, Jonah 's anger returns the next day, when God sends a worm to eat the plant, withering it, and he tells God that it would be better if he were dead. God then points out: "You are concerned about the bush, for which you did not labour and which you did not grow; it came into being in a night and perished in a night. And should I not be concerned about Nineveh, that great city, in which there are more than a hundred and twenty thousand people who do not know their right hand from their left, and also many animals? (NRSV) '' Ironically, the relentless God demonstrated in the first chapter is shown to be the merciful God in the last two chapters (see 3: 10). Equally ironically, despite not wanting to go to Nineveh and follow God 's calling, Jonah becomes one of the most effective prophets of God. As a result of his preaching, the entire population of Nineveh repents before the Lord and is spared destruction. The author indicates that the city "has more than a hundred and twenty thousand people who can not tell their right hand from their left '' (4: 11a, NIV). While some commentators see this number (120,000) as a somewhat pejorative reference to ignorant or backward Ninevites, most commentators take it to refer to young infants, thus implying a population considerably larger than 120,000. Islam also tells the story of the Prophet Jonah in the Koran. Similar to the Bible, the Koran states that Jonah was sent to his people to deliver a message to worship only one God (the Judeo - Christian God of Abraham) and refrain from evil behavior. However, Jonah became angry with his people when they refused to listen and ignored him. Jonah gave up on his people and left his community without having instruction from God. "And remember when he (Jonah) went off in anger. '' (Quran 21: 87) According to Islam, after Jonah left his people the sky turned red as fire and the people were filled with fear. 
Jonah 's people repented to God and prayed that Jonah would return to guide them to the Straight Path. God accepted their repentance and the sky returned to normal. As told in the Koran, Jonah boarded a ship to be far away from his people. While on the ship the calm sea became violent and was tearing at the boat. After throwing their belongings overboard without any positive change, the passengers cast lots to throw someone overboard to reduce the weight. Twice Jonah 's name was drawn to be thrown overboard, which surprised the passengers because Jonah was perceived as a righteous and pious man. Jonah understood this was not a coincidence but his destiny, and he jumped into the violent sea and was swallowed by a "giant fish. '' Many believe this fish was a whale. The strong acid from the fish 's belly began to eat away at Jonah 's skin and he began to repeatedly call out to God for help by saying: "None has the right to be worshipped but you oh God, glorified are you and truly I have been one of the wrongdoers! '' (Quran 21: 87) Islam teaches that God accepted Jonah 's repentance and commanded the giant fish to spit Jonah out onto the shore. Jonah was in pain and his skin was burned from the acid in the fish 's belly. Jonah repeated his prayer and God relieved him by having a vine (gourd) cover his body to protect him and also provided him with food. The Koran states: "And, verily, Jonah was one of the Messengers. When he ran to the laden ship, he agreed to cast lots and he was among the losers, then a big fish swallowed him and he had done an act worthy of blame. Had he not been of them who glorify God, he would have indeed remained inside its belly (the fish) until the Day of Resurrection. But We cast him forth on the naked shore while he was sick and We caused a plant of gourd to grow over him. And We sent him to a hundred thousand people or even more, and they believed, so We gave them enjoyment for a while. '' (Quran 37: 139 - 148). 
Jonah returned to be with his people and guide them. The prayer made by Jonah while in the fish 's belly can be used to help anyone in times of distress: "None has the right to be worshipped but you oh God, glorified are you and truly I have been one of the wrongdoers! '' (Quran 21: 87). The story of Jonah has numerous theological implications, and this has long been recognized. In early translations of the Hebrew Bible, Jewish translators tended to remove anthropomorphic imagery in order to prevent the reader from misunderstanding the ancient texts. This tendency is evidenced in both the Aramaic translations (e.g. the Targums) and the Greek translations (e.g. the Septuagint). As far as the Book of Jonah is concerned, Targum Jonah offers a good example of this. In Jonah 1: 6, the Masoretic Text (MT) reads, "... perhaps God will pay heed to us... '' Targum Jonah translates this passage as: "... perhaps there will be mercy from the Lord upon us... '' The captain 's proposal is no longer an attempt to change the divine will; it is an attempt to appeal to divine mercy. Furthermore, in Jonah 3: 9, the MT reads, "Who knows, God may turn and relent (lit. repent)? '' Targum Jonah translates this as, "Whoever knows that there are sins on his conscience let him repent of them and we will be pitied before the Lord. '' God does not change His mind; He shows pity. Fragments of the book were found among the Dead Sea Scrolls (DSS) (4Q76 a.k.a. 4QMinorProphets, Col V - VI, frags. 21 -- 22; 4Q81 a.k.a. 4QMinorProphets, Col I and II; and 4Q82 a.k.a. 4QMinorProphets, Frags. 76 -- 91), most of which follow the Masoretic Text closely, with Mur XII reproducing a large portion of the text. As for the non-canonical writings, the majority of references to biblical texts were made as appeals to authority (argumentum ad verecundiam). The Book of Jonah appears to have served less purpose in the Qumran community than other texts, as the writings make no references to it. 
The earliest Christian interpretations of Jonah are found in the Gospel of Matthew (see Matthew 12: 38 -- 42 and 16: 1 -- 4) and the Gospel of Luke (see Luke 11: 29 -- 32). Both Matthew and Luke record a tradition of Jesus ' interpretation of the Book of Jonah (notably, Matthew includes two very similar traditions in chapters 12 and 16). As with most Old Testament interpretations found in the New Testament, Jesus ' interpretation is primarily "typological '' (see Typology (theology)). Jonah becomes a "type '' for Jesus. Jonah spent three days in the belly of the fish; Jesus will spend three days in the grave. Here, Jesus plays on the imagery of Sheol found in Jonah 's prayer. While Jonah metaphorically declared, "Out of the belly of Sheol I cried, '' Jesus will literally be in the belly of Sheol. Finally, Jesus compares his generation to the people of Nineveh. Jesus fulfills his role as a type of Jonah, however his generation fails to fulfill its role as a type of Nineveh. Nineveh repented, but Jesus ' generation, which has seen and heard one even greater than Jonah, fails to repent. Through his typological interpretation of the Book of Jonah, Jesus has weighed his generation and found it wanting. The debate over the credibility of the miracle of Jonah is not simply a modern one. The credibility of a human being surviving in the belly of a great fish has long been questioned. In c. 409 AD, Augustine of Hippo wrote to Deogratias concerning the challenge of some to the miracle recorded in the Book of Jonah. He writes: The last question proposed is concerning Jonah, and it is put as if it were not from Porphyry, but as being a standing subject of ridicule among the Pagans; for his words are: "In the next place, what are we to believe concerning Jonah, who is said to have been three days in a whale 's belly? The thing is utterly improbable and incredible, that a man swallowed with his clothes on should have existed in the inside of a fish. 
If, however, the story is figurative, be pleased to explain it. Again, what is meant by the story that a gourd sprang up above the head of Jonah after he was vomited by the fish? What was the cause of this gourd 's growth? '' Questions such as these I have seen discussed by Pagans amidst loud laughter, and with great scorn. Augustine responds that if one is to question one miracle, then one should question all miracles as well (section 31). Nevertheless, despite his apologetic, Augustine views the story of Jonah as a figure for Christ. For example, he writes: "As, therefore, Jonah passed from the ship to the belly of the whale, so Christ passed from the cross to the sepulchre, or into the abyss of death. And as Jonah suffered this for the sake of those who were endangered by the storm, so Christ suffered for the sake of those who are tossed on the waves of this world. '' Augustine credits his allegorical interpretation to the interpretation of Christ himself (Matt. 12: 39, 40), and he allows for other interpretations as long as they are in line with Christ 's. The Ordinary Gloss, or Glossa Ordinaria, was the most important Christian commentary on the Bible in the later Middle Ages. "The Gloss on Jonah relies almost exclusively on Jerome 's commentary on Jonah (c. 396), so its Latin often has a tone of urbane classicism. But the Gloss also chops up, compresses, and rearranges Jerome with a carnivalesque glee and scholastic directness that renders the Latin authentically medieval. '' "The Ordinary Gloss on Jonah '' has been translated into English and printed in a format that emulates the first printing of the Gloss. The relationship between Jonah and his fellow Jews is ambivalent, and complicated by the Gloss 's tendency to read Jonah as an allegorical prefiguration of Jesus Christ. 
While some glosses in isolation seem crudely supersessionist ("The foreskin believes while the circumcision remains unfaithful ''), the prevailing allegorical tendency is to attribute Jonah 's recalcitrance to his abiding love for his own people and his insistence that God 's promises to Israel not be overridden by a lenient policy toward the Ninevites. For the glossator, Jonah 's pro-Israel motivations correspond to Christ 's demurral in the Garden of Gethsemane ("My Father, if it be possible, let this chalice pass from me '' (Matt. 26: 39)) and the Gospel of John 's and Paul 's insistence that "salvation is from the Jews '' (Jn. 4: 22). While in the Gloss the plot of Jonah prefigures how God will extend salvation to the nations, it also makes abundantly clear -- as some medieval commentaries on the Gospel of John do not -- that Jonah and Jesus are Jews, and that they make decisions of salvation - historical consequence as Jews. Roman Catholic author Terry Eagleton has written, "There are writers who consider their work to be examples of high seriousness when they are hilariously, unintentionally funny... Another example is the Book of Jonah, which is probably not intended to be funny but which is brilliantly comic without seeming to be aware of it. '' The Hebrew text of Jonah 2: 1 (1: 17 in English translation) reads dag gadol (Hebrew: דג גדול), which literally means "great fish. '' The Septuagint translates this into Greek as ketos megas (Greek: κητος μεγας), "huge fish ''; in Greek mythology the term was closely associated with sea monsters. Saint Jerome later translated the Greek phrase as piscis grandis in his Latin Vulgate, and as cetus in Matthew 12: 40. At some point, cetus became synonymous with whale (cf. cetyl alcohol, which is alcohol derived from whales). In his 1534 translation, William Tyndale translated the phrase in Jonah 2: 1 as "greate fyshe, '' and he translated the word ketos (Greek) or cetus (Latin) in Matthew 12: 40 as "whale ''. 
Tyndale 's translation was later followed by the translators of the King James Version of 1611 and has enjoyed general acceptance in English translations. In line 2: 1, the book refers to the fish as dag gadol, "great fish '', in the masculine. However, in 2: 2, it changes the gender to daga, meaning female fish. The verses therefore read: "And the lord provided a great fish (dag gadol, masculine) for Jonah, and it swallowed him, and Jonah sat in the belly of the fish (still male) for three days and nights; then, from the belly of the (daga, female) fish, Jonah began to pray. '' The peculiarity of this change of gender led the later rabbis to reason that Jonah was comfortable in the roomy male fish, so he did n't pray, but that God then transferred him to a smaller, female fish, in which the prophet was uncomfortable, so that he prayed. The book closes abruptly (Jonah 4) with an epistolary warning based on the emblematic trope of a fast - growing vine present in Persian narratives, and popularized in fables such as The Gourd and the Palm - tree during the Renaissance, for example by Andrea Alciato. St. Jerome differed from St. Augustine in his Latin translation of the plant known in Hebrew as קיקיון (qīqayōn), using hedera (from the Greek, meaning "ivy '') over the more common Latin cucurbita, "gourd '', from which the English word gourd (Old French coorde, couhourde) is derived. The Renaissance humanist artist Albrecht Dürer memorialized Jerome 's decision to use an analogical type of Christ 's "I am the Vine, you are the branches '' in his woodcut Saint Jerome in His Study.
where does coal come from and how is it obtained
Coal - Wikipedia Coal is a combustible black or brownish - black sedimentary rock usually occurring in rock strata in layers or veins called coal beds or coal seams. The harder forms, such as anthracite coal, can be regarded as metamorphic rock because of later exposure to elevated temperature and pressure. Coal is composed primarily of carbon, along with variable quantities of other elements, chiefly hydrogen, sulfur, oxygen, and nitrogen. Coal is a fossil fuel that forms when dead plant matter is converted into peat, which in turn is converted into lignite, then sub-bituminous coal, after that bituminous coal, and lastly anthracite. This involves biological and geological processes that take place over time. Throughout human history, coal has been used as an energy resource, primarily burned for the production of electricity and heat, and is also used for industrial purposes, such as refining metals. Coal is the largest source of energy for the generation of electricity worldwide, as well as one of the largest worldwide anthropogenic sources of carbon dioxide releases. The extraction of coal, its use in energy production and its byproducts are all associated with environmental and health effects, including climate change. Coal is extracted from the ground by coal mining. Since 1983, the world 's top coal producer has been China. In 2015 China produced 3,747 million tonnes of coal -- 48 % of the 7,861 million tonnes of world coal production. In 2015 other large producers were the United States (813 million tonnes), India (678), the European Union (539) and Australia (503). In 2010 the largest exporters were Australia with 328 million tonnes (27 % of world coal exports) and Indonesia with 316 million tonnes (26 %), while the largest importers were Japan with 207 million tonnes (18 % of world coal imports), China with 195 million tonnes (17 %) and South Korea with 126 million tonnes (11 %). 
The word originally took the form col in Old English, from Proto - Germanic * kula (n), which in turn is hypothesized to come from the Proto - Indo - European root * g (e) u-lo - "live coal ''. Germanic cognates include the Old Frisian kole, Middle Dutch cole, Dutch kool, Old High German chol, German Kohle and Old Norse kol, and the Irish word gual is also a cognate via the Indo - European root. In Old Turkic languages, kül is "ash (es), cinders '', öčür is "quench ''. The compound "charcoal '' in Turkic is öčür (ülmüş) kül, literally "quenched ashes, cinders, coals '' with elided anlaut ö - and inflection affixes - ülmüş. At various times in the geologic past, the Earth had dense forests in low - lying wetland areas. Due to natural processes such as flooding, these forests were buried underneath soil. As more and more soil was deposited over them, they were compressed. The temperature also rose as they sank deeper and deeper. As the process continued the plant matter was protected from biodegradation and oxidation, usually by mud or acidic water. This trapped the carbon in immense peat bogs that were eventually covered and deeply buried by sediments. Under high pressure and high temperature, dead vegetation was slowly converted to coal. As coal contains mainly carbon, the conversion of dead vegetation into coal is called carbonization. The wide, shallow seas of the Carboniferous Period provided ideal conditions for coal formation, although coal is known from most geological periods. The exception is the coal gap in the Permian -- Triassic extinction event, where coal is rare. Coal is known from Precambrian strata, which predate land plants -- this coal is presumed to have originated from residues of algae. As geological processes apply pressure to dead biotic material over time, under suitable conditions, its metamorphic grade increases successively into: The classification of coal is generally based on the content of volatiles. 
However, the exact classification varies between countries. According to the German classification, coal is classified as follows: The middle six grades in the table represent a progressive transition from the English - language sub-bituminous to bituminous coal. The last class is an approximate equivalent to anthracite, but more inclusive. (US anthracite has < 6 % volatiles.) Cannel coal (sometimes called "candle coal '') is a variety of fine - grained, high - rank coal with significant hydrogen content. It consists primarily of "exinite '' macerals, now termed "liptinite ''. Hilt 's law is a geological observation that (within a small area) the deeper the coal is found, the higher its rank (or grade). It applies if the thermal gradient is entirely vertical; however, metamorphism may cause lateral changes of rank, irrespective of depth. The earliest recognized use is from the Shenyang area of China around 4000 BC, where Neolithic inhabitants had begun carving ornaments from black lignite. Coal from the Fushun mine in northeastern China was used to smelt copper as early as 1000 BC. Marco Polo, the Italian who traveled to China in the 13th century, described coal as "black stones... which burn like logs '', and said coal was so plentiful, people could take three hot baths a week. In Europe, the earliest reference to the use of coal as fuel is from the geological treatise On stones (Lap. 16) by the Greek scientist Theophrastus (c. 371 -- 287 BC): Among the materials that are dug because they are useful, those known as anthrakes (coals) are made of earth, and, once set on fire, they burn like charcoal. They are found in Liguria... and in Elis as one approaches Olympia by the mountain road; and they are used by those who work in metals. Outcrop coal was used in Britain during the Bronze Age (3000 -- 2000 BC), where it has been detected as forming part of the composition of funeral pyres. 
In Roman Britain, with the exception of two modern fields, "the Romans were exploiting coals in all the major coalfields in England and Wales by the end of the second century AD ''. Evidence of trade in coal (dated to about AD 200) has been found at the Roman settlement at Heronbridge, near Chester, and in the Fenlands of East Anglia, where coal from the Midlands was transported via the Car Dyke for use in drying grain. Coal cinders have been found in the hearths of villas and Roman forts, particularly in Northumberland, dated to around AD 400. In the west of England, contemporary writers described the wonder of a permanent brazier of coal on the altar of Minerva at Aquae Sulis (modern day Bath), although in fact easily accessible surface coal from what became the Somerset coalfield was in common use in quite lowly dwellings locally. Evidence of coal 's use for iron - working in the city during the Roman period has been found. In Eschweiler, Rhineland, deposits of bituminous coal were used by the Romans for the smelting of iron ore. No evidence exists of the product being of great importance in Britain before the High Middle Ages, after about AD 1000. Mineral coal came to be referred to as "seacoal '' in the 13th century; the wharf where the material arrived in London was known as Seacoal Lane, so identified in a charter of King Henry III granted in 1253. Initially, the name was given because much coal was found on the shore, having fallen from the exposed coal seams on cliffs above or washed out of underwater coal outcrops, but by the time of Henry VIII, it was understood to derive from the way it was carried to London by sea. In 1257 -- 1259, coal from Newcastle upon Tyne was shipped to London for the smiths and lime - burners building Westminster Abbey. Seacoal Lane and Newcastle Lane, where coal was unloaded at wharves along the River Fleet, are still in existence. (See Industrial processes below for modern uses of the term.) 
These easily accessible sources had largely become exhausted (or could not meet the growing demand) by the 13th century, when underground extraction by shaft mining or adits was developed. The alternative name was "pitcoal '', because it came from mines. The development of the Industrial Revolution led to the large - scale use of coal, as the steam engine took over from the water wheel. In 1700, five - sixths of the world 's coal was mined in Britain. Britain would have run out of suitable sites for watermills by the 1830s if coal had not been available as a source of energy. In 1947, there were some 750,000 miners in Britain, but by 2004, this had shrunk to some 5,000 miners working in around 20 collieries. Coal is primarily used as a solid fuel to produce electricity and heat through combustion. According to the EIA, world coal consumption is projected to increase from 2012 to 2040 at an average rate of 0.6 % / year, from 153 quadrillion Btu (1 quad is approximately 36,000,000 tonnes of coal) in 2012 to 169 quadrillion Btu in 2020, and to 180 quadrillion Btu in 2040. Efforts around the world to reduce the use of coal have led some regions to switch to natural gas. China produced 3.47 billion tonnes (3.83 billion short tons) in 2011. India produced about 578 million tonnes (637.1 million short tons) in 2011. 69 % of China 's electricity comes from coal. The US consumed about 13 % of the world total in 2010, i.e. 951 million tonnes (1.05 billion short tons), using 93 % of it for generation of electricity. Coal was used to generate 46 % of the total power produced in the US. The United States Energy Information Administration estimates coal reserves at 948 billion short tons (860 Gt). One estimate for resources is 18,000 Gt. When coal is used for electricity generation, it is usually pulverized and then burned in a furnace with a boiler. The furnace heat converts boiler water to steam, which is then used to spin turbines which turn generators and create electricity.
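The EIA projection above can be sanity - checked by compounding the stated average growth rate over the 2012 -- 2040 horizon. This is plain arithmetic on the article's figures, not EIA methodology (the 169 quad figure for 2020 implies faster - than - average growth early in the period):

```python
# Sanity-check the EIA projection: 153 quadrillion Btu in 2012 growing
# at an average 0.6% per year should land near 180 quadrillion Btu in 2040.
start_quads = 153.0      # world coal consumption in 2012 (quadrillion Btu)
growth_rate = 0.006      # stated average annual growth rate (0.6%/year)
years = 2040 - 2012      # 28 years of compounding

projected_2040 = start_quads * (1 + growth_rate) ** years
print(f"Projected 2040 consumption: {projected_2040:.0f} quadrillion Btu")
# prints 181, consistent with the article's 180 quadrillion Btu figure
```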
The thermodynamic efficiency of this process has been improved over time; some older coal - fired power stations have thermal efficiencies in the vicinity of 25 % whereas the newest supercritical and "ultra-supercritical '' steam cycle turbines, operating at temperatures over 600 ° C and pressures over 27 MPa (over 3900 psi), can achieve thermal efficiencies in excess of 45 % (LHV basis) using anthracite fuel, or around 43 % (LHV basis) even when using lower - grade lignite fuel. Further thermal efficiency improvements are also achievable by improved pre-drying (especially relevant with high - moisture fuel such as lignite or biomass) and cooling technologies. An alternative approach to using coal for electricity generation with improved efficiency is the integrated gasification combined cycle (IGCC) power plant. Instead of pulverizing the coal and burning it directly as fuel in the steam - generating boiler, the coal is gasified (see coal gasification) to create syngas, which is burned in a gas turbine to produce electricity (just like natural gas is burned in a turbine). Hot exhaust gases from the turbine are used to raise steam in a heat recovery steam generator which powers a supplemental steam turbine. Thermal efficiencies of current IGCC power plants range from 39 % to 42 % (HHV basis) or ≈ 42 -- 45 % (LHV basis) for bituminous coal, assuming utilization of mainstream gasification technologies (Shell, GE Gasifier, CB&I). IGCC power plants outperform conventional pulverized coal - fueled plants in terms of pollutant emissions, and allow for relatively easy carbon capture. At least 40 % of the world 's electricity comes from coal, and in 2016, 30 % of the United States ' electricity came from coal, down from approximately 49 % in 2008. As of 2012 in the United States, use of coal to generate electricity was declining, as plentiful supplies of natural gas obtained by hydraulic fracturing of tight shale formations became available at low prices.
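The practical effect of these efficiency differences follows from the heat - rate relation: fuel energy in equals electrical energy out divided by efficiency. A minimal sketch, assuming the rough coal heating value of 24 MJ / kg quoted later in this article (actual values vary by coal rank):

```python
# Fuel needed per kWh of electricity at different plant thermal efficiencies.
# Assumes a coal heating value of ~24 MJ/kg (rough figure; varies by rank).
KWH_IN_MJ = 3.6       # 1 kWh = 3.6 MJ
HEATING_VALUE = 24.0  # MJ per kg of coal (assumed)

def coal_per_kwh(efficiency):
    """kg of coal burned per kWh of electricity at the given efficiency."""
    return KWH_IN_MJ / (efficiency * HEATING_VALUE)

for eff in (0.25, 0.39, 0.45):
    print(f"{eff:.0%} efficient plant: {coal_per_kwh(eff):.2f} kg coal/kWh")
# An older 25% plant burns 0.60 kg/kWh; a 45% ultra-supercritical plant
# burns about 0.33 kg/kWh, roughly 45% less fuel per unit of output.
```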
In Denmark, a net electric efficiency of > 47 % has been obtained at the coal - fired Nordjyllandsværket CHP Plant and an overall plant efficiency of up to 91 % with cogeneration of electricity and district heating. The multifuel - fired Avedøreværket CHP Plant just outside Copenhagen can achieve a net electric efficiency as high as 49 %. The overall plant efficiency with cogeneration of electricity and district heating can reach as much as 94 %. An alternative form of coal combustion is as coal - water slurry fuel (CWS), which was developed in the Soviet Union. Other ways to use coal are combined heat and power cogeneration and an MHD topping cycle. The total known deposits recoverable by current technologies, including highly polluting, low - energy content types of coal (i.e., lignite, bituminous), are sufficient for many years. Consumption is increasing and maximal production could be reached within decades (see world coal reserves, below). On the other hand, much may have to be left in the ground to avoid climate change. Worldwide natural gas generated power has increased from 740 TWh in 1973 to 5,140 TWh in 2014, generating 22 % of the world 's total electricity, approximately half as much as generated with coal. In addition to generating electricity, natural gas is also popular in some countries for heating and as an automotive fuel. The use of coal in the United Kingdom declined as a result of the development of North Sea oil and the subsequent Dash for Gas from the 1990s to 2000. In Canada some coal power plants such as the Hearn Generating Station have stopped burning coal by switching the plant to natural gas. In the United States, 27 gigawatts of capacity from coal - fired generators was slated to be retired from 175 US coal - fired power plants between 2012 and 2016. Natural gas showed a corresponding jump, increasing by a third over 2011. Coal 's share of US electricity generation dropped to just over 36 %.
Due to the emergence of shale gas, coal consumption has declined since 2009. Natural gas accounted for 81 % of new power generation in the US between 2000 and 2010. Coal - fired generation puts out about twice the amount of carbon dioxide -- around 2,000 pounds for every megawatt hour generated -- as electricity generated by burning natural gas, which emits around 1,100 pounds of greenhouse gas per megawatt hour. As the fuel mix in the United States has changed to reduce coal and increase natural gas generation, carbon dioxide emissions have unexpectedly fallen. Those measured in the first quarter of 2012 were the lowest of any recorded for the first quarter of any year since 1992. Coke is a solid carbonaceous residue derived from coking coal (a low - ash, low - sulfur bituminous coal, also known as metallurgical coal), which is used in manufacturing steel and other iron products. Coke is made from coking coal by baking in an oven without oxygen at temperatures as high as 1,000 ° C (1,832 ° F), driving off the volatile constituents and fusing together the fixed carbon and residual ash. Metallurgical coke is used as a fuel and as a reducing agent in smelting iron ore in a blast furnace. The result is pig iron, which is too rich in dissolved carbon and must be treated further to make steel. Coking coal should be low in ash, sulfur, and phosphorus, so that these do not migrate to the metal. Based on the ash percentage, coking coal can be divided into various grades. The coke must be strong enough to resist the weight of overburden in the blast furnace, which is why coking coal is so important in making steel using the conventional route. However, the alternative route is direct reduced iron, where any carbonaceous fuel can be used to make sponge or pelletised iron. Coke from coal is grey, hard, and porous and has a heating value of 24.8 million Btu / ton (29.6 MJ / kg).
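The coal - versus - gas emission figures quoted earlier in this section (about 2,000 lb versus 1,100 lb of CO2 per megawatt hour) reduce to a simple comparison; this is arithmetic on the article's round numbers, not plant - level data:

```python
# CO2 emitted per MWh generated, using the article's round figures.
coal_lb_per_mwh = 2000  # coal-fired generation (lb CO2 per MWh)
gas_lb_per_mwh = 1100   # natural-gas-fired generation (lb CO2 per MWh)

ratio = coal_lb_per_mwh / gas_lb_per_mwh
saved = coal_lb_per_mwh - gas_lb_per_mwh
print(f"Coal emits {ratio:.2f}x the CO2 of gas per MWh")  # prints 1.82x
print(f"Switching a MWh from coal to gas avoids {saved} lb "
      f"({saved / coal_lb_per_mwh:.0%} of coal's emissions)")  # 900 lb, 45%
```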
Some cokemaking processes produce valuable byproducts, including coal tar, ammonia, light oils, and coal gas. Petroleum coke is the solid residue obtained in oil refining, which resembles coke, but contains too many impurities to be useful in metallurgical applications. Coal gasification can be used to produce syngas, a mixture of carbon monoxide (CO) and hydrogen (H2) gas. Often syngas is used to fire gas turbines to produce electricity, but the versatility of syngas also allows it to be converted into transportation fuels, such as gasoline and diesel, through the Fischer - Tropsch process; alternatively, syngas can be converted into methanol, which can be blended into fuel directly or converted to gasoline via the methanol to gasoline process. Gasification combined with Fischer - Tropsch technology is currently used by the Sasol chemical company of South Africa to make motor vehicle fuels from coal and natural gas. Alternatively, the hydrogen obtained from gasification can be used for various purposes, such as powering a hydrogen economy, making ammonia, or upgrading fossil fuels. During gasification, the coal is mixed with oxygen and steam while also being heated and pressurized. During the reaction, oxygen and water molecules oxidize the coal into carbon monoxide (CO), while also releasing hydrogen gas (H2). This process has been conducted in both underground coal mines and in the production of town gas which was piped to customers to burn for illumination, heating, and cooking. If the refiner wants to produce gasoline, the syngas is collected at this state and routed into a Fischer - Tropsch reaction. If hydrogen is the desired end - product, however, the syngas is fed into the water gas shift reaction, where more hydrogen is liberated.
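The gasification and shift steps described above correspond to standard textbook reactions, summarized here for clarity (idealized stoichiometry; real gasifiers run a mix of these alongside side reactions):

```latex
% Partial oxidation of carbon in the coal:
2\,\mathrm{C} + \mathrm{O_2} \rightarrow 2\,\mathrm{CO}
% Steam gasification, producing syngas:
\mathrm{C} + \mathrm{H_2O} \rightarrow \mathrm{CO} + \mathrm{H_2}
% Water gas shift, liberating additional hydrogen:
\mathrm{CO} + \mathrm{H_2O} \rightarrow \mathrm{CO_2} + \mathrm{H_2}
```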
In the direct liquefaction processes, the coal is either hydrogenated or carbonized. Hydrogenation processes are the Bergius process, the SRC - I and SRC - II (Solvent Refined Coal) processes, the NUS Corporation hydrogenation process and several other single - stage and two - stage processes. In the process of low - temperature carbonization, coal is coked at temperatures between 360 and 750 ° C (680 and 1,380 ° F). These temperatures optimize the production of coal tars richer in lighter hydrocarbons than normal coal tar. The coal tar is then further processed into fuels. An overview of coal liquefaction and its future potential is available. Coal liquefaction methods involve carbon dioxide (CO2) emissions in the conversion process. If coal liquefaction is done without employing either carbon capture and storage (CCS) technologies or biomass blending, the result is lifecycle greenhouse gas footprints that are generally greater than those released in the extraction and refinement of liquid fuel production from crude oil. If CCS technologies are employed, reductions of 5 -- 12 % can be achieved in Coal to Liquid (CTL) plants and up to a 75 % reduction is achievable when co-gasifying coal with commercially demonstrated levels of biomass (30 % biomass by weight) in coal / biomass - to - liquids plants. For future synthetic fuel projects, carbon dioxide sequestration is proposed to avoid releasing CO2 into the atmosphere. Sequestration adds to the cost of production. Refined coal is the product of a coal - upgrading technology that removes moisture and certain pollutants from lower - rank coals such as sub-bituminous and lignite (brown) coals. It is one form of several precombustion treatments and processes for coal that alter coal 's characteristics before it is burned. The goals of precombustion coal technologies are to increase efficiency and reduce emissions when the coal is burned.
Depending on the situation, precombustion technology can be used in place of or as a supplement to postcombustion technologies to control emissions from coal - fueled boilers. Finely ground bituminous coal, known in this application as sea coal, is a constituent of foundry sand. While the molten metal is in the mould, the coal burns slowly, releasing reducing gases at pressure, and so preventing the metal from penetrating the pores of the sand. It is also contained in ' mould wash ', a paste or liquid with the same function applied to the mould before casting. Sea coal can be mixed with the clay lining (the "bod '') used for the bottom of a cupola furnace. When heated, the coal decomposes and the bod becomes slightly friable, easing the process of breaking open holes for tapping the molten metal. Coal is an important feedstock in production of a wide range of chemical fertilizers and other chemical products. The main route to these products is coal gasification to produce syngas. Primary chemicals that are produced directly from the syngas include methanol, hydrogen and carbon monoxide, which are the chemical building blocks from which a whole spectrum of derivative chemicals are manufactured, including olefins, acetic acid, formaldehyde, ammonia, urea and others. The versatility of syngas as a precursor to primary chemicals and high - value derivative products provides the option of using relatively inexpensive coal to produce a wide range of valuable commodities. Production of chemicals from coal has been practiced since the 1950s and has become established in the market. According to the 2010 Worldwide Gasification Database, a survey of current and planned gasifiers, from 2004 to 2007 chemical production increased its gasification product share from 37 % to 45 %. From 2008 to 2010, 22 % of new gasifier additions were to be for chemical production.
Because the slate of chemical products that can be made via coal gasification can in general also use feedstocks derived from natural gas and petroleum, the chemical industry tends to use whatever feedstocks are most cost - effective. Therefore, interest in using coal tends to increase for higher oil and natural gas prices and during periods of high global economic growth that may strain oil and gas production. Also, production of chemicals from coal is of much higher interest in countries like South Africa, China, India and the United States where there are abundant coal resources. The abundance of coal combined with lack of natural gas resources in China is a strong inducement for the coal to chemicals industry pursued there. In the United States, the best example of the industry is Eastman Chemical Company which has been successfully operating a coal - to - chemicals plant at its Kingsport, Tennessee, site since 1983. Similarly, Sasol has built and operated coal - to - chemicals facilities in South Africa. Coal to chemical processes do require substantial quantities of water. As of 2013 much of the coal to chemical production was in the People 's Republic of China where environmental regulation and water management were weak. In North America, Central Appalachian coal futures contracts are currently traded on the New York Mercantile Exchange (trading symbol QL). The trading unit is 1,550 short tons (1,410 t) per contract, and is quoted in U.S. dollars and cents per ton. Since coal is the principal fuel for generating electricity in the United States, coal futures contracts provide coal producers and the electric power industry an important tool for hedging and risk management. In addition to the NYMEX contract, the Intercontinental Exchange (ICE) has European (Rotterdam) and South African (Richards Bay) coal futures available for trading. The trading unit for these contracts is 5,000 tonnes (5,500 short tons), and are also quoted in U.S. dollars and cents per ton.
The price of coal increased from around $30.00 per short ton in 2000 to around $150.00 per short ton as of September 2008. As of October 2008, the price per short ton had declined to $111.50. Prices further declined to $71.25 as of October 2010. In early 2015, it was trading near $56 / ton. The use of coal as fuel causes adverse health impacts and deaths. The deadly London smog was caused primarily by the heavy use of coal. In the United States coal - fired power plants were estimated in 2004 to cause nearly 24,000 premature deaths every year, including 2,800 from lung cancer. Annual health costs in Europe from use of coal to generate electricity are € 42.8 billion, or $55 billion. Yet the disease and mortality burden of coal use today falls most heavily upon China. Breathing in coal dust causes coalworker 's pneumoconiosis, known colloquially as "black lung '', so - called because the coal dust literally turns the lungs black from their usual pink color. In the United States alone, it is estimated that 1,500 former employees of the coal industry die every year from the effects of breathing in coal mine dust. Around 10 % of coal is ash. Coal ash is hazardous and toxic to human beings and other living things. Coal ash contains the radioactive elements uranium and thorium. Coal ash and other solid combustion byproducts are stored locally and escape in various ways that expose those living near coal plants to radiation and environmental toxics. Huge amounts of coal ash and other waste are produced annually. In 2013, the US alone consumed on the order of 983 million short tons of coal. Use of coal on this scale generates hundreds of millions of tons of ash and other waste products every year. These include fly ash, bottom ash, and flue - gas desulfurization sludge, which contain mercury, uranium, thorium, arsenic, and other heavy metals, along with non-metals such as selenium.
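Scaling the roughly 10 % ash content by the US consumption figure gives a rough sense of the tonnage involved. This is illustrative arithmetic only, assuming the 10 % figure applies uniformly; actual ash yields vary by coal type, and the total waste stream also includes scrubber sludge and other byproducts:

```python
# Rough estimate of US coal ash generation from the figures above.
us_coal_tons = 983e6  # 2013 US coal consumption (short tons)
ash_fraction = 0.10   # ~10% of coal is ash (assumed uniform; varies by type)

ash_tons = us_coal_tons * ash_fraction
print(f"Approximate US ash generation: {ash_tons / 1e6:.0f} million short tons")
# prints 98 million short tons of ash, before counting other
# combustion byproducts such as flue-gas desulfurization sludge
```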
The American Lung Association, the American Nurses ' Association, and the Physicians for Social Responsibility released a report in 2009 which details in depth the detrimental impact of the coal industry on human health, including workers in the mines and individuals living in communities near plants burning coal as a power source. This report provides medical information regarding damage to the lungs, heart, and nervous system of Americans caused by the burning of coal as fuel. It details how the air pollution caused by the plume of coal smokestack emissions is a cause of asthma, strokes, reduced intelligence, artery blockages, heart attacks, congestive heart failure, cardiac arrhythmias, mercury poisoning, arterial occlusion, and lung cancer. More recently, the Chicago School of Public Health released a largely similar report, echoing many of the same findings. Though coal burning has increasingly been supplanted by less - toxic natural gas use in recent years, a 2010 study by the Clean Air Task Force still estimated that "air pollution from coal - fired power plants accounts for more than 13,000 premature deaths, 20,000 heart attacks, and 1.6 million lost workdays in the U.S. each year. '' The total monetary cost of these health impacts is over $100 billion annually. A 2017 study in the Economic Journal found that for Britain during the period 1851 -- 1860, "a one standard deviation increase in coal use raised infant mortality by 6 -- 8 % and that industrial coal use explains roughly one - third of the urban mortality penalty observed during this period. '' Coal mining and coal fueling of power station and industrial processes can cause major environmental damage. Water systems are affected by coal mining. For example, mining affects groundwater and water table levels and acidity. Spills of fly ash, such as the Kingston Fossil Plant coal fly ash slurry spill, can also contaminate land and waterways, and destroy homes. 
Power stations that burn coal also consume large quantities of water. This can affect the flows of rivers, and has consequential impacts on other land uses. One of the earliest known impacts of coal on the water cycle was acid rain. Approximately 75 Tg (S) per year of sulfur dioxide (SO2) is released from burning coal. After release, the sulfur dioxide is oxidized to H2SO4, which scatters solar radiation, hence its increase in the atmosphere exerts a cooling effect on climate. This beneficially masks some of the warming caused by increased greenhouse gases. However, the sulfur is precipitated out of the atmosphere as acid rain in a matter of weeks, whereas carbon dioxide remains in the atmosphere for hundreds of years. Release of SO2 also contributes to the widespread acidification of ecosystems. Disused coal mines can also cause issues. Subsidence can occur above tunnels, causing damage to infrastructure or cropland. Coal mining can also cause long - lasting fires, and it has been estimated that thousands of coal seam fires are burning at any given time. For example, there is a coal seam fire in Germany that has been burning since 1668, and is still burning in the 21st century. Some environmental impacts are modest, such as dust nuisance. However, perhaps the largest and most long - term effect of coal use is the release of carbon dioxide, a greenhouse gas that causes climate change and global warming, according to the IPCC and the EPA. Coal is the largest contributor to the human - made increase of CO2 in the atmosphere. The production of coke from coal produces ammonia, coal tar, and gaseous compounds as by - products which if discharged to land, air or waterways can act as environmental pollutants. The Whyalla steelworks is one example of a coke producing facility where liquid ammonia is discharged to the marine environment. In 1999, world gross carbon dioxide emissions from coal usage were 8,666 million tonnes of carbon dioxide.
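Because the 75 Tg figure above is expressed as mass of sulfur, the corresponding mass of SO2 follows directly from the molar masses (S ≈ 32.07 g / mol, O ≈ 16.00 g / mol); this is standard stoichiometry, not a figure from the article:

```python
# Convert an annual sulfur mass to the equivalent SO2 mass via molar masses.
M_S = 32.07            # g/mol, sulfur
M_O = 16.00            # g/mol, oxygen
M_SO2 = M_S + 2 * M_O  # 64.07 g/mol, sulfur dioxide

sulfur_tg_per_year = 75.0  # Tg of sulfur released annually (article figure)
so2_tg_per_year = sulfur_tg_per_year * M_SO2 / M_S
print(f"{so2_tg_per_year:.0f} Tg of SO2 per year")  # prints 150 Tg
```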
In 2011, world gross emissions from coal usage were 14,416 million tonnes. For every megawatt - hour generated, coal - fired electric power generation emits around 2,000 pounds of carbon dioxide, which is almost double the approximately 1100 pounds of carbon dioxide released by a natural gas - fired electric plant. Because of this higher carbon efficiency of natural gas generation, the shift in the United States from coal to natural gas generation has contributed to the fall in carbon dioxide emissions noted above. In 2013, the head of the UN climate agency advised that most of the world 's coal reserves should be left in the ground to avoid catastrophic global warming. "Clean '' coal technology is a collection of technologies being developed to mitigate the environmental impact of coal energy generation. Those technologies are being developed to remove or reduce pollutant emissions to the atmosphere. Some of the techniques that would be used to accomplish this include chemically washing minerals and impurities from the coal, gasification (see also IGCC), improved technology for treating flue gases to remove pollutants to increasingly stringent levels and at higher efficiency, carbon capture and storage technologies to capture the carbon dioxide from the flue gas, and dewatering lower rank coals (brown coals) to improve the calorific value, and thus the efficiency of the conversion into electricity. Figures from the United States Environmental Protection Agency show that these technologies have made today 's coal - based generating fleet 77 percent cleaner on the basis of regulated emissions per unit of energy produced. Clean coal technology usually addresses atmospheric problems resulting from burning coal.
Historically, the primary focus was on SO2 and NOx, the most important gases in causation of acid rain, and particulates which cause visible air pollution and deleterious effects on human health. More recent focus has been on carbon dioxide (due to its impact on global warming) and concern over toxic species such as mercury. Concerns exist regarding the economic viability of these technologies and the timeframe of delivery, potentially high hidden economic costs in terms of social and environmental damage, and the costs and viability of disposing of removed carbon and other toxic matter. Several different technological methods are available for the purpose of carbon capture as demanded by the clean coal concept: The Kemper County IGCC Project, a 582 MW coal gasification - based power plant, will use pre-combustion capture of CO2 to capture 65 % of the CO2 the plant produces, which will be utilized / geologically sequestered in enhanced oil recovery operations. If the technology used at the Kemper Project is successful, it will be the United States ' first clean coal plant. The Saskatchewan Government 's Boundary Dam Power Station Integrated Carbon Capture and Sequestration Demonstration Project will use post-combustion, amine - based scrubber technology to capture 90 % of the CO2 emitted by Unit 3 of the power plant; this CO2 will be pipelined to and utilized for enhanced oil recovery in the Weyburn oil fields. However, only about half of this CO2 will actually be permanently stored; the remainder is released into the atmosphere during capture and processing in the oil field. An early example of a coal - based plant using (oxy - fuel) carbon - capture technology is Swedish company Vattenfall 's Schwarze Pumpe power station located in Spremberg, Germany, built by German firm Siemens, which went on - line in September 2008. The facility captures CO2 and acid rain producing pollutants, separates them, and compresses the CO2 into a liquid.
Plans are to inject the CO2 into depleted natural gas fields or other geological formations. Vattenfall regards this technology not as a final solution for CO2 reduction in the atmosphere, but as an achievable near - term solution while more desirable alternative solutions to power generation can be made economically practical. In 2014 research and development were discontinued due to high costs making the technology unviable. The white rot fungus Trametes versicolor can grow on and metabolize naturally occurring coal. The bacterium Diplococcus has been found to degrade coal, raising its temperature. Coal (by liquefaction technology) is one of the backstop resources that could limit escalation of oil prices and mitigate the effects of transportation energy shortage that will occur under peak oil. This is contingent on liquefaction production capacity becoming large enough to satiate the very large and growing demand for petroleum. Estimates of the cost of producing liquid fuels from coal suggest that domestic U.S. production of fuel from coal becomes cost - competitive with oil priced at around $35 per barrel (the break - even cost). With oil prices as low as around $40 per barrel in the U.S. as of December 2008, liquid coal lost some of its economic allure in the U.S., but will probably be revitalized, similar to oil sand projects, with an oil price around $70 per barrel. In China, due to an increasing need for liquid energy in the transportation sector, coal liquefaction projects were given high priority even during periods of oil prices below $40 per barrel. This is probably because China prefers not to be dependent on foreign oil, instead utilizing its enormous domestic coal reserves. As oil prices were increasing during the first half of 2009, the coal liquefaction projects in China were again boosted, and these projects are profitable with an oil barrel price of $40.
China is the largest producer of coal in the world. It is the world 's largest energy consumer, and relies on coal to supply 69 % of its energy needs. An estimated 5 million people worked in China 's coal - mining industry in 2007. Coal pollution costs the EU € 43 billion each year. Measures to cut air pollution may have beneficial long - term economic impacts for individuals. The energy density of coal, i.e. its heating value, is roughly 24 megajoules per kilogram (approximately 6.7 kilowatt - hours per kg). For a coal power plant with a 40 % efficiency, it takes an estimated 325 kg (717 lb) of coal to power a 100 W lightbulb for one year. As of 2006, the average efficiency of electricity - generating power stations was 31 %; in 2002, coal represented about 23 % of total global energy supply, an equivalent of 3.4 billion tonnes of coal, of which 2.8 billion tonnes were used for electricity generation. The US Energy Information Administration 's 1999 report on CO2 emissions for energy generation quotes an emission factor of 0.963 kg CO2 / kWh for coal power, compared to 0.881 kg CO2 / kWh (oil), or 0.569 kg CO2 / kWh (natural gas). Thousands of coal fires are burning around the world. Those burning underground can be difficult to locate and many cannot be extinguished. Fires can cause the ground above to subside, their combustion gases are dangerous to life, and breaking out to the surface can initiate surface wildfires. Coal seams can be set on fire by spontaneous combustion or contact with a mine fire or surface fire. Lightning strikes are an important source of ignition. The coal continues to burn slowly back into the seam until oxygen (air) can no longer reach the flame front. A grass fire in a coal area can set dozens of coal seams on fire. Coal fires in China burn an estimated 120 million tons of coal a year, emitting 360 million metric tons of CO2, amounting to 2 -- 3 % of the annual worldwide production of CO2 from fossil fuels.
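The 325 kg lightbulb estimate earlier in this section can be reproduced from the quoted energy density and plant efficiency; the small difference from 325 kg comes from rounding the inputs:

```python
# Reproduce the "~325 kg of coal to run a 100 W bulb for a year" estimate.
bulb_watts = 100
hours_per_year = 365 * 24                           # 8760 h
electric_kwh = bulb_watts * hours_per_year / 1000   # 876 kWh delivered

efficiency = 0.40     # plant thermal efficiency (from the article)
heating_value = 6.7   # kWh of heat per kg of coal (~24 MJ/kg, from the article)

coal_kg = electric_kwh / (efficiency * heating_value)
print(f"Coal required: {coal_kg:.0f} kg")
# prints 327 kg, within rounding of the article's 325 kg estimate
```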
In Centralia, Pennsylvania (a borough located in the Coal Region of the United States), an exposed vein of anthracite ignited in 1962 due to a trash fire in the borough landfill, located in an abandoned anthracite strip mine pit. Attempts to extinguish the fire were unsuccessful, and it continues to burn underground to this day. The Australian Burning Mountain was originally believed to be a volcano, but the smoke and ash come from a coal fire that has been burning for some 6,000 years. At Kuh i Malik in Yagnob Valley, Tajikistan, coal deposits have been burning for thousands of years, creating vast underground labyrinths full of unique minerals, some of them very beautiful. Local people once used this method to mine ammoniac. This place has been well - known since the time of Herodotus, but European geographers misinterpreted the Ancient Greek descriptions as the evidence of active volcanism in Turkestan (up to the 19th century, when the Russian army invaded the area). The reddish siltstone rock that caps many ridges and buttes in the Powder River Basin in Wyoming and in western North Dakota is called porcelanite, which resembles the coal burning waste "clinker '' or volcanic "scoria ''. Clinker is rock that has been fused by the natural burning of coal. In the Powder River Basin approximately 27 to 54 billion tons of coal burned within the past three million years. Wild coal fires in the area were reported by the Lewis and Clark Expedition as well as explorers and settlers in the area. In 2006, China was the top producer of coal with 38 % share followed by the United States and India, according to the British Geological Survey. 
As of 2012 coal production in the United States was falling at the rate of 7 % annually, with many power plants using coal shut down or converted to natural gas. Some of the reduced domestic demand was taken up by increased exports, with five coal export terminals proposed in the Pacific Northwest to export coal from the Powder River Basin to China and other Asian markets, although as of 2013 environmental opposition was increasing. High - sulfur coal mined in Illinois which was unsaleable in the United States found a ready market in Asia as exports reached 13 million tons in 2012. The 948 billion short tons of recoverable coal reserves estimated by the Energy Information Administration are equal to about 4,196 BBOE (billion barrels of oil equivalent). The amount of coal burned during 2007 was estimated at 7.075 billion short tons, or 133.179 quadrillion BTUs. This is an average of 18.8 million BTU per short ton. In terms of heat content, this is about 57,000,000 barrels (9,100,000 m³) of oil equivalent per day. By comparison in 2007, natural gas provided 51,000,000 barrels (8,100,000 m³) of oil equivalent per day, while oil provided 85,800,000 barrels (13,640,000 m³) per day. British Petroleum, in its 2007 report, estimated at the end of 2006 that there was a 147 - year reserves - to - production ratio based on proven coal reserves worldwide. This figure only includes reserves classified as "proven ''; exploration drilling programs by mining companies, particularly in under - explored areas, are continually providing new reserves. In many cases, companies are aware of coal deposits that have not been sufficiently drilled to qualify as "proven ''. However, some nations have n't updated their information and assume reserves remain at the same levels even with withdrawals. Of the three fossil fuels, coal has the most widely distributed reserves; coal is mined in over 100 countries, and on all continents except Antarctica. 
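The per - ton average quoted above can be cross - checked directly from the two 2007 figures. A small sketch (the naive lifetime computed at the end is not BP's reserves - to - production methodology, which used different reserve and production baselines, so it differs from the 147 - year figure):

```python
# Cross-checking the averages quoted above.
coal_burned_short_tons = 7.075e9       # coal burned in 2007, short tons
total_heat_btu = 133.179e15            # equivalent heat content, BTU

btu_per_ton = total_heat_btu / coal_burned_short_tons
print(f"{btu_per_ton / 1e6:.1f} million BTU per short ton")   # 18.8

# A naive reserves lifetime at the 2007 burn rate (illustrative only,
# not BP's R/P calculation):
reserves_short_tons = 948e9
print(f"{reserves_short_tons / coal_burned_short_tons:.0f} years")  # ~134
```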
The largest reserves are found in the United States, Russia, China, Australia and India. Note the table below. The reserve life is an estimate based only on current production levels and proved reserves level for the countries shown, and makes no assumptions of future production or even current production trends. Countries with annual production higher than 100 million tonnes are shown. For comparison, data for the European Union is also shown. Shares are based on data expressed in tonnes oil equivalent. Countries with annual consumption higher than 100 million tonnes are shown. For comparison, data for the European Union is also shown. Shares are based on data expressed in tonnes oil equivalent. Countries with annual gross export higher than 10 million tonnes are shown. In terms of net export the largest exporters are still Australia (328.1 million tonnes), Indonesia (316.2) and Russia (100.2). Countries with annual gross import higher than 20 million tonnes are shown. In terms of net import the largest importers are still Japan (206.0 million tonnes), China (172.4) and South Korea (125.8). Coal is the official state mineral of Kentucky and the official state rock of Utah; both U.S. states have a historic link to coal mining. Some cultures hold that children who misbehave will receive only a lump of coal from Santa Claus for Christmas in their Christmas stockings instead of presents. It is also customary and considered lucky in Scotland and the North of England to give coal as a gift on New Year 's Day. This occurs as part of First - Footing and represents warmth for the year to come.
how do you calculate bun to creatinine ratio
BUN - to - creatinine ratio - wikipedia In medicine, the BUN - to - creatinine ratio is the ratio of two serum laboratory values, the blood urea nitrogen (BUN) (mg / dL) and serum creatinine (Cr) (mg / dL). Outside the United States, particularly in Canada and Europe, the truncated term urea is used (though it is still the same blood chemical) and the units are different (mmol / L). The units of creatinine are also different (μmol / L), and this value is termed the urea - to - creatinine ratio. The ratio may be used to determine the cause of acute kidney injury or dehydration. The principle behind this ratio is the fact that both urea (BUN) and creatinine are freely filtered by the glomerulus; however, urea reabsorbed by the tubules can be regulated (increased or decreased) whereas creatinine reabsorption remains the same (minimal reabsorption). Urea and creatinine are nitrogenous end products of metabolism. Urea is the primary metabolite derived from dietary protein and tissue protein turnover. Creatinine is the product of muscle creatine catabolism. Both are relatively small molecules (60 and 113 daltons, respectively) that distribute throughout total body water. In Europe, the whole urea molecule is assayed, whereas in the United States only the nitrogen component of urea (the blood or serum urea nitrogen, i.e., BUN or SUN) is measured. The BUN, then, is roughly one - half (7 / 15 or 0.466) of the blood urea. The normal range of urea nitrogen in blood or serum is 5 to 20 mg / dl, or 1.8 to 7.1 mmol urea per liter. The range is wide because of normal variations due to protein intake, endogenous protein catabolism, state of hydration, hepatic urea synthesis, and renal urea excretion. A BUN of 15 mg / dl would represent significantly impaired function for a woman in the thirtieth week of gestation. 
Her higher glomerular filtration rate (GFR), expanded extracellular fluid volume, and anabolism in the developing fetus contribute to her relatively low BUN of 5 to 7 mg / dl. In contrast, the rugged rancher who eats in excess of 125 g protein each day may have a normal BUN of 20 mg / dl. The normal serum creatinine (sCr) varies with the subject 's body muscle mass and with the technique used to measure it. For the adult male, the normal range is 0.6 to 1.2 mg / dl, or 53 to 106 μmol / L by the kinetic or enzymatic method, and 0.8 to 1.5 mg / dl, or 70 to 133 μmol / L by the older manual Jaffé reaction. For the adult female, with her generally lower muscle mass, the normal range is 0.5 to 1.1 mg / dl, or 44 to 97 μmol / L by the enzymatic method. Multiple methods for analysis of BUN and creatinine have evolved over the years. Most of those in current use are automated and give clinically reliable and reproducible results. There are two general methods for the measurement of urea nitrogen. The diacetyl, or Fearon, reaction develops a yellow chromogen with urea, and this is quantified by photometry. It has been modified for use in autoanalyzers and generally gives relatively accurate results. It still has limited specificity, however, as illustrated by spurious elevations with sulfonylurea compounds, and by colorimetric interference from hemoglobin when whole blood is used. In the more specific enzymatic methods, the enzyme urease converts urea to ammonia and carbonic acid. These products, which are proportional to the concentration of urea in the sample, are assayed in a variety of systems, some of which are automated. One system checks the decrease in absorbance at 340 nm when the ammonia reacts with alpha - ketoglutaric acid. The Astra system measures the rate of increase in conductivity of the solution in which urea is hydrolyzed. Even though the test is now performed mostly on serum, the term BUN is still retained by convention. 
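The 7 / 15 relationship between urea and BUN, and the US / European unit difference, reduce to two one - line conversions. A minimal sketch (the function names are illustrative, not a clinical library):

```python
def bun_from_urea(urea_mg_dl):
    """BUN is the nitrogen fraction of urea: 28 g of N per 60 g of urea (7/15)."""
    return urea_mg_dl * 28 / 60

def urea_mmol_l_from_bun(bun_mg_dl):
    """Convert US-style BUN (mg/dL) to the European urea value (mmol/L).
    Each mmol of urea carries 28 mg of nitrogen, so mmol/L = mg/dL * 10 / 28."""
    return bun_mg_dl * 10 / 28

# The quoted normal range of 5-20 mg/dL BUN maps onto ~1.8-7.1 mmol/L urea:
print(round(urea_mmol_l_from_bun(5), 1))   # 1.8
print(round(urea_mmol_l_from_bun(20), 1))  # 7.1
```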
The specimen should not be collected in tubes containing sodium fluoride because the fluoride inhibits urease. Also chloral hydrate and guanethidine have been observed to increase BUN values. The 1886 Jaffé reaction, in which creatinine is treated with an alkaline picrate solution to yield a red complex, is still the basis of most commonly used methods for measuring creatinine. This reaction is nonspecific and subject to interference from many noncreatinine chromogens, including acetone, acetoacetate, pyruvate, ascorbic acid, glucose, cephalosporins, barbiturates, and protein. It is also sensitive to pH and temperature changes. One or another of the many modifications designed to nullify these sources of error is used in most clinical laboratories today. For example, the recent kinetic - rate modification, which isolates the brief time interval during which only true creatinine contributes to total color formation, is the basis of the Astra modular system. More specific, non-Jaffé assays have also been developed. One of these, an automated dry - slide enzymatic method, measures ammonia generated when creatinine is hydrolyzed by creatinine iminohydrolase. Its simplicity, precision, and speed highly recommend it for routine use in the clinical laboratory. Only 5 - fluorocytosine interferes significantly with the test. Creatinine must be determined in plasma or serum and not whole blood because erythrocytes contain considerable amounts of noncreatinine chromogens. To minimize the conversion of creatine to creatinine, specimens must be as fresh as possible and maintained at pH 7 during storage. The amount of urea produced varies with substrate delivery to the liver and the adequacy of liver function. 
It is increased by a high - protein diet, by gastrointestinal bleeding (based on plasma protein level of 7.5 g / dl and a hemoglobin of 15 g / dl, 500 ml of whole blood is equivalent to 100 g protein), by catabolic processes such as fever or infection, and by antianabolic drugs such as tetracyclines (except doxycycline) or glucocorticoids. It is decreased by low - protein diet, malnutrition or starvation, and by impaired metabolic activity in the liver due to parenchymal liver disease or, rarely, to congenital deficiency of urea cycle enzymes. The normal subject on a 70 g protein diet produces about 12 g of urea each day. This newly synthesized urea distributes throughout total body water. Some of it is recycled through the enterohepatic circulation. Usually, a small amount (less than 0.5 g / day) is lost through the gastrointestinal tract, lungs, and skin; during exercise, a substantial fraction may be excreted in sweat. The bulk of the urea, about 10 g each day, is excreted by the kidney in a process that begins with glomerular filtration. At high urine flow rates (greater than 2 ml / min), 40 % of the filtered load is reabsorbed, and at flow rates lower than 2 ml / min, reabsorption may increase to 60 %. Low flow, as in urinary tract obstruction, allows more time for reabsorption and is often associated with increases in antidiuretic hormone (ADH), which increases the permeability of the terminal collecting tubule to urea. During ADH - induced antidiuresis, urea secretion contributes to the intratubular concentration of urea. The subsequent buildup of urea in the inner medulla is critical to the process of urinary concentration. Reabsorption is also increased by volume contraction, reduced renal plasma flow as in congestive heart failure, and decreased glomerular filtration. Creatinine formation begins with the transamidination from arginine to glycine to form glycocyamine or guanidoacetic acid (GAA). 
This reaction occurs primarily in the kidneys, but also in the mucosa of the small intestine and the pancreas. The GAA is transported to the liver where it is methylated by S - adenosyl methionine (SAM) to form creatine. Creatine enters the circulation, and 90 % of it is taken up and stored by muscle tissue. An elevated BUN: Cr due to a low or low - normal creatinine and a BUN within the reference range is unlikely to be of clinical significance. The ratio is predictive of prerenal injury when BUN: Cr exceeds 20 or when urea: Cr exceeds 100. In prerenal injury, urea increases disproportionately to creatinine due to enhanced proximal tubular reabsorption that follows the enhanced transport of sodium and water. The ratio is useful for the diagnosis of bleeding from the gastrointestinal (GI) tract in patients who do not present with overt vomiting of blood. In children, a BUN: Cr ratio of 30 or greater has a sensitivity of 68.8 % and a specificity of 98 % for upper gastrointestinal bleeding. A common assumption is that the ratio is elevated because of amino acid digestion: blood (excluding water) consists largely of the protein hemoglobin, which is broken down by digestive enzymes of the upper GI tract into amino acids, which are then reabsorbed in the GI tract and broken down into urea. However, elevated BUN: Cr ratios are not observed when other high protein loads (e.g., steak) are consumed. Renal hypoperfusion secondary to the blood lost from the GI bleed has been postulated to explain the elevated BUN: Cr ratio. However, other research has found that renal hypoperfusion can not fully explain the elevation. Because of decreased muscle mass, elderly patients may have an elevated BUN: Cr at baseline. Hypercatabolic states, high - dose glucocorticoids, and resorption of large hematomas have all been cited as causes of a disproportionate rise in BUN relative to the creatinine.
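The cut - offs above can be expressed as a small helper. This is an illustrative sketch of the quoted thresholds only, not a diagnostic tool:

```python
def bun_cr_ratio(bun_mg_dl, creatinine_mg_dl):
    """BUN-to-creatinine ratio, both inputs in mg/dL (US convention)."""
    return bun_mg_dl / creatinine_mg_dl

def interpret(bun_mg_dl, creatinine_mg_dl):
    """Apply the cut-off quoted above: BUN:Cr > 20 suggests a prerenal cause.
    As noted in the text, a high ratio driven only by a low creatinine
    may not be clinically significant."""
    ratio = bun_cr_ratio(bun_mg_dl, creatinine_mg_dl)
    return "suggests prerenal" if ratio > 20 else "within usual range"

# Example: BUN 30 mg/dL with creatinine 1.0 mg/dL gives a ratio of 30.
print(bun_cr_ratio(30, 1.0), interpret(30, 1.0))  # 30.0 suggests prerenal
```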
is there a prize for 2 lucky stars in euromillions
EuroMillions - wikipedia EuroMillions is a transnational lottery requiring 7 correct numbers to win the jackpot. It was launched on 7 February 2004 by France 's Française des Jeux, Spain 's Loterías y Apuestas del Estado and the United Kingdom 's Camelot. The first draw was held on Friday 13 February 2004 in Paris. Initially, only the UK, France and Spain participated, with the Austrian, Belgian, Irish, Luxembourgish, Portuguese and Swiss lotteries joining for the 8 October 2004 drawing. Draws are held every Tuesday and Friday night at 20: 45 CET in Paris. A standard EuroMillions ticket costs € 2.50, £ 2.50 or CHF 3.50 per line played, depending on the local currency. An option called Plus, currently available only in Ireland, adds € 1.00 per line; as of February 2014, a non-optional addition called "My Million '' in France adds € 0.50 per line, while in Portugal it 's called "M1lhão '' and represents € 0.30 of the whole € 2.50 bet. The cost of playing in the UK increased from £ 1.50 to £ 2.00 per line on 7 November 2009, due to the combination of the EUR / GBP exchange rate and an automatic entry in its Millionaire Raffle. From 24 September 2016 the cost per line increased from £ 2.00 to £ 2.50 in the UK. From 24 September 2016 the number of lucky stars changed from a pool of 11 to a pool of 12 numbers, lengthening the jackpot winning odds from about 1 in 117 million to about 1 in 140 million. From 24 September 2016 the cost of entry in Ireland and Spain rose to € 2.50 per line. All prizes, including the jackpot, are tax - free (except in Switzerland, Spain and Portugal since 2013) and are paid as a lump sum. Draws take place at 20: 45 every Tuesday and Friday in Paris. The results are published shortly after the draw on associated and independent websites around 23: 00 hours. To participate in the EuroMillions Lotto, tickets can be purchased from many outlets, namely at licensed stores and online websites. 
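The odds change described above follows directly from the combinatorics: the jackpot requires matching 5 of 50 main numbers plus both lucky stars. A quick sketch:

```python
from math import comb

main = comb(50, 5)              # ways to choose 5 main numbers from 50

# The lucky star pool grew from 11 to 12 numbers on 24 September 2016
odds_before = main * comb(11, 2)
odds_after = main * comb(12, 2)
print(f"1 in {odds_before:,}")  # 1 in 116,531,800  (~1:117 million)
print(f"1 in {odds_after:,}")   # 1 in 139,838,160  (~1:140 million)
```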
The gameplay changed on Tuesday 10 May 2011 with a second weekly draw and the number of "lucky stars '' in the Pacquerette machine increasing from 9 to 11. A prize for matching two main numbers and no lucky stars was also introduced on the same date. On Saturday 24 September 2016 the number of "lucky stars '' increased again, from 11 to 12, with the Lucky Stars drawn from a pool of 12 numbers instead of the old 11. The prize structure as of Tuesday 27 September 2016 is as follows: The booster fund is available to contribute to the jackpot, for example to boost the initial jackpot in a sequence of growing jackpots. The amount utilized each week is determined in advance by the participating lotteries. Effective 7 November 2009 new rules were put in place regarding rollovers. A rule change of 12 January 2012 locks the jackpot cap at € 190,000,000 permanently; if the jackpot is not won after two draws, the prize money is distributed amongst the winners at the next level. A further rule change of 24 September 2016 provides that if the jackpot is not won after five draws, the prize money is distributed amongst the winners at the next level. The participating national lotteries in the EuroMillions game have each established a EuroMillions Trust account. This is used for the settlement of all amounts due and for holding amounts in respect of future prizes. This trust arrangement protects the participating lotteries, and ultimately the players ' interests, from a default by one of the national companies. EuroMillions is the largest European lottery; its origins date back to 1994, when several EU countries first had the idea of launching a multi-European lottery. Super Draws and Event Draws are special drawings when the jackpot is set to a guaranteed amount -- often € 100,000,000. 
The difference being that a Super Draw jackpot will roll over to the next drawing if not won, while an Event Draw jackpot will be distributed amongst the winners in the next lower tier (i.e. match 5 + 1). So far there has not been an Event Draw; to date, jackpots in a Super Draw have rolled over to the next drawing if not won. The first Super Draw of 2011 took place on Tuesday 10 May to mark the introduction of the second weekly EuroMillions draw and changes to the game format (11 lucky stars instead of 9 and a new "match 2 main numbers and no lucky stars '' prize tier). The first Super Draw of 2016 took place on Friday 30 September to introduce the change to the game format (12 lucky stars instead of 11 and increased price). Several Super Draws have been held to date (a € 100,000,000 Super Draw planned for 6 June 2014 was cancelled when the jackpot rolled over to € 105,000,000). (This is a change to the game rules as of 4 April 2011, when the Event Draw was added.) In the UK, the total EuroMillions revenue is broken down as follows: When Chris and Colin Weir, a couple who won the EuroMillions, pledged to donate their prize money to good causes, cyber criminals started using the couple 's name in their email scams to fool the general public and ultimately cheat them of money. In June 2007, with the success of the main EuroMillions game, the Irish National Lottery launched EuroMillions Plus. For an extra € 1 per line, players could enter the additional draw with the top prize each week of € 500,000. Sales of the main EuroMillions in Ireland for 2006 were over € 145 million; this success led to the introduction of ' Plus '. Since November 2009 at least one UK player every week has won a guaranteed £ 1,000,000. With the introduction of the Tuesday EuroMillions draw on Tuesday 10 May 2011 there were 2 Millionaire Raffle winners each week. 
The latest changes to EuroMillions in September 2016 mean that two guaranteed Millionaire Raffle winners are made per draw, or 4 per week across the two draws. According to the EuroMillions website, the chances of winning the UK Millionaire Maker game on a Tuesday can be estimated as 1 in 1,900,000, but can lengthen to 1 in 2,250,000 in the event of rollovers. On a Friday, it can be calculated as 1 in 2,950,000, but again the odds can lengthen to 1 in 3,400,000 in the event of a four - times rollover. Winning in this game depends entirely on the number of playslips sold, so the odds fluctuate. The odds may also fluctuate during a Super Draw or a special event in the UK Millionaire Raffle. Prices per line in the UK increased by 50p to £ 2.00; the 50p was added due to weak exchange rates between the pound and the euro and to cover the expense of the new Millionaire Maker. On 24 September 2016 the price per line in the UK was increased by an additional 50p to £ 2.50.
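Because the Millionaire Maker is a raffle, its odds are simply the number of playslips sold divided by the number of guaranteed winners. A minimal sketch; the 3.8 million sales figure below is back - derived from the quoted odds and is purely illustrative:

```python
def raffle_odds(playslips_sold, winners_per_draw=2):
    """A slip's chance of winning the raffle is winners / playslips, so the
    odds move with ticket sales -- unlike the fixed-odds main draw."""
    return playslips_sold / winners_per_draw

# Illustrative only: ~3.8 million Tuesday playslips with 2 guaranteed winners
# reproduces the roughly 1-in-1,900,000 figure quoted above.
print(f"1 in {raffle_odds(3_800_000):,.0f}")  # 1 in 1,900,000
```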
french brandy produced outside of cognac or armagnac is known as
Armagnac (brandy) - wikipedia Armagnac (/ ˈɑːmənˌjæk /; French pronunciation: (aʁ. maˈɲak)) is a distinctive kind of brandy produced in the Armagnac region in Gascony, southwest France. It is distilled from wine usually made from a blend of grapes including Baco 22A, Colombard, Folle blanche and Ugni blanc, traditionally using column stills rather than the pot stills used in the production of cognac. The resulting spirit is then aged in oak barrels before release. Production is overseen by the Institut national de l'origine et de la qualité (INAO) and the Bureau National Interprofessionel de l'Armagnac (BNIA). Armagnac was one of the first areas in France to begin distilling spirits, but the overall volume of production is far smaller than cognac production and therefore is less known outside Europe. In addition, it is for the most part made and sold by small producers, whereas cognac production is dominated by big - name brands. Armagnac is the oldest brandy distilled in France; and, in the past, it was consumed for its therapeutic benefits. In the 14th century, Prior Vital Du Four, a cardinal, wrote that it had 40 virtues: It makes disappear redness and burning of the eyes, and stops them from tearing; it cures hepatitis, sober consumption adhering. It cures gout, cankers, and fistula by ingestion; restores the paralysed member by massage; and heals wounds of the skin by application. It enlivens the spirit, partaken in moderation, recalls the past to memory, renders men joyous, preserves youth and retards senility. And when retained in the mouth, it loosens the tongue and emboldens the wit, if someone timid from time to time himself permits. Between the 15th and 17th centuries, Armagnac was traded on the markets of Saint - Sever, Mont - de-Marsan, and Aire - sur - l'Adour. Subsequently, Dutch merchants began promoting the trade more widely. 
The French gourmet dish ortolan has traditionally been prepared by force - feeding an ortolan bunting before drowning it in Armagnac and roasting it. The dish is now legally prohibited due to laws protecting the bird. The Armagnac region lies between the Adour and Garonne rivers in the foothills of the Pyrenees. The region was granted AOC status in 1936. The official production area is divided into three districts that lie in the departements of Gers, Landes, and Lot - et - Garonne. The region contains 15,000 hectares (37,000 acres) of grape - producing vines. The Fallières Decree of 25 May 1909 describes the three districts: Each of these areas is controlled by separate appellation regulations. More recently, a new appellation -- "Blanche d'Armagnac '' -- was established to allow the production and export of clear, white brandies that are unaged. Armagnac is traditionally distilled once, resulting in a spirit of about 52 % alcohol. This results in a more fragrant and flavorful spirit than Cognac, where double distillation takes place. Long ageing in oak barrels softens the taste and causes the development of more complex flavours and a brown colour. Ageing in the barrel removes a part of the alcohol and water by evaporation (known as part des anges -- "angels ' tribute '' or "angels ' share '') and allows more complex aromatic compounds to appear by oxidation, which further modifies the flavour. Since alcohol evaporates faster than water, the alcohol degree is naturally reduced by an average of 0.4 % per year depending on the characteristics of the cellars (average temperature and humidity). When the Armagnac is considered matured, it is transferred to large glass bottles (called "Dame Jeanne '') for storage. The main difference between Armagnac and other spirits is that, due to its relatively low alcoholic content, it is generally not diluted with water. Armagnac is sold under several classifications, mostly referring to the age of the constituent brandies. 
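The evaporation figure above implies a simple back - of - the - envelope ageing model. A linear sketch only; as the text notes, real cellar losses vary with temperature and humidity:

```python
def abv_after_ageing(years, initial_abv=52.0, loss_per_year=0.4):
    """Linear sketch of the angels' share effect described above: the alcohol
    degree falls by roughly 0.4 points per year in the barrel (cellar-dependent)."""
    return initial_abv - loss_per_year * years

# A single-distilled Armagnac entering the barrel at about 52 % ABV:
for years in (0, 10, 20):
    print(years, "years:", abv_after_ageing(years), "% ABV")
```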
Armagnac is allowed to be sold under vintages. When Armagnacs of different ages have been blended, the age on the bottle refers to the youngest component. A three - star, or VS, Armagnac is a mix of several Armagnacs that have seen at least two years of ageing in wood. For VSOP, the ageing is at least three years, and for XO, at least six. Hors d'âge means the youngest component in the blend is at least ten years old. Older and better Armagnacs are often sold as vintages, with the bottles containing Armagnac from a single year, the year being noted on the bottle. Ten different varieties of Armagnac grapes are authorised for use in the production of Armagnac. Of these, four are most common: Baco 22A, Colombard, Folle blanche, and Ugni blanc.
when did the internet become available in homes
History of the Internet - Wikipedia The history of the Internet begins with the development of electronic computers in the 1950s. Initial concepts of wide area networking originated in several computer science laboratories in the United States, United Kingdom, and France. The US Department of Defense awarded contracts as early as the 1960s, including for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. The first message was sent over the ARPANET in 1969 from computer science Professor Leonard Kleinrock 's laboratory at University of California, Los Angeles (UCLA) to the second network node at Stanford Research Institute (SRI). Packet switching networks such as the NPL network, ARPANET, Tymnet, Merit Network, CYCLADES, and Telenet, were developed in the late 1960s and early 1970s using a variety of communications protocols. Donald Davies first demonstrated packet switching in 1967 at the National Physical Laboratory (NPL) in the UK, which became a testbed for UK research for almost two decades. The ARPANET project led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks. The Internet protocol suite (TCP / IP) was developed by Robert E. Kahn and Vint Cerf in the 1970s and became the standard networking protocol on the ARPANET, incorporating concepts from the French CYCLADES project directed by Louis Pouzin. In the early 1980s the NSF funded the establishment of national supercomputing centers at several universities, and provided interconnectivity in 1986 with the NSFNET project, which also created network access to the supercomputer sites in the United States from research and education organizations. Commercial Internet service providers (ISPs) began to emerge in the very late 1980s. The ARPANET was decommissioned in 1990. 
Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990, and the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic. In the 1980s, research at CERN in Switzerland by British computer scientist Tim Berners - Lee resulted in the World Wide Web, linking hypertext documents into an information system, accessible from any node on the network. Since the mid-1990s, the Internet has had a revolutionary impact on culture, commerce, and technology, including the rise of near - instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, two - way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking, and online shopping sites. The research and education community continues to develop and use advanced networks such as JANET in the United Kingdom and Internet2 in the United States. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 - Gbit / s, 10 - Gbit / s, or more. The Internet 's takeover of the global communication landscape was almost instant in historical terms: it only communicated 1 % of the information flowing through two - way telecommunications networks in the year 1993, already 51 % by 2000, and more than 97 % of the telecommunicated information by 2007. Today the Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking. 
The concept of data communication -- transmitting data between two different places through an electromagnetic medium such as radio or an electric wire -- pre-dates the introduction of the first computers. Such communication systems were typically limited to point to point communication between two end devices. Telegraph systems and telex machines can be considered early precursors of this kind of communication. The telegraph in the late 19th century was the first fully digital communication system. Fundamental theoretical work in data transmission and information theory was developed by Claude Shannon, Harry Nyquist, and Ralph Hartley in the early 20th century. Early computers had a central processing unit and remote terminals. As the technology evolved, new systems were devised to allow communication over longer distances (for terminals) or with higher speed (for interconnection of local devices) that were necessary for the mainframe computer model. These technologies made it possible to exchange data (such as files) between remote computers. However, the point - to - point communication model was limited, as it did not allow for direct communication between any two arbitrary systems; a physical link was necessary. The technology was also considered unsafe for strategic and military use because there were no alternative paths for the communication in case of an enemy attack. With limited exceptions, the earliest computers were connected directly to terminals used by individual users, typically in the same building or site. Such networks became known as local area networks (LANs). Networking beyond this scope, known as wide area networks (WANs), emerged during the 1950s and became established during the 1960s. J.C.R. 
Licklider, Vice President at Bolt Beranek and Newman, Inc., proposed a global network in his January 1960 paper Man - Computer Symbiosis: A network of such centers, connected to one another by wide - band communication lines (...) the functions of present - day libraries together with anticipated advances in information storage and retrieval and symbiotic functions suggested earlier in this paper In August 1962, Licklider and Welden Clark published the paper "On - Line Man - Computer Communication '' which was one of the first descriptions of a networked future. In October 1962, Licklider was hired by Jack Ruina as director of the newly established Information Processing Techniques Office (IPTO) within DARPA, with a mandate to interconnect the United States Department of Defense 's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network ''. As part of the information processing office 's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at University of California, Berkeley, and one for the Compatible Time - Sharing System project at Massachusetts Institute of Technology (MIT). Licklider 's identified need for inter-networking would become obvious by the apparent waste of resources this caused. For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them... 
I said, oh man, it 's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet. Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus for one of his successors, Robert Taylor, to initiate the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years. The issue of connecting separate physical networks to form one logical network was the first of many problems. Early networks used message switched systems that required rigid routing structures prone to single point of failure. In the 1960s, Paul Baran of the RAND Corporation produced a study of survivable networks for the U.S. military in the event of nuclear war. Information transmitted across Baran 's network would be divided into what he called "message blocks ''. Independently, Donald Davies (National Physical Laboratory, UK), proposed and was the first to put into practice a local area network based on what he called packet switching, the term that would ultimately be adopted. Larry Roberts applied Davies ' concepts of packet switching for the ARPANET wide area network, and sought input from Paul Baran and Leonard Kleinrock. Kleinrock subsequently developed the mathematical theory behind the performance of this technology building on his earlier work on queueing theory. Packet switching is a rapid store and forward networking design that divides messages up into arbitrary packets, with routing decisions made per - packet. It provides better bandwidth utilization and response times than the traditional circuit - switching technology used for telephony, particularly on resource - limited interconnection links. Following discussions with J.C.R. Licklider, Donald Davies became interested in data communications for computer networks. 
At the National Physical Laboratory (NPL) in the United Kingdom in 1965, Davies designed and proposed a national data network based on packet switching. The following year, he described the use of an "interface computer" to act as a router. The proposal was not taken up nationally, but by 1967 a pilot experiment had demonstrated the feasibility of packet-switched networks. By 1969 he had begun building the Mark I packet-switched network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions. In 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986. The NPL network, followed by the ARPANET, were the first two networks in the world to use packet switching, and they were interconnected in the early 1970s.

Robert Taylor was promoted to head of the information processing office at the Defense Advanced Research Projects Agency (DARPA) in June 1966. He intended to realize Licklider's ideas of an interconnected networking system. Bringing in Larry Roberts from MIT, he initiated a project to build such a network. The first ARPANET link was established between the University of California, Los Angeles (UCLA) and the Stanford Research Institute at 22:30 hours on October 29, 1969. "We set up a telephone connection between us and the guys at SRI", Kleinrock said in an interview. "We typed the L and we asked on the phone, 'Do you see the L?' 'Yes, we see the L,' came the response. We typed the O, and we asked, 'Do you see the O.' 'Yes, we see the O.' Then we typed the G, and the system crashed... Yet a revolution had begun." By December 5, 1969, a four-node network was connected by adding the University of Utah and the University of California, Santa Barbara.

Building on ideas developed in ALOHAnet, the ARPANET grew rapidly. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days. ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet protocols and systems.
RFC 1, entitled "Host Software", was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing. ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. The early ARPANET used the Network Control Program (NCP, sometimes Network Control Protocol) rather than TCP/IP. On January 1, 1983, known as flag day, NCP on the ARPANET was replaced by the more flexible and powerful family of TCP/IP protocols, marking the start of the modern Internet.

International collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in 1972, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and Peter Kirstein's research group in the UK, initially at the Institute of Computer Science, London University and later at University College London.

The Merit Network was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development. With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host-to-host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit. In October 1972 connections to the CDC mainframe at Michigan State University in East Lansing completed the triad.
Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet-attached hosts, and eventually TCP/IP; additional public universities in Michigan also joined the network. All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.

The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the early ARPANET design and to support network research generally. It was the first network to make the hosts responsible for reliable delivery of data, rather than the network itself, using unreliable datagrams and associated end-to-end protocol mechanisms. Concepts of this network influenced later ARPANET architecture.

Based on ARPA's research, packet switching network standards were developed by the International Telecommunication Union (ITU) in the form of X.25 and related standards. While using packet switching, X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU standard on X.25 was approved in March 1976. The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong, and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure. Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET.

The first public dial-in networks used asynchronous TTY terminal protocols to reach a concentrator operated in the public network. Some networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy, which also provided communications, content, and entertainment features. Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet, which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.

In 1979, two students at Duke University, Tom Truscott and Jim Ellis, originated the idea of using Bourne shell scripts to transfer news and messages on a serial line UUCP connection with the nearby University of North Carolina at Chapel Hill. Following public release of the software in 1980, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, the ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and Bitnet; all connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.
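UUCP mail and news were forwarded along explicit source routes, "bang paths" that name every intermediate host, so each machine only needed to know its direct neighbours. A minimal sketch of that relaying, using hypothetical hostnames:

```python
def next_hop(bang_path: str):
    """Split a UUCP bang path into (next hop, remaining path).

    Each host only needed to know its direct neighbours; the sender
    spelled out the entire route in the address itself.
    """
    hop, _, rest = bang_path.partition("!")
    return hop, rest

# Hypothetical hostnames: the message is relayed hop by hop to "alice".
path = "hosta!hostb!hostc!alice"
relays = []
while "!" in path:
    hop, path = next_hop(path)
    relays.append(hop)

assert relays == ["hosta", "hostb", "hostc"]
assert path == "alice"  # the final component is the recipient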
Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time), owned both by private individuals and small companies. Sublink Network was possibly one of the first examples of Internet technology advancing through popular, grassroots diffusion.

With so many different network methods, something was needed to unify them. Robert E. Kahn of DARPA and ARPANET recruited Vinton Cerf of Stanford University to work with him on the problem. By 1973, they had worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmermann, Gérard Le Lann and Louis Pouzin (designer of the CYCLADES network) with important work on this design.

The specification of the resulting protocol, RFC 675 ("Specification of Internet Transmission Control Program", by Vinton Cerf, Yogen Dalal and Carl Sunshine, Network Working Group, December 1974), contains the first attested use of the term internet, as a shorthand for internetworking; later RFCs repeat this use, so the word started out as an adjective rather than the noun it is today. With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. DARPA agreed to fund development of prototype software, and after several years of work, the first demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted by the Stanford Research Institute.
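A toy model of this end-to-end principle, in which the network may silently drop datagrams and reliability is recovered entirely by the sending host retransmitting until acknowledged (all names and parameters are illustrative, not TCP itself):

```python
import random

rng = random.Random(42)  # deterministic "network weather" for the sketch

def unreliable_send(packet, loss_rate=0.3):
    """An unreliable datagram network: packets may simply vanish."""
    return None if rng.random() < loss_rate else packet

def reliable_transfer(packets, max_tries=50):
    """Reliability implemented at the hosts: the sender retransmits each
    packet until it is acknowledged, so the network itself stays simple."""
    received = []
    for pkt in packets:
        for _ in range(max_tries):
            delivered = unreliable_send(pkt)
            if delivered is not None:  # receiver ACKs the delivery
                received.append(delivered)
                break
        else:
            raise TimeoutError("no ACK after max_tries retransmissions")
    return received

assert reliable_transfer(["p1", "p2", "p3"]) == ["p1", "p2", "p3"]
```

Because the hosts absorb all the retry logic, the interior of the network can treat every datagram as best-effort, which is what made joining dissimilar networks tractable.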
On November 22, 1977, a three-network demonstration was conducted including the ARPANET, the SRI's Packet Radio Van on the Packet Radio Network, and the Atlantic Packet Satellite Network.

Stemming from the first specifications of TCP in 1974, TCP/IP emerged in mid-to-late 1978 in nearly its final form, as used for the first decades of the Internet, known as "IPv4", which is described in IETF publication RFC 791 (September 1981). IPv4 uses 32-bit addresses, which limits the address space to 2^32 addresses, i.e. 4,294,967,296 addresses. The last remaining blocks of available IPv4 addresses were assigned in January 2011. IPv4 is being replaced by its successor, called "IPv6", which uses 128-bit addresses, providing 2^128 addresses, i.e. 340,282,366,920,938,463,463,374,607,431,768,211,456, a vastly increased address space. The shift to IPv6 is expected to take many years, decades, or perhaps longer, to complete, since the installed base of IPv4 systems was already enormous when the shift began.

The associated standards for IPv4 were published by 1981 as RFCs 791, 792 and 793, and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On January 1, 1983, known as flag day, TCP/IP protocols became the only approved protocol on the ARPANET, replacing the earlier NCP protocol.

After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. Eventually, in July 1975, the network was turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET.
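The IPv4 and IPv6 address-space figures above can be checked directly, and Python's standard ipaddress module makes the two address formats concrete (the sample addresses are from documentation-reserved ranges):

```python
import ipaddress

# 32-bit versus 128-bit address spaces:
assert 2 ** 32 == 4_294_967_296
assert 2 ** 128 == 340_282_366_920_938_463_463_374_607_431_768_211_456

# Parsing one address of each family:
assert ipaddress.ip_address("192.0.2.1").version == 4
assert ipaddress.ip_address("2001:db8::1").version == 6
```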
MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.

The networks based on the ARPANET were government-funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were.

Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE), became heavily involved in Internet research and started development of a successor to ARPANET. In the mid-1980s, all three of these branches developed the first wide area networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet.

NASA developed the TCP/IP-based NASA Science Network (NSN) in the mid-1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center, creating the first multiprotocol wide area network, called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.
In 1981 NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. In 1986, the NSF created NSFNET, a 56 kbit/s backbone to support the NSF-sponsored supercomputing centers. The NSFNET also provided support for the creation of regional research and education networks in the United States, and for the connection of university and college campus networks to the regional networks. The use of NSFNET and the regional networks was not limited to supercomputer users, and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a cooperative agreement with the Merit Network in partnership with IBM, MCI, and the State of Michigan. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990. NSFNET was expanded and upgraded to 45 Mbit/s in 1991, and was decommissioned in 1995 when it was replaced by backbones operated by several commercial Internet Service Providers.

The term "internet" was adopted in the first RFC published on the TCP protocol (RFC 675: Internet Transmission Control Program, December 1974) as an abbreviation of the term internetworking, and the two terms were used interchangeably. In general, an internet was any network using TCP/IP. Around the time the ARPANET was interlinked with NSFNET in the late 1980s, the term came to be used as the name of the network: the Internet, the single large, global TCP/IP network.

As interest in networking grew and new applications for it were developed, the Internet's technologies spread throughout the rest of the world. The network-agnostic approach in TCP/IP meant that it was easy to use any existing network infrastructure, such as the IPSS X.25 network, to carry Internet traffic.
In 1982, a year before the ARPANET itself made the transition, University College London replaced its transatlantic satellite links with TCP/IP over IPSS. Many sites unable to link directly to the Internet created simple gateways for the transfer of electronic mail, the most important application of the time. Sites with only intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple mail peering, such as allowing access to File Transfer Protocol (FTP) sites via UUCP or mail.

Finally, routing technologies were developed for the Internet to remove the remaining centralized routing aspects. The Exterior Gateway Protocol (EGP) was replaced by a new protocol, the Border Gateway Protocol (BGP). This provided a meshed topology for the Internet and reduced the centric architecture which ARPANET had emphasized. In 1994, Classless Inter-Domain Routing (CIDR) was introduced to support better conservation of address space, allowing route aggregation to decrease the size of routing tables.

Between 1984 and 1988 CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs and an accelerator control system. CERN continued to operate a limited self-developed system (CERNET) internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989. In 1988, Daniel Karrenberg, from Centrum Wiskunde & Informatica (CWI) in Amsterdam, visited Ben Segal, CERN's TCP/IP Coordinator, looking for advice about the transition of the European side of the UUCP Usenet network (much of which ran over X.25 links) over to TCP/IP.
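The route aggregation that CIDR enables, mentioned above, can be demonstrated with Python's standard ipaddress module (the prefixes are illustrative, drawn from private address space):

```python
import ipaddress

# Four contiguous /24 route entries:
routes = [ipaddress.ip_network(f"10.0.{i}.0/24") for i in range(4)]

# CIDR lets a router advertise them as a single aggregate, shrinking
# the routing table from four entries to one:
aggregated = list(ipaddress.collapse_addresses(routes))
assert aggregated == [ipaddress.ip_network("10.0.0.0/22")]

# Longest-prefix matching still covers every original destination:
assert ipaddress.ip_address("10.0.3.7") in aggregated[0]
```

This is the mechanism by which CIDR slowed routing-table growth: many specific prefixes collapse into one shorter prefix without losing reachability.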
In 1987, Ben Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks, and in 1989 CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out coordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.

At the same time as the rise of internetworking in Europe, ad hoc networking formed between Australian universities, and from them to ARPA, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP-based network for Australia.

The Internet began to penetrate Asia in the 1980s. In May 1982 South Korea became the second country to successfully set up a TCP/IP IPv4 network. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNET in 1989. It hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.

While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet.
On an essentially continental basis, developing countries are building organizations for Internet resource administration and sharing operational experience, as more and more transmission facilities go into place. At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications. In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now known as InfoCom, and NSN Network Services of Avon, Colorado, sold in 1997 and now known as Clear Channel Satellite, established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence using a private network from NSN's leased ground station in New Jersey. InfoCom's first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems.

In 1996, a USAID-funded project, the Leland Initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Ivory Coast and Benin in 1998.

Africa is building an Internet infrastructure. AfriNIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists. There are many programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between the New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.
The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the continent. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT). South Korea's first Internet system, the System Development Network (SDN), began operation on 15 May 1982. SDN was connected to the rest of the world in August 1983 using UUCP (Unix-to-Unix Copy); connected to CSNET in December 1984; and formally connected to the U.S. Internet in 1990. In 1991, the People's Republic of China saw its first TCP/IP college network, Tsinghua University's TUNET. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and the Stanford Linear Accelerator Center. However, China went on to implement its own digital divide by implementing a country-wide content filter.

As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.

Initially, as with its predecessor networks, the system that would evolve into the Internet was primarily for government and government-body use. However, interest in commercial use of the Internet quickly became a commonly debated topic. Although commercial use was forbidden, the exact definition of commercial use was unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNET connections. (Some UUCP links still remained connecting to these networks, however, as administrators turned a blind eye to their operation.) As a result, during the late 1980s, the first Internet service provider (ISP) companies were formed.
Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet news to the public. The first commercial dial-up ISP in the United States was The World, which opened in 1989.

In 1992, the U.S. Congress passed the Scientific and Advanced-Technology Act, 42 U.S.C. § 1862(g), which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks. This caused controversy within the research and education community, who were concerned that commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations.

By 1990, ARPANET's goals had been fulfilled, new networking technologies exceeded the original scope, and the project came to a close. New network service providers, including PSINet, Alternet, CERFNet, ANS CO+RE, and many others, were offering network access to commercial customers. NSFNET was no longer the de facto backbone and exchange point of the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area Exchanges (MAEs), and later Network Access Points (NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service and the service ended. NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs.
NSF also sponsored the very high speed Backbone Network Service (vBNS), which continued to provide support for the supercomputing centers and research and education in the United States.

The World Wide Web (sometimes abbreviated "www" or "W3") is an information space where documents and other web resources are identified by URIs, interlinked by hypertext links, and can be accessed via the Internet using a web browser and (more recently) web-based applications. It has become known simply as "the Web". As of the 2010s, the World Wide Web is the primary tool billions use to interact on the Internet, and it has changed people's lives immeasurably.

Precursors to the web browser emerged in the form of hyperlinked applications during the mid and late 1980s (the bare concept of hyperlinking had by then existed for some decades). Following these, Tim Berners-Lee is credited with inventing the World Wide Web in 1989 and developing in 1990 both the first web server and the first web browser, called WorldWideWeb (no spaces) and later renamed Nexus. Many others were soon developed, with Marc Andreessen's 1993 Mosaic (later Netscape) being particularly easy to use and install, and often credited with sparking the Internet boom of the 1990s. Today, the major web browsers are Firefox, Internet Explorer, Google Chrome, Opera and Safari.

A boost in web users was triggered in September 1993 by NCSA Mosaic, a graphical browser which eventually ran on several popular office and home computers. This was the first web browser aiming to bring multimedia content to non-technical users, and it therefore included images and text on the same page, unlike previous browser designs. Its lead developer, Marc Andreessen, also established the company that in 1994 released Netscape Navigator, which set off one of the early browser wars: a competition for dominance (which Netscape lost) with Microsoft Windows' Internet Explorer. Commercial use restrictions were lifted in 1995.
The online service America Online (AOL) offered its users a connection to the Internet via its own internal browser. During the first decade or so of the public Internet, the immense changes it would eventually enable in the 2000s were still nascent. To provide context for this period: mobile cellular devices ("smartphones" and other cellular devices), which today provide near-universal access, were used for business and were not a routine household item owned by parents and children worldwide. Social media in the modern sense had yet to come into existence, laptops were bulky, and most households did not have computers. Data rates were slow and most people lacked the means to record or digitize video; media storage was transitioning slowly from analog tape to digital optical discs (DVD, and to an extent still, floppy disc to CD). Enabling technologies used from the early 2000s, such as PHP, modern JavaScript and Java, AJAX, HTML 4 (and its emphasis on CSS), and the various software frameworks that enabled and simplified web development, largely awaited invention and their eventual widespread adoption.

The Internet was widely used for mailing lists, email, e-commerce and early popular online shopping (Amazon and eBay for example), online forums and bulletin boards, and personal websites and blogs, and use was growing rapidly, but by more modern standards the systems used were static and lacked widespread social engagement. It awaited a number of events in the early 2000s to develop from a communications technology into a key part of global society's infrastructure.
Typical design elements of these "Web 1.0" era websites included: static pages instead of dynamic HTML; content served from filesystems instead of relational databases; pages built using Server Side Includes or CGI instead of a web application written in a dynamic programming language; HTML 3.2-era structures such as frames and tables to create page layouts; online guestbooks; overuse of GIF buttons and similar small graphics promoting particular items; and HTML forms sent via email. (Support for server-side scripting was rare on shared servers, so the usual feedback mechanism was via email, using mailto forms and the visitor's email program.)

During the period 1997 to 2001, the first speculative investment bubble related to the Internet took place, in which "dot-com" companies (referring to the ".com" top-level domain used by businesses) were propelled to exceedingly high valuations as investors rapidly stoked stock values, followed by a market crash: the first dot-com bubble. However, this only temporarily slowed enthusiasm and growth, which quickly recovered and continued.

The changes that would propel the Internet into its place as a social system took place during a relatively short period of no more than five years, starting from around 2004, with further changes following from approximately 2007-2008 onward. With the rise of Web 2.0, the period up to around 2004-2005 was retrospectively named and described by some as Web 1.0.

The term "Web 2.0" describes websites that emphasize user-generated content (including user-to-user interaction), usability, and interoperability. It first appeared in a January 1999 article called "Fragmented Future" written by Darcy DiNucci, a consultant on electronic information design. The term resurfaced during 2002-2004, and gained prominence in late 2004 following presentations by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference.
In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you". They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value.

Web 2.0 does not refer to an update to any technical specification, but rather to cumulative changes in the way Web pages are made and used. Web 2.0 describes an approach in which sites focus substantially upon allowing users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community, in contrast to websites where people are limited to the passive viewing of content. Examples of Web 2.0 include social networking sites, blogs, wikis, folksonomies, video sharing sites, hosted services, Web applications, and mashups. Terry Flew, in the third edition of New Media, described what he believed characterizes the differences between Web 1.0 and Web 2.0. This era saw several household names gain prominence through their community-oriented operation, YouTube, Twitter, Facebook, Reddit and Wikipedia being some examples.

The process of change generally described as "Web 2.0" was itself greatly accelerated and transformed only a short time later by the increasing growth in mobile devices. This mobile revolution meant that computers in the form of smartphones became something many people used, took with them everywhere, communicated with, used for photographs and videos they instantly shared, or used to shop or seek information "on the move", and used socially, as opposed to items on a desk at home or just used for work.
Location-based services, services using location and other sensor information, and crowdsourcing (frequently but not always location based) became common, with posts tagged by location, or websites and services becoming location aware. Mobile-targeted websites (such as "m.website.com") became common, designed especially for the new devices used. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile chips capable of running at nearly the power of desktops from not many years before on far lower power usage, became enablers of this stage of Internet development, and the term "app" emerged (short for "application program"), as did the "app store". The first Internet link into low Earth orbit was established on January 22, 2010, when astronaut T.J. Creamer posted the first unassisted update to his Twitter account from the International Space Station, marking the extension of the Internet into space. (Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station's high-speed Ku-band microwave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth using Voice over IP equipment. Communication with spacecraft beyond Earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s NASA and Google began working on a new network protocol, Delay-Tolerant Networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space weather disrupts the connection.
Under such conditions, DTN retransmits data packets instead of dropping them, as the standard TCP/IP protocols do. NASA conducted the first field test of what it calls the "deep space internet" in November 2008. Testing of DTN-based communications between the International Space Station and Earth (now termed Disruption-Tolerant Networking) has been ongoing since March 2009, and was scheduled to continue until March 2014. This network technology is supposed to ultimately enable missions that involve multiple spacecraft, where reliable inter-vessel communication might take precedence over vessel-to-Earth downlinks. According to a February 2011 statement by Google's Vint Cerf, the so-called "Bundle protocols" have been uploaded to NASA's EPOXI mission spacecraft (which is in orbit around the Sun), and communication with Earth has been tested at a distance of approximately 80 light seconds. As a globally distributed network of voluntarily interconnected autonomous networks, the Internet operates without a central governing body. It has no centralized governance for either technology or policies, and each constituent network chooses what technologies and protocols it will deploy from the voluntary technical standards that are developed by the Internet Engineering Task Force (IETF). However, throughout its entire history, the Internet system has had an "Internet Assigned Numbers Authority" (IANA) for the allocation and assignment of various technical identifiers needed for the operation of the Internet. The Internet Corporation for Assigned Names and Numbers (ICANN) provides oversight and coordination for two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System.
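The store-and-forward behavior that distinguishes DTN from a drop-on-failure link can be sketched in a few lines. This is a toy model only: the function names, the one-bundle-per-contact-window scheduling, and the data are invented for illustration and bear no relation to the real Bundle Protocol API.

```python
# Toy store-and-forward sketch of the DTN idea: when the link is down,
# a node keeps its bundles queued and retries at the next contact
# window, instead of losing the data as a dropped packet would be.
# All names and the scheduling model here are illustrative.

def deliver(bundles, link_up_schedule):
    """Try to forward one queued bundle per time step while the link is up."""
    delivered, queue = [], list(bundles)
    for link_up in link_up_schedule:
        if link_up and queue:
            delivered.append(queue.pop(0))  # contact window: forward a bundle
        # link down: bundles simply stay queued rather than being dropped
    return delivered, queue

done, pending = deliver(["b1", "b2", "b3"], [False, True, False, True])
print(done, pending)  # two contact windows, so two bundles get through
```

The point of the sketch is only the asymmetry: an intermittent link delays delivery but never discards data, which is what makes the scheme usable when a spacecraft disappears behind the Moon mid-transfer.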
The IANA function was originally performed by the USC Information Sciences Institute (ISI), which delegated portions of this responsibility with respect to numeric network and autonomous system identifiers to the Network Information Center (NIC) at Stanford Research Institute (SRI International) in Menlo Park, California. ISI's Jonathan Postel managed the IANA, served as RFC Editor and performed other key roles until his premature death in 1998. As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by ISI's Paul Mockapetris in 1983. The Defense Data Network – Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract. In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., which subcontracted it to the small private-sector Network Solutions, Inc. The increasing cultural diversity of the Internet also posed administrative challenges for centralized management of IP addresses. In October 1992, the Internet Engineering Task Force (IETF) published RFC 1366, which described the "growth of the Internet and its increasing globalization" and set out the basis for an evolution of the IP registry process, based on a regionally distributed registry model. This document stressed the need for a single Internet number registry to exist in each geographical region of the world (which would be of "continental dimensions").
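The reason the HOSTS.TXT approach became cumbersome is visible in even a minimal model of it: every host must hold a complete, current copy of the table, so every change must be redistributed to everyone. The sketch below is illustrative only; the file format and host names are invented and do not reproduce the real NIC table.

```python
# Illustrative sketch of the flat host-table model the early ARPANET
# used before DNS. Format and names are hypothetical.

def parse_hosts(text):
    """Parse 'address hostname' lines into a name -> address table."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        addr, name = line.split()[:2]
        table[name.lower()] = addr
    return table

HOSTS_TXT = """
# host table: distributed in full to every host on the network
10.0.0.5   sri-nic
10.0.0.9   ucla-test
"""

hosts = parse_hosts(HOSTS_TXT)
print(hosts["sri-nic"])
```

DNS replaced this single flat file with a hierarchy of delegated zones, so that a change under one domain requires updating only that zone's servers rather than reshipping one global table to every host.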
Registries would be "unbiased and widely recognized by network providers and subscribers" within their region. The RIPE Network Coordination Centre (RIPE NCC) was established as the first RIR in May 1992. The second RIR, the Asia Pacific Network Information Centre (APNIC), was established in Tokyo in 1993, as a pilot project of the Asia Pacific Networking Group. Since at this point in history most of the growth of the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocation of addresses and management of the address databases, and awarded the contract to three organizations: registration services would be provided by Network Solutions; directory and database services would be provided by AT&T; and information services would be provided by General Atomics. Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and the Federal Networking Council (FNC), the decision was made to separate the management of domain names from the management of IP numbers. Following the examples of RIPE NCC and APNIC, it was recommended that management of the IP address space then administered by the InterNIC should be under the control of those that use it, specifically the ISPs, end-user organizations, corporate entities, universities, and individuals. As a result, the American Registry for Internet Numbers (ARIN) was established in December 1997 as an independent, not-for-profit corporation by direction of the National Science Foundation, and became the third Regional Internet Registry.
In 1998, both the IANA and remaining DNS-related InterNIC functions were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. As these tasks involved technical coordination for two principal Internet name spaces (DNS names and IP addresses) created by the IETF, ICANN also signed a memorandum of understanding with the IAB to define the technical work to be carried out by the Internet Assigned Numbers Authority. The management of Internet address space remained with the regional Internet registries, which collectively were defined as a supporting organization within the ICANN structure. ICANN provides central coordination for the DNS system, including policy coordination for the split registry/registrar system, with competition among registry service providers to serve each top-level domain and multiple competing registrars offering DNS services to end users. The Internet Engineering Task Force (IETF) is the largest and most visible of several loosely related ad hoc groups that provide technical direction for the Internet, including the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). The IETF is a loosely self-organized group of international volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the work of the IETF is organized into Working Groups. Standardization efforts of the Working Groups are often adopted by the Internet community, but the IETF does not control or patrol the Internet. The IETF grew out of quarterly meetings of U.S. government-funded researchers, starting in January 1986. Non-government representatives were invited beginning with the fourth IETF meeting in October 1986.
The concept of Working Groups was introduced at the fifth meeting in February 1987. The seventh meeting, in July 1987, was the first with more than one hundred attendees. In 1992, the Internet Society, a professional membership society, was formed, and the IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, the Netherlands, in July 1993. Today, the IETF meets three times per year and attendance has been as high as about 2,000 participants. Typically one in three IETF meetings is held in Europe or Asia. The proportion of non-US attendees is typically about 50%, even at meetings held in the United States. The IETF is not a legal entity, has no governing board, no members, and no dues. The closest status resembling membership is being on an IETF or Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of the Internet Engineering Steering Group (IESG) and the Internet Architecture Board (IAB). The Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer-term research issues. Requests for Comments (RFCs) are the main documentation for the work of the IAB, IESG, IETF, and IRTF. RFC 1, "Host Software", was written by Steve Crocker at UCLA in April 1969, well before the IETF was created. Originally they were technical memos documenting aspects of ARPANET development, and were edited by Jon Postel, the first RFC Editor. RFCs cover a wide range of information, from proposed standards, draft standards, full standards and best practices to experimental protocols, history, and other informational topics. RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group.
Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information will be re-published as a new RFC that "obsoletes" the original. The Internet Society (ISOC) is an international, nonprofit organization founded in 1992 "to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". With offices near Washington, DC, USA, and in Geneva, Switzerland, ISOC has a membership base comprising more than 80 organizational and more than 50,000 individual members. Members also form "chapters" based on either common geographical location or special interests. There are currently more than 90 chapters around the world. ISOC provides financial and organizational support to, and promotes the work of, the standards-setting bodies for which it is the organizational home: the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). ISOC also promotes understanding and appreciation of the Internet model of open, transparent processes and consensus-based decision-making. Since the 1990s, the Internet's governance and organization has been of global importance to governments, commerce, civil society, and individuals. The organizations which held control of certain technical aspects of the Internet were the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network.
While recognized as the administrators of certain aspects of the Internet, their roles and their decision-making authority are limited and subject to increasing international scrutiny and increasing objections. These objections led ICANN first to end its relationship with the University of Southern California in 2000, and then, in September 2009, to gain autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the U.S. Department of Commerce continued. Finally, on October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA), allowing oversight to pass to the global Internet community. The IETF, with financial and organizational support from the Internet Society, continues to serve as the Internet's ad hoc standards body and issues Requests for Comments. In November 2005, the World Summit on the Information Society, held in Tunis, called for an Internet Governance Forum (IGF) to be convened by the United Nations Secretary-General. The IGF opened an ongoing, non-binding conversation among stakeholders representing governments, the private sector, civil society, and the technical and academic communities about the future of Internet governance. The first IGF meeting was held in October/November 2006, with follow-up meetings annually thereafter. Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues. Due to its prominence and immediacy as an effective means of mass communication, the Internet has also become more politicized as it has grown. This has led, in turn, to discourses and activities that would once have taken place in other ways migrating to being mediated by the Internet.
Examples include political activities such as public protest and the canvassing of support and votes. On April 23, 2014, the Federal Communications Commission (FCC) was reported to be considering a new rule that would permit Internet service providers to offer content providers a faster track to send content, thus reversing their earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On May 15, 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunications service, thereby preserving net neutrality. On November 10, 2014, President Obama recommended the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On January 16, 2015, Republicans presented legislation, in the form of a U.S. Congress H.R. discussion draft bill, that makes concessions to net neutrality but prohibits the FCC from accomplishing the goal or enacting any further regulation affecting Internet service providers (ISPs). On January 31, 2015, AP News reported that the FCC would present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 to the Internet in a vote expected on February 26, 2015. Adoption of this notion would reclassify Internet service from one of information to one of telecommunications and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC was expected to enforce net neutrality in its vote, according to The New York Times. On February 26, 2015, the FCC ruled in favor of net neutrality by applying Title II (common carrier) of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet.
The FCC Chairman, Tom Wheeler, commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept." On March 12, 2015, the FCC released the specific details of the net neutrality rules. On April 13, 2015, the FCC published the final rule on its new "net neutrality" regulations. On December 14, 2017, the FCC repealed its March 12, 2015 decision by a 3–2 vote regarding net neutrality rules. E-mail has often been called the killer application of the Internet. It predates the Internet, and was a crucial tool in creating it. Email started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is undocumented, among the first systems to have such a facility were the System Development Corporation (SDC) Q32 and the Compatible Time-Sharing System (CTSS) at MIT. The ARPANET computer network made a large contribution to the evolution of electronic mail. Experimental inter-system mail transfer began on the ARPANET shortly after its creation. In 1971 Ray Tomlinson created what was to become the standard Internet electronic mail addressing format, using the @ sign to separate mailbox names from host names. A number of protocols were developed to deliver messages among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET email system. Email could be passed this way between a number of networks, including ARPANET, BITNET and NSFNET, as well as to hosts connected directly to other sites via UUCP. See the history of the SMTP protocol. In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics.
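Tomlinson's mailbox@host convention mentioned above is still how an email address is pulled apart today. A minimal parser, with a hypothetical address chosen for illustration; splitting on the last "@" sidesteps the later RFC subtlety of quoted local parts containing "@", which this sketch otherwise ignores.

```python
# Minimal parse of Tomlinson's mailbox@host addressing convention.
# rpartition splits on the LAST '@', so a local part containing '@'
# (legal only when quoted, per much later mail RFCs) stays intact.

def split_address(addr):
    mailbox, sep, host = addr.rpartition("@")
    if not sep or not mailbox or not host:
        raise ValueError("not a mailbox@host address: %r" % addr)
    return mailbox, host

print(split_address("tomlinson@bbn-tenexa"))  # hypothetical address
```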
On ARPANET and NSFNET similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list). During the early years of the Internet, email and similar mechanisms were also fundamental to allowing people to access resources that were not available due to the absence of online connectivity. UUCP was often used to distribute files using the "alt.binaries" groups. Also, FTP e-mail gateways allowed people who lived outside the US and Europe to download files using FTP commands written inside email messages. The file was encoded, broken into pieces and sent by email; the receiver had to reassemble and decode it later, and this was the only way for people living overseas to download items such as the early Linux versions using the slow dial-up connections available at the time. After the popularization of the Web and the HTTP protocol, such tools were slowly abandoned. As the Internet grew through the 1980s and early 1990s, many people realized the increasing need to be able to find and organize files and information. Projects such as Archie, Gopher, WAIS, and the FTP Archive list attempted to create ways to organize distributed data. In the early 1990s, Gopher, invented by Mark P. McCahill, offered a viable alternative to the World Wide Web. However, in 1993 the World Wide Web saw many advances to indexing and ease of access through search engines, which often neglected Gopher and Gopherspace. As popularity increased through ease of use, investment incentives also grew until in the middle of 1994 the WWW's popularity gained the upper hand. It then became clear that Gopher and the other projects were doomed to fall short. One of the most promising user interface paradigms during this period was hypertext.
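The encode, split, and reassemble workflow of those FTP-by-email gateways can be sketched directly. The chunk size and framing below are invented for illustration (real gateways used encodings such as uuencode and mail-size limits of the day); only the round-trip idea is the point.

```python
import base64

# Sketch of the ftp-by-email workflow: a binary file is encoded to
# text, split into mail-sized pieces sent as separate messages, then
# reassembled and decoded by the recipient. Chunk size is invented.

CHUNK = 60  # pretend per-message payload limit, in characters

def encode_and_split(data):
    text = base64.b64encode(data).decode("ascii")
    return [text[i:i + CHUNK] for i in range(0, len(text), CHUNK)]

def reassemble(parts):
    return base64.b64decode("".join(parts))

payload = b"pretend this is an early Linux source tarball" * 4
parts = encode_and_split(payload)
assert reassemble(parts) == payload  # round-trip survives the split
print(len(parts), "messages")
```

The fragility the text hints at is visible here too: lose or reorder one message and the reassembled file is garbage, which is part of why these tools faded once direct HTTP downloads became available.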
The technology had been inspired by Vannevar Bush's "Memex" and developed through Ted Nelson's research on Project Xanadu and Douglas Engelbart's research on NLS. Many small self-contained hypertext systems had been created before, such as Apple Computer's HyperCard (1987). Gopher became the first commonly used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way. In 1989, while working at CERN, Tim Berners-Lee invented a network-based implementation of the hypertext concept. By releasing his invention to public use, he ensured the technology would become widespread. For his work in developing the World Wide Web, Berners-Lee received the Millennium Technology Prize in 2004. One early popular web browser, modeled after HyperCard, was ViolaWWW. A turning point for the World Wide Web began with the introduction of the Mosaic web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, also known as the Gore Bill. (Gore's reference to his role in "creating the Internet", however, was ridiculed in his presidential election campaign; see the full article Al Gore and information technology.) Mosaic's graphical interface soon became more popular than Gopher, which at the time was primarily text-based, and the WWW became the preferred interface for accessing the Internet. Mosaic was superseded in 1994 by Andreessen's Netscape Navigator, which replaced Mosaic as the world's most popular browser. While it held this title for some time, eventually competition from Internet Explorer and a variety of other browsers almost completely displaced it.
Another important event, held on January 11, 1994, was the Superhighway Summit at UCLA's Royce Hall. This was the "first public conference bringing together all of the major industry, government and academic leaders in the field (and) also began the national dialogue about the Information Superhighway and its implications." 24 Hours in Cyberspace, "the largest one-day online event" (February 8, 1996) up to that date, took place on the then-active website cyber24.com. It was headed by photographer Rick Smolan. A photographic exhibition was unveiled at the Smithsonian Institution's National Museum of American History on January 23, 1997, featuring 70 photos from the project. Even before the World Wide Web, there were search engines that attempted to organize the Internet. The first of these was the Archie search engine from McGill University in 1990, followed in 1991 by WAIS and Gopher. All three of those systems predated the invention of the World Wide Web, but all continued to index the Web and the rest of the Internet for several years after the Web appeared. There are still Gopher servers as of 2006, although there are a great many more web servers. As the Web grew, search engines and Web directories were created to track pages on the Web and allow people to find things. The first full-text Web search engine was WebCrawler in 1994. Before WebCrawler, only Web page titles were searched. Another early search engine, Lycos, was created in 1993 as a university project, and was the first to achieve commercial success. During the late 1990s, both Web directories and Web search engines were popular; Yahoo! (founded 1994) and AltaVista (founded 1995) were the respective industry leaders. By August 2001, the directory model had begun to give way to search engines, tracking the rise of Google (founded 1998), which had developed new approaches to relevancy ranking. Directory features, while still commonly available, became afterthoughts to search engines.
Database size, which had been a significant marketing feature through the early 2000s, was similarly displaced by emphasis on relevancy ranking, the methods by which search engines attempt to sort the best results first. Relevancy ranking first became a major issue circa 1996, when it became apparent that it was impractical to review full lists of results. Consequently, algorithms for relevancy ranking have continuously improved. Google's PageRank method for ordering the results has received the most press, but all major search engines continually refine their ranking methodologies with a view toward improving the ordering of results. As of 2006, search engine rankings are more important than ever, so much so that an industry has developed ("search engine optimizers", or "SEO") to help web developers improve their search ranking, and an entire body of case law has developed around matters that affect search engine rankings, such as the use of trademarks in metatags. The sale of search rankings by some search engines has also created controversy among librarians and consumer advocates. On June 3, 2009, Microsoft launched its new search engine, Bing. The following month Microsoft and Yahoo! announced a deal in which Bing would power Yahoo! Search. Resource or file sharing has been an important activity on computer networks from well before the Internet was established, and was supported in a variety of ways including bulletin board systems (1978), Usenet (1980), Kermit (1981), and many others. The File Transfer Protocol (FTP) for use on the Internet was standardized in 1985 and is still in use today. A variety of tools were developed to aid the use of FTP by helping users discover files they might want to transfer, including the Wide Area Information Server (WAIS) in 1991, Gopher in 1991, Archie in 1991, Veronica in 1992, Jughead in 1993, Internet Relay Chat (IRC) in 1988, and eventually the World Wide Web (WWW) in 1991 with Web directories and Web search engines.
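The link-based ranking idea behind PageRank, mentioned above, can be sketched as a short power iteration: a page's score is fed by the scores of the pages linking to it. The graph, damping factor, and iteration count below are illustrative, not Google's production algorithm.

```python
# Minimal power-iteration sketch of the PageRank idea: rank flows
# along links, damped by a constant "random surfer" factor.
# Graph and parameters are invented for illustration.

def pagerank(links, damping=0.85, iters=50):
    """links: {page: [pages it links to]}. Returns {page: score}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank everywhere
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
r = pagerank(graph)
print(max(r, key=r.get))  # "a": it is linked by both "b" and "c"
```

The key property, visible even in this toy graph, is that a link from a highly ranked page is worth more than a link from an obscure one, which is what made link analysis harder to game than keyword matching alone.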
In 1999, Napster became the first peer-to-peer file sharing system. Napster used a central server for indexing and peer discovery, but the storage and transfer of files was decentralized. A variety of peer-to-peer file sharing programs and services with different levels of decentralization and anonymity followed, including Gnutella, eDonkey2000, and Freenet in 2000; FastTrack, Kazaa, Limewire, and BitTorrent in 2001; and Poisoned in 2003. All of these tools are general purpose and can be used to share a wide variety of content, but the sharing of music files, software, and later movies and videos was a major use. And while some of this sharing is legal, large portions are not. Lawsuits and other legal actions caused Napster in 2001, eDonkey2000 in 2005, Kazaa in 2006, and Limewire in 2010 to shut down or refocus their efforts. The Pirate Bay, founded in Sweden in 2003, continues despite a trial and appeal in 2009 and 2010 that resulted in jail terms and large fines for several of its founders. File sharing remains contentious and controversial, with charges of theft of intellectual property on the one hand and charges of censorship on the other. Suddenly the low price of reaching millions worldwide, and the possibility of selling to or hearing from those people at the same moment when they were reached, promised to overturn established business dogma in advertising, mail-order sales, customer relationship management, and many more areas. The web was a new killer app; it could bring together unrelated buyers and sellers in seamless and low-cost ways. Entrepreneurs around the world developed new business models, and ran to their nearest venture capitalist. While some of the new entrepreneurs had experience in business and economics, the majority were simply people with ideas, and did not manage the capital influx prudently.
Additionally, many dot-com business plans were predicated on the assumption that by using the Internet, they would bypass the distribution channels of existing businesses and therefore not have to compete with them; when the established businesses with strong existing brands developed their own Internet presence, these hopes were shattered, and the newcomers were left attempting to break into markets dominated by larger, more established businesses. Many did not have the ability to do so. The dot-com bubble burst in March 2000, with the technology-heavy NASDAQ Composite index peaking at 5,048.62 on March 10 (5,132.52 intraday), more than double its value just a year before. By 2001, the bubble's deflation was running at full speed. A majority of the dot-coms had ceased trading, after having burnt through their venture capital and IPO capital, often without ever making a profit. But despite this, the Internet continued to grow, driven by commerce, ever greater amounts of online information and knowledge, and social networking. The first mobile phone with Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. The viability of Internet services access on mobile phones was limited until prices came down from that model, and network providers started to develop systems and services conveniently accessible on phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-mode, in 1999, and this is considered the birth of mobile phone Internet services. In 2001, the mobile phone email system by Research in Motion (now BlackBerry Limited) for their BlackBerry product was launched in America. To make efficient use of the small screen and tiny keypad and one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices, the Wireless Application Protocol (WAP). Most mobile device Internet services operate using WAP.
The growth of mobile phone services was initially a primarily Asian phenomenon, with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC. Developing countries followed, with India, South Africa, Kenya, the Philippines, and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. The European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet access was more gradual, but had reached national penetration levels of 20–30% in most Western countries. The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user. Web pages were initially conceived as structured documents based upon Hypertext Markup Language (HTML), which allows access to images, video, and other content. Hyperlinks in the page permit users to navigate to other pages. In the earliest browsers, images opened in a separate "helper" application. Marc Andreessen's 1993 Mosaic and 1994 Netscape introduced mixed text and images for non-technical users. HTML evolved during the 1990s, leading to HTML 4, which introduced large elements of CSS styling and, later, extensions allowing browser code to make calls and ask for content from servers in a structured way (AJAX). There are nearly insurmountable problems in supplying a historiography of the Internet's development. The process of digitization represents a twofold challenge both for historiography in general and, in particular, for historical communication research.
A sense of the difficulty in documenting early developments that led to the internet can be gathered from the quote: "The Arpanet period is somewhat well documented because the corporation in charge -- BBN -- left a physical record. Moving into the NSFNET era, it became an extraordinarily decentralized process. The record exists in people 's basements, in closets... So much of what happened was done verbally and on the basis of individual trust. ''
Earthquake - Wikipedia An earthquake (also known as a quake, tremor or temblor) is the shaking of the surface of the Earth, resulting from the sudden release of energy in the Earth's lithosphere that creates seismic waves. Earthquakes can range in size from those so weak that they cannot be felt to those violent enough to toss people around and destroy whole cities. The seismicity or seismic activity of an area refers to the frequency, type and size of earthquakes experienced over a period of time. At the Earth's surface, earthquakes manifest themselves by shaking and sometimes displacement of the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides and, occasionally, volcanic activity. In its most general sense, the word earthquake is used to describe any seismic event -- whether natural or caused by humans -- that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults, but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake's point of initial rupture is called its focus or hypocenter. The epicenter is the point at ground level directly above the hypocenter. Tectonic earthquakes occur anywhere in the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increase the frictional resistance. Most fault surfaces do have such asperities, and this leads to a form of stick-slip behavior. Once the fault has locked, continued relative motion between the plates leads to increasing stress and, therefore, stored strain energy in the volume around the fault surface.
This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault, releasing the stored energy. This energy is released as a combination of radiated elastic strain seismic waves, frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build - up of strain and stress punctuated by occasional sudden earthquake failure is referred to as the elastic - rebound theory. It is estimated that only 10 percent or less of an earthquake 's total energy is radiated as seismic energy. Most of the earthquake 's energy is used to power the earthquake fracture growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth 's available elastic potential energy and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth 's deep interior. There are three main types of fault, all of which may cause an interplate earthquake: normal, reverse (thrust) and strike - slip. Normal and reverse faulting are examples of dip - slip, where the displacement along the fault is in the direction of dip and movement on them involves a vertical component. Normal faults occur mainly in areas where the crust is being extended such as a divergent boundary. Reverse faults occur in areas where the crust is being shortened such as at a convergent boundary. Strike - slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike - slip fault. Many earthquakes are caused by movement on faults that have components of both dip - slip and strike - slip; this is known as oblique slip. 
Reverse faults, particularly those along convergent plate boundaries, are associated with the most powerful earthquakes, megathrust earthquakes, including almost all of those of magnitude 8 or more. Strike-slip faults, particularly continental transforms, can produce major earthquakes up to about magnitude 8. Earthquakes associated with normal faults are generally less than magnitude 7. For every unit increase in magnitude, there is a roughly 32-fold increase in the energy released. For instance, an earthquake of magnitude 6.0 releases approximately 32 times more energy than a 5.0 magnitude earthquake, and a 7.0 magnitude earthquake releases about 1,000 times (32 × 32) more energy than a 5.0 magnitude earthquake. An 8.6 magnitude earthquake releases the same amount of energy as 10,000 atomic bombs like those used in World War II. This is because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude. The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending into the hot mantle, are the only parts of our planet that can store elastic energy and release it in fault ruptures. Rocks hotter than about 300 degrees Celsius flow in response to stress; they do not rupture in earthquakes. The maximum observed lengths of ruptures and mapped faults (which may break in a single rupture) are approximately 1,000 km. Examples are the earthquakes in Chile (1960), Alaska (1957), and Sumatra (2004), all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault (1857, 1906), the North Anatolian Fault in Turkey (1939) and the Denali Fault in Alaska (2002), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter.
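The magnitude-energy scaling above follows from the standard relation in which radiated energy grows as 10^(1.5 ΔM), so each whole magnitude unit corresponds to a factor of about 32 (10^1.5 ≈ 31.6). A minimal sketch in Python (the function name is illustrative):

```python
def energy_ratio(m1, m2):
    """Ratio of seismic energy radiated by an earthquake of magnitude m1
    relative to one of magnitude m2, using the standard scaling in which
    log10(E) grows in proportion to 1.5 * M."""
    return 10 ** (1.5 * (m1 - m2))

# One magnitude unit is roughly a 32-fold energy difference;
# two units compound to roughly 1,000-fold.
print(round(energy_ratio(6.0, 5.0)))  # 32
print(round(energy_ratio(7.0, 5.0)))  # 1000
```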
The most important parameter controlling the maximum earthquake magnitude on a fault is however not the maximum available length, but the available width because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees. Thus the width of the plane within the top brittle crust of the Earth can become 50 to 100 km (Japan, 2011; Alaska, 1964), making the most powerful earthquakes possible. Strike - slip faults tend to be oriented near vertically, resulting in an approximate width of 10 km within the brittle crust, thus earthquakes with magnitudes much larger than 8 are not possible. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about 6 km. In addition, there exists a hierarchy of stress level in the three fault types. Thrust faults are generated by the highest, strike slip by intermediate, and normal faults by the lowest stress levels. This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that ' pushes ' the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, thus the pushing force (greatest principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass ' escapes ' in the direction of the least principal stress, namely upward, lifting the rock mass up, thus the overburden equals the least principal stress. Strike - slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions. 
Where plate boundaries occur within the continental lithosphere, deformation is spread out over a much larger area than the plate boundary itself. In the case of the San Andreas fault continental transform, many earthquakes occur away from the plate boundary and are related to strains developed within the broader zone of deformation caused by major irregularities in the fault trace (e.g., the "Big Bend" region). The Northridge earthquake was associated with movement on a blind thrust within such a zone. Another example is the strongly oblique convergent plate boundary between the Arabian and Eurasian plates where it runs through the northwestern part of the Zagros Mountains. The deformation associated with this plate boundary is partitioned into nearly pure thrust-sense movements perpendicular to the boundary over a wide zone to the southwest and nearly pure strike-slip motion along the Main Recent Fault close to the actual plate boundary itself. This is demonstrated by earthquake focal mechanisms. All tectonic plates have internal stress fields caused by their interactions with neighboring plates and sedimentary loading or unloading (e.g., deglaciation). These stresses may be sufficient to cause failure along existing fault planes, giving rise to intraplate earthquakes. The majority of tectonic earthquakes originate in the Ring of Fire, at depths not exceeding tens of kilometers. Earthquakes occurring at a depth of less than 70 km are classified as 'shallow-focus' earthquakes, while those with a focal depth between 70 and 300 km are commonly termed 'mid-focus' or 'intermediate-depth' earthquakes. In subduction zones, where older and colder oceanic crust descends beneath another tectonic plate, deep-focus earthquakes may occur at much greater depths (ranging from 300 up to 700 kilometers). These seismically active areas of subduction are known as Wadati-Benioff zones.
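The focal-depth classes just described (shallow below 70 km, intermediate to 300 km, deep to about 700 km) can be expressed as a small lookup; the function and its return labels are illustrative, not a standard API:

```python
def classify_focus(depth_km):
    """Classify an earthquake by focal depth, using the boundaries
    given in the text (70 km, 300 km, ~700 km)."""
    if depth_km < 70:
        return "shallow-focus"
    elif depth_km <= 300:
        return "intermediate-depth"
    elif depth_km <= 700:
        return "deep-focus"
    return "deeper than typical seismogenic depths"

print(classify_focus(35))   # shallow-focus (typical crustal earthquake)
print(classify_focus(450))  # deep-focus (Wadati-Benioff zone depths)
```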
Deep - focus earthquakes occur at a depth where the subducted lithosphere should no longer be brittle, due to the high temperature and pressure. A possible mechanism for the generation of deep - focus earthquakes is faulting caused by olivine undergoing a phase transition into a spinel structure. Earthquakes often occur in volcanic regions and are caused there, both by tectonic faults and the movement of magma in volcanoes. Such earthquakes can serve as an early warning of volcanic eruptions, as during the 1980 eruption of Mount St. Helens. Earthquake swarms can serve as markers for the location of the flowing magma throughout the volcanoes. These swarms can be recorded by seismometers and tiltmeters (a device that measures ground slope) and used as sensors to predict imminent or upcoming eruptions. A tectonic earthquake begins by an initial rupture at a point on the fault surface, a process known as nucleation. The scale of the nucleation zone is uncertain, with some evidence, such as the rupture dimensions of the smallest earthquakes, suggesting that it is smaller than 100 m while other evidence, such as a slow component revealed by low - frequency spectra of some earthquakes, suggest that it is larger. The possibility that the nucleation involves some sort of preparation process is supported by the observation that about 40 % of earthquakes are preceded by foreshocks. Once the rupture has initiated, it begins to propagate along the fault surface. The mechanics of this process are poorly understood, partly because it is difficult to recreate the high sliding velocities in a laboratory. Also the effects of strong ground motion make it very difficult to record information close to a nucleation zone. Rupture propagation is generally modeled using a fracture mechanics approach, likening the rupture to a propagating mixed mode shear crack. 
The rupture velocity is a function of the fracture energy in the volume around the crack tip, increasing with decreasing fracture energy. The velocity of rupture propagation is orders of magnitude faster than the displacement velocity across the fault. Earthquake ruptures typically propagate at velocities that are in the range 70 -- 90 % of the S - wave velocity, and this is independent of earthquake size. A small subset of earthquake ruptures appear to have propagated at speeds greater than the S - wave velocity. These supershear earthquakes have all been observed during large strike - slip events. The unusually wide zone of coseismic damage caused by the 2001 Kunlun earthquake has been attributed to the effects of the sonic boom developed in such earthquakes. Some earthquake ruptures travel at unusually low velocities and are referred to as slow earthquakes. A particularly dangerous form of slow earthquake is the tsunami earthquake, observed where the relatively low felt intensities, caused by the slow propagation speed of some great earthquakes, fail to alert the population of the neighboring coast, as in the 1896 Sanriku earthquake. Tides may induce some seismicity, see tidal triggering of earthquakes for details. Most earthquakes form part of a sequence, related to each other in terms of location and time. Most earthquake clusters consist of small tremors that cause little to no damage, but there is a theory that earthquakes can recur in a regular pattern. An aftershock is an earthquake that occurs after a previous earthquake, the mainshock. An aftershock is in the same region of the main shock but always of a smaller magnitude. If an aftershock is larger than the main shock, the aftershock is redesignated as the main shock and the original main shock is redesignated as a foreshock. Aftershocks are formed as the crust around the displaced fault plane adjusts to the effects of the main shock. 
Earthquake swarms are sequences of earthquakes striking in a specific area within a short period of time. They are different from earthquakes followed by a series of aftershocks by the fact that no single earthquake in the sequence is obviously the main shock, therefore none have notable higher magnitudes than the other. An example of an earthquake swarm is the 2004 activity at Yellowstone National Park. In August 2012, a swarm of earthquakes shook Southern California 's Imperial Valley, showing the most recorded activity in the area since the 1970s. Sometimes a series of earthquakes occur in what has been called an earthquake storm, where the earthquakes strike a fault in clusters, each triggered by the shaking or stress redistribution of the previous earthquakes. Similar to aftershocks but on adjacent segments of fault, these storms occur over the course of years, and with some of the later earthquakes as damaging as the early ones. Such a pattern was observed in the sequence of about a dozen earthquakes that struck the North Anatolian Fault in Turkey in the 20th century and has been inferred for older anomalous clusters of large earthquakes in the Middle East. Quaking or shaking of the earth is a common phenomenon undoubtedly known to humans from earliest times. Prior to the development of strong - motion accelerometers that can measure peak ground speed and acceleration directly, the intensity of the earth - shaking was estimated on the basis of the observed effects, as categorized on various seismic intensity scales. Only in the last century has the source of such shaking been identified as ruptures in the earth 's crust, with the intensity of shaking at any locality dependent not only on the local ground conditions, but also on the strength or magnitude of the rupture, and on its distance. The first scale for measuring earthquake magnitudes was developed by Charles F. Richter in 1935. 
Subsequent scales (see seismic magnitude scales) have retained a key feature, where each unit represents a ten-fold difference in the amplitude of the ground shaking, and a 32-fold difference in energy. Subsequent scales are also adjusted to have approximately the same numeric value within the limits of the scale. Although the mass media commonly reports earthquake magnitudes as "Richter magnitude '' or "Richter scale '', standard practice by most seismological authorities is to express an earthquake 's strength on the moment magnitude scale, which is based on the actual energy released by an earthquake. It is estimated that around 500,000 earthquakes occur each year, detectable with current instrumentation. About 100,000 of these can be felt. Minor earthquakes occur nearly constantly around the world in places like California and Alaska in the U.S., as well as in El Salvador, Mexico, Guatemala, Chile, Peru, Indonesia, Iran, Pakistan, the Azores in Portugal, Turkey, New Zealand, Greece, Italy, India, Nepal and Japan, but earthquakes can occur almost anywhere, including Downstate New York, England, and Australia. Larger earthquakes occur less frequently, the relationship being exponential; for example, roughly ten times as many earthquakes larger than magnitude 4 occur in a particular time period than earthquakes larger than magnitude 5. In the (low seismicity) United Kingdom, for example, it has been calculated that the average recurrences are: an earthquake of 3.7 -- 4.6 every year, an earthquake of 4.7 -- 5.5 every 10 years, and an earthquake of 5.6 or larger every 100 years. This is an example of the Gutenberg -- Richter law. The number of seismic stations has increased from about 350 in 1931 to many thousands today. As a result, many more earthquakes are reported than in the past, but this is because of the vast improvement in instrumentation, rather than an increase in the number of earthquakes. 
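The exponential frequency-magnitude relationship described above is the Gutenberg-Richter law, log10(N) = a - b·M, where N is the number of earthquakes of magnitude at least M. A brief sketch, with a = 4.0 and b = 1.0 as illustrative placeholders rather than values fitted to any real catalog:

```python
def count_at_least(magnitude, a=4.0, b=1.0):
    """Gutenberg-Richter law: expected number of earthquakes with
    magnitude >= M, given log10(N) = a - b*M. The a and b values
    are illustrative defaults, not fitted to real data."""
    return 10 ** (a - b * magnitude)

# With b = 1, each whole-magnitude step divides the expected count by ten,
# matching the "roughly ten times as many above magnitude 4 as above 5" rule.
ratio = count_at_least(4.0) / count_at_least(5.0)
print(round(ratio))  # 10
```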
The United States Geological Survey estimates that, since 1900, there have been an average of 18 major earthquakes (magnitude 7.0 -- 7.9) and one great earthquake (magnitude 8.0 or greater) per year, and that this average has been relatively stable. In recent years, the number of major earthquakes per year has decreased, though this is probably a statistical fluctuation rather than a systematic trend. More detailed statistics on the size and frequency of earthquakes is available from the United States Geological Survey (USGS). A recent increase in the number of major earthquakes has been noted, which could be explained by a cyclical pattern of periods of intense tectonic activity, interspersed with longer periods of low - intensity. However, accurate recordings of earthquakes only began in the early 1900s, so it is too early to categorically state that this is the case. Most of the world 's earthquakes (90 %, and 81 % of the largest) take place in the 40,000 km long, horseshoe - shaped zone called the circum - Pacific seismic belt, known as the Pacific Ring of Fire, which for the most part bounds the Pacific Plate. Massive earthquakes tend to occur along other plate boundaries, too, such as along the Himalayan Mountains. With the rapid growth of mega-cities such as Mexico City, Tokyo and Tehran, in areas of high seismic risk, some seismologists are warning that a single quake may claim the lives of up to 3 million people. While most earthquakes are caused by movement of the Earth 's tectonic plates, human activity can also produce earthquakes. Four main activities contribute to this phenomenon: storing large amounts of water behind a dam (and possibly building an extremely heavy building), drilling and injecting liquid into wells, and by coal mining and oil drilling. Perhaps the best known example is the 2008 Sichuan earthquake in China 's Sichuan Province in May; this tremor resulted in 69,227 fatalities and is the 19th deadliest earthquake of all time. 
The Zipingpu Dam is believed to have altered the stress on a fault 1,650 feet (503 m) away; this change in pressure probably increased the power of the earthquake and accelerated the rate of movement of the fault. The greatest earthquake in Australia's history is also claimed to have been induced by human activity, through coal mining. The city of Newcastle was built over a large sector of coal mining areas. The earthquake has been reported to have been spawned by a fault that reactivated due to the millions of tonnes of rock removed in the mining process. The instrumental scales used to describe the size of an earthquake began with the Richter magnitude scale in the 1930s. It is a relatively simple measurement of an event's amplitude, and its use has become minimal in the 21st century. Seismic waves travel through the Earth's interior and can be recorded by seismometers at great distances. The surface wave magnitude was developed in the 1950s as a means to measure remote earthquakes and to improve the accuracy for larger events. The moment magnitude scale measures the amplitude of the shock, but also takes into account the seismic moment (total rupture area, average slip of the fault, and rigidity of the rock). The Japan Meteorological Agency seismic intensity scale, the Medvedev-Sponheuer-Karnik scale, and the Mercalli intensity scale are based on the observed effects. Every tremor produces different types of seismic waves, which travel through rock with different velocities: propagation velocity of the seismic waves ranges from approximately 3 km/s up to 13 km/s, depending on the density and elasticity of the medium. In the Earth's interior the shock or P waves travel much faster than the S waves (approximate ratio 1.7:1). The differences in travel time from the epicenter to the observatory are a measure of the distance and can be used to image both sources of quakes and structures within the Earth. Also, the depth of the hypocenter can be computed roughly.
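The moment magnitude scale mentioned above converts the seismic moment M0 (rupture area × average slip × rock rigidity, in newton-meters) to a magnitude via the standard Hanks-Kanamori relation, Mw = (2/3)(log10 M0 - 9.1). A short sketch (the example moment for the 2011 Tohoku earthquake is an order-of-magnitude figure, not a precise catalog value):

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude Mw from seismic moment M0 in N*m, using the
    standard relation Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# The 2011 Tohoku earthquake's seismic moment was on the order of 4e22 N*m:
print(round(moment_magnitude(4e22), 1))  # 9.0
```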
In solid rock, P-waves travel at about 6 to 7 km per second; the velocity increases within the deep mantle to about 13 km/s. The velocity of S-waves ranges from 2-3 km/s in light sediments and 4-5 km/s in the Earth's crust up to 7 km/s in the deep mantle. As a consequence, the first waves of a distant earthquake arrive at an observatory via the Earth's mantle. On average, the distance to the earthquake in kilometers is roughly the number of seconds between the P- and S-wave arrivals multiplied by 8. Slight deviations are caused by inhomogeneities of subsurface structure. By such analyses of seismograms the Earth's core was located in 1913 by Beno Gutenberg. S waves and the later-arriving surface waves cause most of the damage, compared to P waves. P waves squeeze and expand material in the direction they are traveling, whereas S waves shake the ground up and down and back and forth. Earthquakes are categorized not only by their magnitude but also by the place where they occur. The world is divided into 754 Flinn-Engdahl regions (F-E regions), which are based on political and geographical boundaries as well as seismic activity. More active zones are divided into smaller F-E regions, whereas less active zones belong to larger F-E regions. Standard reporting of earthquakes includes magnitude, date and time of occurrence, geographic coordinates of the epicenter, depth of the epicenter, geographical region, distances to population centers, location uncertainty, a number of parameters that are included in USGS earthquake reports (number of stations reporting, number of observations, etc.), and a unique event ID. Although relatively slow seismic waves have traditionally been used to detect earthquakes, scientists realized in 2016 that gravitational measurements could provide instantaneous detection of earthquakes, and confirmed this by analyzing gravitational records associated with the 2011 Tohoku-Oki ("Fukushima") earthquake.
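The rule of thumb above (distance in km ≈ S-minus-P delay in seconds × 8) follows from typical crustal velocities: if both waves cover the same distance d, then d = Δt · (vP·vS)/(vP - vS). A brief sketch, where the default velocities are assumed typical values, not measured ones:

```python
def distance_from_sp_delay(delay_seconds, vp=6.5, vs=3.75):
    """Rough epicentral distance (km) from the S-minus-P arrival delay.
    With vp ~ 6.5 km/s and vs ~ 3.75 km/s (assumed typical crustal
    velocities), the factor (vp*vs)/(vp-vs) is about 8.9 km per second
    of delay, close to the 'multiply by 8' rule of thumb."""
    return delay_seconds * (vp * vs) / (vp - vs)

print(round(distance_from_sp_delay(10)))  # 89 km for a 10 s S-P delay
```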
The effects of earthquakes include, but are not limited to, the following: Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation. The ground - shaking is measured by ground acceleration. Specific local geological, geomorphological, and geostructural features can induce high levels of shaking on the ground surface even from low - intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of the seismic motion from hard deep soils to soft superficial soils and to effects of seismic energy focalization owing to typical geometrical setting of the deposits. Ground rupture is a visible breaking and displacement of the Earth 's surface along the trace of the fault, which may be of the order of several meters in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges and nuclear power stations and requires careful mapping of existing faults to identify any which are likely to break the ground surface within the life of the structure. Earthquakes, along with severe storms, volcanic activity, coastal wave attack, and wildfires, can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue. Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself. 
Soil liquefaction occurs when, because of the shaking, water - saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves. Tsunamis are long - wavelength, long - period sea waves produced by the sudden or abrupt movement of large volumes of water - including when an earthquake occurs at sea. In the open ocean the distance between wave crests can surpass 100 kilometers (62 mi), and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600 - 800 kilometers per hour (373 -- 497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them. Ordinarily, subduction earthquakes under magnitude 7.5 on the Richter magnitude scale do not cause tsunamis, although some instances of this have been recorded. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more. A flood is an overflow of any amount of water that reaches land. Floods occur usually when the volume of water within a body of water, such as a river or lake, exceeds the total capacity of the formation, and as a result some of the water flows or sits outside of the normal perimeter of the body. However, floods may be secondary effects of earthquakes, if dams are damaged. Earthquakes may cause landslips to dam rivers, which collapse and cause floods. 
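The tsunami speeds quoted above come from shallow-water wave theory, in which the wave travels at c = sqrt(g · depth); over a typical abyssal depth of about 4 km this gives roughly 700 km/h. A minimal sketch (the 4,000 m depth is an assumed typical value):

```python
import math

def tsunami_speed_kmh(depth_m, g=9.81):
    """Shallow-water wave speed c = sqrt(g * depth), converted from
    m/s to km/h. Valid because tsunami wavelengths far exceed the
    ocean depth, so the whole water column moves together."""
    return math.sqrt(g * depth_m) * 3.6

print(round(tsunami_speed_kmh(4000)))  # 713 km/h over 4 km of water
```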
The terrain below the Sarez Lake in Tajikistan is in danger of catastrophic flood if the landslide dam formed by the earthquake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly 5 million people. An earthquake may cause injury and loss of life, road and bridge damage, general property damage, and collapse or destabilization (potentially leading to future collapse) of buildings. The aftermath may bring disease, lack of basic necessities, mental consequences such as panic attacks, depression to survivors, and higher insurance premiums. One of the most devastating earthquakes in recorded history was the 1556 Shaanxi earthquake, which occurred on 23 January 1556 in Shaanxi province, China. More than 830,000 people died. Most houses in the area were yaodongs -- dwellings carved out of loess hillsides -- and many victims were killed when these structures collapsed. The 1976 Tangshan earthquake, which killed between 240,000 and 655,000 people, was the deadliest of the 20th century. The 1960 Chilean earthquake is the largest earthquake that has been measured on a seismograph, reaching 9.5 magnitude on 22 May 1960. Its epicenter was near Cañete, Chile. The energy released was approximately twice that of the next most powerful earthquake, the Good Friday earthquake (March 27, 1964) which was centered in Prince William Sound, Alaska. The ten largest recorded earthquakes have all been megathrust earthquakes; however, of these ten, only the 2004 Indian Ocean earthquake is simultaneously one of the deadliest earthquakes in history. Earthquakes that caused the greatest loss of life, while powerful, were deadly because of their proximity to either heavily populated areas or the ocean, where earthquakes often create tsunamis that can devastate communities thousands of kilometers away. 
Regions most at risk for great loss of life include those where earthquakes are relatively rare but powerful, and poor regions with lax, unenforced, or nonexistent seismic building codes. Earthquake prediction is a branch of the science of seismology concerned with the specification of the time, location, and magnitude of future earthquakes within stated limits. Many methods have been developed for predicting the time and place in which earthquakes will occur. Despite considerable research efforts by seismologists, scientifically reproducible predictions can not yet be made to a specific day or month. While forecasting is usually considered to be a type of prediction, earthquake forecasting is often differentiated from earthquake prediction. Earthquake forecasting is concerned with the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes in a given area over years or decades. For well - understood faults the probability that a segment may rupture during the next few decades can be estimated. Earthquake warning systems have been developed that can provide regional notification of an earthquake in progress, but before the ground surface has begun to move, potentially allowing people within the system 's range to seek shelter before the earthquake 's impact is felt. The objective of earthquake engineering is to foresee the impact of earthquakes on buildings and other structures and to design such structures to minimize the risk of damage. Existing structures can be modified by seismic retrofitting to improve their resistance to earthquakes. Earthquake insurance can provide building owners with financial protection against losses resulting from earthquakes. Emergency management strategies can be employed by a government or organization to mitigate risks and prepare for consequences. 
From the lifetime of the Greek philosopher Anaxagoras in the 5th century BCE to the 14th century CE, earthquakes were usually attributed to "air (vapors) in the cavities of the Earth." Thales of Miletus (c. 625-547 BCE) was the only documented person who believed that earthquakes were caused by tension between the earth and water. Other theories existed, including the Greek philosopher Anaximenes' (585-526 BCE) belief that brief episodes of dryness and wetness caused seismic activity. The Greek philosopher Democritus (460-371 BCE) blamed water in general for earthquakes. Pliny the Elder called earthquakes "underground thunderstorms." In recent studies, some geologists claim that global warming is one of the reasons for increased seismic activity. According to these studies, melting glaciers and rising sea levels disturb the balance of pressure on Earth's tectonic plates, thus causing an increase in the frequency and intensity of earthquakes. In Norse mythology, earthquakes were explained as the violent struggling of the god Loki. When Loki, god of mischief and strife, murdered Baldr, god of beauty and light, he was punished by being bound in a cave with a poisonous serpent placed above his head dripping venom. Loki's wife Sigyn stood by him with a bowl to catch the poison, but whenever she had to empty the bowl the poison dripped on Loki's face, forcing him to jerk his head away and thrash against his bonds, which caused the earth to tremble. In Greek mythology, Poseidon was the cause and god of earthquakes. When he was in a bad mood, he struck the ground with a trident, causing earthquakes and other calamities. He also used earthquakes to punish and inflict fear upon people as revenge. In Japanese mythology, Namazu (鯰) is a giant catfish who causes earthquakes. Namazu lives in the mud beneath the earth, and is guarded by the god Kashima, who restrains the fish with a stone.
When Kashima lets his guard fall, Namazu thrashes about, causing violent earthquakes. In modern popular culture, the portrayal of earthquakes is shaped by the memory of great cities laid waste, such as Kobe in 1995 or San Francisco in 1906. Fictional earthquakes tend to strike suddenly and without warning. For this reason, stories about earthquakes generally begin with the disaster and focus on its immediate aftermath, as in Short Walk to Daylight (1972), The Ragged Edge (1968) or Aftershock: Earthquake in New York (1999). A notable example is Heinrich von Kleist 's classic novella, The Earthquake in Chile, which describes the destruction of Santiago in 1647. Haruki Murakami 's short fiction collection After the Quake depicts the consequences of the Kobe earthquake of 1995. The most popular single earthquake in fiction is the hypothetical "Big One '' expected of California 's San Andreas Fault someday, as depicted in the novels Richter 10 (1996), Goodbye California (1977), 2012 (2009) and San Andreas (2015) among other works. Jacob M. Appel 's widely anthologized short story, A Comparative Seismology, features a con artist who convinces an elderly woman that an apocalyptic earthquake is imminent. Contemporary depictions of earthquakes in film are variable in the manner in which they reflect human psychological reactions to the actual trauma that can be caused to directly afflicted families and their loved ones. Disaster mental health response research emphasizes the need to be aware of the different roles of loss of family and key community members, loss of home and familiar surroundings, loss of essential supplies and services to maintain survival. 
Particularly for children, the clear availability of caregiving adults who are able to protect, nourish, and clothe them in the aftermath of the earthquake, and to help them make sense of what has befallen them, has been shown to be even more important to their emotional and physical health than the simple giving of provisions. As was observed after other disasters involving destruction and loss of life and their media depictions, most recently the 2010 Haiti earthquake, it is also important not to pathologize the reactions to loss and displacement or the disruption of governmental administration and services, but rather to validate these reactions, and to support constructive problem - solving and reflection on how one might improve the conditions of those affected.
who wrote the original story of star wars
Star Wars - wikipedia Trilogies: Anthology films: Animated film: TV films: Star Wars is an American epic space opera media franchise, centered on a film series created by George Lucas. It depicts the adventures of various characters "a long time ago in a galaxy far, far away ''. The franchise began in 1977 with the release of the film Star Wars (later subtitled Episode IV: A New Hope in 1981), which became a worldwide pop culture phenomenon. It was followed by the successful sequels The Empire Strikes Back (1980) and Return of the Jedi (1983); these three films constitute the original Star Wars trilogy. A prequel trilogy was released between 1999 and 2005, which received mixed reactions from both critics and fans. A sequel trilogy began in 2015 with the release of Star Wars: The Force Awakens and continued in 2017 with the release of Star Wars: The Last Jedi. The first eight films were nominated for Academy Awards (with wins going to the first two films released) and have been commercial successes, with a combined box office revenue of over US $8.5 billion, making Star Wars the second highest - grossing film series. Spin - off films include the animated Star Wars: The Clone Wars (2008) and Rogue One (2016), the latter of which is the first in a planned series of anthology films. The series has spawned an extensive media franchise including books, television series, computer and video games, theme park attractions and lands, and comic books, resulting in significant development of the series ' fictional universe. Star Wars holds a Guinness World Records title for the "Most successful film merchandising franchise ''. In 2015, the total value of the Star Wars franchise was estimated at US $42 billion, making Star Wars the second - highest - grossing media franchise of all time. 
In 2012, The Walt Disney Company bought Lucasfilm for US $4.06 billion and acquired the distribution rights to all subsequent Star Wars films, beginning with the release of The Force Awakens in 2015. The former distributor, 20th Century Fox, retained the physical distribution rights for the first two Star Wars trilogies, owned permanent rights to the original 1977 film, and held the rights to the prequel trilogy and the first two sequels to A New Hope until May 2020. Walt Disney Studios currently owns digital distribution rights to all the Star Wars films, excluding A New Hope. On December 14, 2017, the Walt Disney Company announced its pending acquisition of 21st Century Fox, including the film studio and all distribution rights to A New Hope. The Star Wars franchise takes place in a distant unnamed fictional galaxy at an undetermined point in the ancient past, where many species of aliens (often humanoid) co-exist. People own robotic droids that assist them in their daily routines, and space travel is common. The spiritual and mystical element of the Star Wars galaxy is known as "the Force ''. It is described in the original film as "an energy field created by all living things (that) surrounds us, penetrates us, (and) binds the galaxy together ''. People born deeply connected to the Force have better reflexes; through training and meditation, they are able to achieve various supernatural feats (such as telekinesis, clairvoyance, precognition, and mind control). The Force is wielded by two major factions in conflict: the Jedi, who harness the light side of the Force, and the Sith, who use the dark side of the Force through hate and aggression. In 1971, Universal Studios contracted George Lucas to direct two films. In 1973, American Graffiti was completed and released to critical acclaim, including Academy Award nominations for Best Director and Original Screenplay for George Lucas.
Months later, Lucas started work on his second film 's script draft, The Journal of the Whills, telling the tale of the training of apprentice CJ Thorpe as a "Jedi - Bendu '' space commando by the legendary Mace Windy. After Universal rejected the film, 20th Century Fox decided to invest in it. On April 17, 1973, Lucas, frustrated that his story was too difficult to understand, began writing a 13 - page script with thematic parallels to Akira Kurosawa 's The Hidden Fortress; this draft was renamed The Star Wars. By 1974, he had expanded the script into a rough draft screenplay, adding elements such as the Sith, the Death Star, and a protagonist named Annikin Starkiller. Subsequent drafts went through drastic changes before evolving into the script of the original film. Lucas insisted that the movie would be part of a 9 - part series and negotiated to retain the sequel rights, to ensure all the movies would be made. Tom Pollock, then Lucas ' lawyer, writes: "So in the negotiations that were going on, we drew up a contract with Fox 's head of business affairs Bill Immerman, and me. We came to an agreement that George would retain the sequel rights. Not all the (merchandising rights) that came later, mind you; just the sequel rights. And Fox would get a first opportunity and last refusal right to make the movie. '' Lucas was offered $50,000 to write, another $50,000 to produce, and $50,000 to direct the film. Later the offer was increased. Star Wars was released on May 25, 1977. It was followed by The Empire Strikes Back, released on May 21, 1980, and Return of the Jedi, released on May 25, 1983. The sequels were all self - financed by Lucasfilm. The opening crawl of the sequels disclosed that they were numbered as "Episode V '' and "Episode VI '' respectively, though the films were generally advertised solely under their subtitles.
Though the first film in the series was simply titled Star Wars, with its 1981 re-release it had the subtitle Episode IV: A New Hope added to remain consistent with its sequel, and to establish it as the middle chapter of a continuing saga. The plot of the original trilogy centers on the Galactic Civil War of the Rebel Alliance trying to free the galaxy from the clutches of the Galactic Empire, as well as on Luke Skywalker 's quest to become a Jedi. Near the orbit of the desert planet Tatooine, a Rebel spaceship is intercepted by the Empire. Aboard, the deadliest Imperial agent Darth Vader and his stormtroopers capture Princess Leia Organa, a secret member of the rebellion. Before her capture, Leia makes sure the astromech R2 - D2, along with the protocol droid C - 3PO, escapes with stolen Imperial blueprints stored inside and a holographic message for the retired Jedi Knight Obi - Wan Kenobi, who has been living in exile on Tatooine. The droids fall under the ownership of Luke Skywalker, an orphan farm boy raised by his step - uncle and aunt. Luke helps the droids locate Obi - Wan, now a solitary old hermit known as Ben Kenobi, who reveals himself as a friend of Luke 's absent father, the Jedi Knight Anakin Skywalker. Obi - Wan confides to Luke that Anakin was "betrayed and murdered '' by Vader (who was Obi - Wan 's former Jedi apprentice) years ago, and he gives Luke his father 's former lightsaber to keep. After viewing Leia 's message, they both hire the smuggler Han Solo and his Wookiee co-pilot Chewbacca to, aboard their space freighter the Millennium Falcon, help them deliver the stolen blueprints inside R2 - D2 to the Rebel Alliance with the hope of finding a weakness to the Empire 's planet - destroying space station: the Death Star. For The Star Wars second draft, Lucas made heavy simplifications. It added a mystical energy field known as "The Force '' and introduced the young hero on a farm as Luke Starkiller. 
Annikin became Luke 's father, a wise Jedi knight. The third draft killed off the father Annikin, replacing him with the mentor figure Ben Kenobi. Later, Lucas felt the film would not in fact be the first in the sequence, but a film in the second trilogy in the saga. The draft contained a sub-plot leading to a sequel about "The Princess of Ondos '', and some months later Lucas had negotiated a contract that gave him rights to make two sequels. Not long after, Lucas hired author Alan Dean Foster to write two sequels as novels. In 1976, a fourth draft had been prepared for principal photography. The film was titled Adventures of Luke Starkiller, as taken from the Journal of the Whills, Saga I: The Star Wars. During production, Lucas changed Luke 's name to Skywalker and altered the title to simply The Star Wars and finally Star Wars. At that point, Lucas was not expecting the film to have sequels. The fourth draft of the script underwent subtle changes: it discarded the "Princess of Ondos '' sub-plot to become a self - contained film that ended with the destruction of the Galactic Empire itself by way of destroying the Death Star. However, Lucas had previously conceived of the film as the first of a series. The intention was that if Star Wars was successful, Lucas could adapt Dean Foster 's novels into low - budget sequels. By that point, Lucas had developed an elaborate backstory to aid his writing process. Before its release, Lucas considered walking away from Star Wars sequels, thinking the film would be a flop. However, the film exceeded all expectations. The success of the film and its merchandise sales, together with Lucas ' desire to create an independent film - making center, led him to make Star Wars the basis of an elaborate film serial and to use the profits to finance his film - making center, Skywalker Ranch. Alan Dean Foster was already writing the first sequel novel, Splinter of the Mind 's Eye, released in 1978.
But Lucas decided not to adapt Foster 's work, knowing a sequel would be allowed more budget. At first, Lucas envisioned a series of films with no set number of entries, like the James Bond series. In an interview with Rolling Stone in August 1977, he said that he wanted his friends to each take a turn at directing the films and giving unique interpretations on the series. He added that the backstory in which Darth Vader turns to the dark side, kills Luke 's father and fights Obi - Wan Kenobi on a volcano as the Galactic Republic falls would make an excellent sequel. Three years after the destruction of the Death Star, the Rebels are forced to evacuate their secret base on Hoth as they are hunted by the Empire. At the request of the late Obi - Wan 's spirit, Luke travels to the swamp - infested world of Dagobah to find the exiled Jedi Master Yoda and begin his Jedi training. However, Luke 's training is interrupted by Vader, who lures him into a trap by capturing Han and Leia at Cloud City, governed by Han 's old friend Lando Calrissian. During a fierce lightsaber duel with the Sith Lord, Luke learns that Vader is his father. After the success of the original film, Lucas hired science fiction author Leigh Brackett to write Star Wars II with him. They held story conferences and, by late November 1977, Lucas had produced a handwritten treatment called The Empire Strikes Back. It was similar to the final film, except that Darth Vader does not reveal he is Luke 's father. Brackett finished her first draft in early 1978; in it, Luke 's father appeared as a ghost to instruct Luke. Lucas has said he was disappointed with it, but before he could discuss it with her, she died of cancer. With no writer available, Lucas had to write his next draft himself. It was this draft in which Lucas first made use of the "Episode '' numbering for the films; Empire Strikes Back was listed as Episode II. 
As Michael Kaminski argues in The Secret History of Star Wars, the disappointment with the first draft probably made Lucas consider different directions in which to take the story. He made use of a new plot twist: Darth Vader claims to be Luke 's father. According to Lucas, he found this draft enjoyable to write, as opposed to the yearlong struggles writing the first film, and quickly wrote two more drafts, both in April 1978. This new story point of Darth Vader being Luke 's father had drastic effects on the series. After writing these two drafts, Lucas revised the backstory between Anakin Skywalker, Kenobi, and the Emperor. With this new backstory in place, Lucas decided that the series would be a trilogy, changing Empire Strikes Back from Episode II to Episode V in the next draft. Lawrence Kasdan, who had just completed writing Raiders of the Lost Ark, was then hired to write the next drafts, and was given additional input from director Irvin Kershner. Kasdan, Kershner, and producer Gary Kurtz saw the film as a more serious and adult film, which was helped by the new, darker storyline, and developed the series from the light adventure roots of the first film. A year after Vader 's shocking revelation, Luke leads a rescue attempt to save Han from the gangster Jabba the Hutt. Afterward, Luke returns to Dagobah to complete his Jedi training, only to find the 900 - year - old Yoda on his deathbed. In his last words Yoda confirms that Vader is Luke 's father, Anakin Skywalker, and that Luke must confront his father again in order to complete his training. Moments later, the spirit of Obi - Wan reveals to Luke that Leia is his twin sister, but Obi - Wan insists that Luke must face Vader again. As the Rebels lead an attack on the Death Star II, Luke engages Vader in another lightsaber duel as Emperor Palpatine watches; both Sith Lords intend to turn Luke to the dark side of the Force and take him as their apprentice. 
By the time Lucas began writing Episode VI in 1981 (then titled Revenge of the Jedi), much had changed. Making Empire Strikes Back was stressful and costly, and Lucas ' personal life was disintegrating. Burned out and not wanting to make any more Star Wars films, he vowed that he was done with the series in a May 1983 interview with Time magazine. Lucas ' 1981 rough drafts had Darth Vader competing with the Emperor for possession of Luke -- and in the second script, the "revised rough draft '', Vader became a sympathetic character. Lawrence Kasdan was hired to take over once again and, in these final drafts, Vader was explicitly redeemed and finally unmasked. This change in character would provide a springboard to the "Tragedy of Darth Vader '' storyline that underlies the prequels. After losing much of his fortune in a divorce settlement in 1987, George Lucas had no desire to return to Star Wars, and had unofficially canceled the sequel trilogy by the time of Return of the Jedi. At that point, the prequels were only still a series of basic ideas partially pulled from his original drafts of "The Star Wars ''. Nevertheless, technical advances in the late 1980s and 1990s continued to fascinate Lucas, and he considered that they might make it possible to revisit his 20 - year - old material. The popularity of the franchise was reinvigorated by the Star Wars expanded universe storylines set after the original trilogy films, such as the Thrawn trilogy of novels written by Timothy Zahn and the Dark Empire comic book series published by Dark Horse Comics. Due to the renewed popularity of Star Wars, Lucas saw that there was still a large audience. His children were older, and with the explosion of CGI technology he was now considering returning to directing. The prequel trilogy consists of Episode I: The Phantom Menace, released on May 19, 1999; Episode II: Attack of the Clones, released on May 16, 2002; and Episode III: Revenge of the Sith, released on May 19, 2005. 
The plot focuses on the fall of the Galactic Republic, as well as the tragedy of Anakin Skywalker 's turn to the dark side. About 32 years before the start of the Galactic Civil War, the corrupt Trade Federation sets a blockade around the planet Naboo. The Sith Lord Darth Sidious had secretly planned the blockade to give his alter ego, Senator Palpatine, a pretense to overthrow and replace the Supreme Chancellor of the Republic. At the Chancellor 's request, the Jedi Knight Qui - Gon Jinn and his apprentice, a younger Obi - Wan Kenobi, are sent to Naboo to negotiate with the Federation. However, the two Jedi are forced to instead help the Queen of Naboo, Padmé Amidala, escape from the blockade and plead her planet 's crisis before the Republic Senate on Coruscant. When their starship is damaged during the escape, they land on Tatooine for repairs. Palpatine dispatches his first Sith apprentice, Darth Maul, to hunt down the Queen and her Jedi protectors. While on Tatooine, Qui - Gon discovers a nine - year - old slave named Anakin Skywalker. Qui - Gon helps liberate the boy from slavery, believing Anakin to be the "Chosen One '' foretold by a Jedi prophecy to bring balance to the Force. However, the Jedi Council (led by Yoda) suspects the boy possesses too much fear and anger within him. In 1993, it was announced, in Variety among other sources, that Lucas would be making the prequels. He began penning more to the story, now indicating the series would be a tragic one examining Anakin Skywalker 's fall to the dark side. Lucas began to reevaluate how the prequels would exist relative to the originals; at first they were supposed to be a "filling - in '' of history tangential to the originals, but he later realized that they could form the beginning of one long story that started with Anakin 's childhood and ended with his death. This was the final step towards turning the film series into a "Saga ''. 
In 1994, Lucas began writing the screenplay to the first prequel, initially titled Episode I: The Beginning. Following the release of that film, Lucas announced that he would be directing the next two, and began work on Episode II. Ten years after the Battle of Naboo, Anakin is reunited with Padmé, now serving as the Senator of Naboo, and they fall in love despite Anakin 's obligations to the Jedi Order. At the same time, the entire galaxy gets swept up in the Clone Wars between the armies of the Republic, led by the Jedi Order, and the Confederacy of Independent Systems, led by the fallen Jedi Count Dooku. The first draft of Episode II was completed just weeks before principal photography, and Lucas hired Jonathan Hales, a writer from The Young Indiana Jones Chronicles, to polish it. Unsure of a title, Lucas had jokingly called the film "Jar Jar 's Great Adventure ''. In writing The Empire Strikes Back, Lucas initially decided that Lando Calrissian was a clone and came from a planet of clones which caused the "Clone Wars '' mentioned by both Luke Skywalker and Princess Leia in A New Hope; he later came up with an alternate concept of an army of clone shocktroopers from a remote planet which attacked the Republic and were repelled by the Jedi. The basic elements of that backstory became the plot basis for Episode II, with the new wrinkle added that Palpatine secretly orchestrated the crisis. Three years after the start of the Clone Wars, Anakin and Obi - Wan lead a rescue mission to save the kidnapped Chancellor Palpatine from Count Dooku and the droid commander General Grievous. Later, Anakin begins to have prophetic visions of his secret wife Padmé dying in childbirth. Palpatine, who had been secretly engineering the Clone Wars to destroy the Jedi Order, convinces Anakin that the dark side of the Force holds the power to save Padmé 's life. Desperate, Anakin submits to Palpatine 's Sith teachings and is renamed Darth Vader. 
While Palpatine re-organizes the Republic into the tyrannical Empire, Vader participates in the extermination of the Jedi Order, culminating in a lightsaber duel between himself and his former master Obi - Wan on the volcanic planet Mustafar. Lucas began working on Episode III before Attack of the Clones was released, telling concept artists that the film would open with a montage of seven Clone War battles. As he reviewed the storyline that summer, however, he says, he radically re-organized the plot. Michael Kaminski, in The Secret History of Star Wars, offers evidence that issues in Anakin 's fall to the dark side prompted Lucas to make massive story changes, first revising the opening sequence to have Palpatine kidnapped and his apprentice, Count Dooku, murdered by Anakin as the first act in the latter 's turn towards the dark side. After principal photography was complete in 2003, Lucas made even more massive changes in Anakin 's character, re-writing his entire turn to the dark side; he would now turn primarily in a quest to save Padmé 's life, rather than the previous version in which that reason was one of several, including that he genuinely believed that the Jedi were evil and plotting to take over the Republic. This fundamental re-write was accomplished both through editing the principal footage, and new and revised scenes filmed during pick - ups in 2004. On August 15, 2008, the standalone animated film Star Wars: The Clone Wars was released theatrically as a lead - in to the animated TV series of the same name. The series includes six seasons, all but the last broadcast on Cartoon Network. The final season was cut short following Disney 's purchase of the franchise. Two more seasons were in the works, but these were also cancelled.
Over the years, Lucas often exaggerated the amount of material he wrote for the series; much of the exaggeration stemmed from the post - 1978 period when the series grew into a phenomenon. Michael Kaminski explained that the exaggerations were both a publicity and security measure, further rationalizing that since the series ' story radically changed throughout the years, it was always Lucas ' intention to change the original story retroactively because audiences would only view the material from his perspective. The exaggerations created rumors that Lucas had plot outlines for a sequel trilogy (Episodes VII, VIII and IX), which would continue the story after 1983 's Episode VI: Return of the Jedi. Lucasfilm and George Lucas denied plans for a sequel trilogy for many years, insisting that Star Wars was meant to be a six - part series, and that no further films would be released after the conclusion of the prequel trilogy in 2005. Although Lucas made an exception by releasing the animated Star Wars: The Clone Wars film in 2008, while promoting it he maintained his stance on the sequel trilogy: "I get asked all the time, ' What happens after Return of the Jedi?, ' and there really is no answer for that. The movies were the story of Anakin Skywalker and Luke Skywalker, and when Luke saves the galaxy and redeems his father, that 's where that story ends. '' In January 2012, Lucas announced that he would step away from blockbuster films and instead produce smaller arthouse films. Asked whether the criticism he received following the prequel trilogy and the alterations to the re-releases of the original trilogy had influenced his decision to retire, Lucas said: "Why would I make any more when everybody yells at you all the time and says what a terrible person you are? '' Despite insisting that a sequel trilogy would never happen, Lucas began working on story treatments for three new Star Wars films in 2011.
In October 2012, The Walt Disney Company agreed to buy Lucasfilm and announced that Star Wars Episode VII would be released in 2015. Later, it was revealed that the three new upcoming films (Episodes VII -- IX) would be based on story treatments that had been written by George Lucas prior to the sale of Lucasfilm. The co-chairman of Lucasfilm, Kathleen Kennedy, became president of the company, reporting to Walt Disney Studios chairman Alan Horn. In addition, Kennedy will serve as executive producer on new Star Wars feature films, with franchise creator and Lucasfilm founder Lucas serving as creative consultant. The sequel trilogy began with Episode VII: The Force Awakens, released on December 18, 2015. It was followed by Episode VIII: The Last Jedi, released on December 15, 2017. About 30 years after the destruction of the Death Star II, Luke Skywalker has vanished following the demise of the new Jedi Order he was attempting to build. The remnants of the Empire have become the First Order, and seek to destroy Luke and the New Republic, while the Resistance opposes, led by princess - turned - general Leia Organa and backed by the Republic. On Jakku, Resistance pilot Poe Dameron obtains a map to Luke 's location. Stormtroopers under the command of Kylo Ren, the son of Leia and Han Solo, capture Poe. Poe 's droid BB - 8 escapes with the map, and encounters a scavenger Rey. Kylo tortures Poe and learns of BB - 8. Stormtrooper FN - 2187 defects from the First Order, and frees Poe who dubs him "Finn '', while both escape in a TIE fighter that crashes on Jakku, seemingly killing Poe. Finn finds Rey and BB - 8, but the First Order does too; both escape Jakku in a stolen Millennium Falcon. The Falcon is recaptured by Han and Chewbacca, smugglers again since abandoning the Resistance. They agree to help deliver the map inside BB - 8 to the Resistance. 
The screenplay for Episode VII was originally set to be written by Michael Arndt, but in October 2013 it was announced that writing duties would be taken over by Lawrence Kasdan and J.J. Abrams. On January 25, 2013, The Walt Disney Studios and Lucasfilm officially announced J.J. Abrams as Star Wars Episode VII 's director and producer, along with Bryan Burk and Bad Robot Productions. Right after the destruction of Starkiller Base, Rey finds Luke Skywalker on the planet Ahch - To and convinces him to teach her the ways of the Jedi, seeking answers about her past with help from Luke and Kylo Ren. Meanwhile, Finn, Leia, Poe, BB - 8, Rose Tico, and the rest of the Resistance make an escape to the planet Crait from the First Order. Kylo Ren assassinates Supreme Leader Snoke and takes control of the First Order, which culminates in a battle with the Resistance on Crait. On November 20, 2012, The Hollywood Reporter reported that Lawrence Kasdan and Simon Kinberg would write and produce Episodes VIII and IX. Kasdan and Kinberg were later confirmed as creative consultants on those films, in addition to writing standalone films. In addition, John Williams, who wrote the music for the previous six episodes, was hired to compose the music for Episodes VII, VIII and IX. On March 12, 2015, Lucasfilm announced that Looper director Rian Johnson would direct Episode VIII with Ram Bergman as producer for Ram Bergman Productions. Reports initially claimed Johnson would also direct Episode IX, but it was later confirmed he would write only a story treatment. Johnson later wrote on Twitter that the information about him writing a treatment for Episode IX was outdated, and that he is not involved with the writing of that film. When asked about Episode VIII in an August 2014 interview, Johnson said "it 's boring to talk about, because the only thing I can really say is, I 'm just happy. I do n't have the terror I kind of expected I would, at least not yet. I 'm sure I will at some point.
'' Principal photography on The Last Jedi began in February 2016. Additional filming took place in Dubrovnik from March 9 to March 16, 2016, as well as in Ireland in May 2016. Principal photography wrapped in July 2016. On December 27, 2016, Carrie Fisher died after going into cardiac arrest a few days earlier. Before her death, Fisher had completed filming her role as General Leia Organa in The Last Jedi. The film was released on December 15, 2017. Production on Episode IX was scheduled to begin sometime in 2017. Variety and Reuters reported that Carrie Fisher was slated for a key role in Episode IX. Now, Lucasfilm, Disney and others involved with the film have been forced to find a way to address her death in the upcoming film and alter her character 's role. In January 2017, Lucasfilm stated they would not digitally generate Fisher 's performance for the film. In April 2017, Fisher 's brother Todd and daughter Billie Lourd gave Disney permission to use recent footage of Fisher for the film, but later that month, Kennedy stated that Fisher will not appear in the film. Principal photography of Star Wars: Episode IX is set to begin in July 2018. On February 5, 2013, Disney CEO Bob Iger confirmed the development of two standalone films, each individually written by Lawrence Kasdan and Simon Kinberg. On February 6, Entertainment Weekly reported one film would focus on Han Solo, while the other on Boba Fett. Disney CFO Jay Rasulo has described the standalone films as origin stories. Kathleen Kennedy explained that the standalone films will not crossover with the films of the sequel trilogy, stating, "George was so clear as to how that works. The canon that he created was the Star Wars saga. Right now, Episode VII falls within that canon. The spin - off movies, or we may come up with some other way to call those films, they exist within that vast universe that he created. 
There is no attempt being made to carry characters (from the standalone films) in and out of the saga episodes. Consequently, from the creative standpoint, it 's a roadmap that George made pretty clear. '' In April 2015, Lucasfilm and Kennedy announced that the standalone films would be referred to as the Star Wars Anthology films. Rogue One: A Star Wars Story was released on December 16, 2016 as the first in an anthology series of films separate from the main episodic saga. The story follows the group of rebels who stole the Death Star plans, ending directly before Episode IV: A New Hope. The idea for the film was conceived by John Knoll, who worked as a visual effects supervisor on the prequel trilogy films. In May 2014, Lucasfilm announced Gareth Edwards as the director of the first anthology film, with Gary Whitta writing the first draft, for a release on December 16, 2016. On March 12, 2015, the film 's title was revealed to be Rogue One, with Chris Weitz rewriting the script, and starring Felicity Jones, Ben Mendelsohn, and Diego Luna. In April 2015, a teaser trailer was shown during the closing of the Star Wars Celebration. Lucasfilm announced filming would begin in the summer of 2015 and released the plot synopsis. Director Edwards stated, "It comes down to a group of individuals who do n't have magical powers that have to somehow bring hope to the galaxy '', and described the style of the film as similar to that of a war film: "It 's the reality of war. Good guys are bad. Bad guys are good. It 's complicated, layered; a very rich scenario in which to set a movie. '' After its debut, Rogue One received generally positive reviews, with its performances, action sequences, soundtrack, visual effects and darker tone being praised. The film grossed over US $500 million worldwide within a week of its release. Characters from the animated series appear: Saw Gerrera (from The Clone Wars) in a pivotal role in the plot and Chopper (from Star Wars: Rebels) in a cameo.
A film focusing on Han Solo before the events of Episode IV: A New Hope. Before selling Lucasfilm to Disney, George Lucas had started development on a film about a young Han Solo. Lucas hired Lawrence Kasdan, a veteran scriptwriter of the original trilogy, along with his son Jon Kasdan, to write the script. The film stars Alden Ehrenreich as a young Han Solo, Joonas Suotamo as Chewbacca (after serving as a double for the character in The Force Awakens and The Last Jedi), Donald Glover as Lando Calrissian, and Emilia Clarke and Woody Harrelson. Directors Phil Lord and Christopher Miller began principal photography on the film, but due to creative differences the pair left the project in June 2017, with three and a half weeks remaining in principal photography. Academy Award-winning director Ron Howard was announced as their replacement. Though this was his first Star Wars film, Howard had previously collaborated with Lucasfilm as an actor in the George Lucas-directed film American Graffiti (1973) and as director of Willow (1988). Howard was one of the three directors George Lucas asked to direct Episode I: The Phantom Menace, though Howard declined, saying, "George, you should do it!". The film is distributed by Walt Disney Studios Motion Pictures and will be released on May 25, 2018. A third anthology film will be released in 2020; a writer for the film had been hired as of September 2016. In February 2013, Entertainment Weekly reported that Lucasfilm had hired Josh Trank to direct a Star Wars standalone film, with the news being confirmed soon after. However, in November 2016 Disney announced that its contract with Trank had been terminated due to the overwhelmingly negative reviews of Fantastic Four. By 2017, it was reported that the film was still in early development at Lucasfilm, with reports stating that it would focus on the bounty hunter Boba Fett.
Lucasfilm never confirmed what the plot was about, but revealed that the film Josh Trank left was a different film from Solo: A Star Wars Story. In August 2016, Ewan McGregor stated he would be open to returning to the role of Obi-Wan Kenobi for a spin-off film about the character, should he be approached, wanting to tell a story set between Episode III and Episode IV. Fans showed interest in the idea; a fan trailer for an Obi-Wan film, using footage from the film Last Days in the Desert (which starred McGregor), went viral and was widely praised by fans. The film was voted the most wanted anthology film in a poll by The Hollywood Reporter, despite there being only rumors of its production. Lucasfilm and McGregor for years denied the development of such a film, despite fans' continued interest and rumors. In March 2017, McGregor again stated his interest in starring in a solo film, if Lucasfilm wanted him to. By August 2017, it was reported that a movie centered on Obi-Wan Kenobi was in early development, with Stephen Daldry in early negotiations to co-write and direct the project. Liam Neeson expressed his interest in returning to the franchise to reprise his role as Qui-Gon Jinn. Joel Edgerton, who played Luke Skywalker's step-uncle Owen in the prequel trilogy, said he would like to reprise his role in an Obi-Wan standalone film, if it were to be made. Additional reports stated Lucasfilm was considering various films about different characters, including movies focusing on Boba Fett as well as Jedi Master Yoda. Temuera Morrison has expressed interest in portraying Boba, or Captain Rex, both clones of his previous character Jango Fett. Daniel Logan, who played Boba Fett as a child in Attack of the Clones, has also expressed interest in reprising his role in the rumored Boba Fett film.
In 2015, director Guillermo del Toro pitched an idea to Lucasfilm for a film about Jabba the Hutt, and in 2017 it was reported to be among the projects being considered by the studio. Samuel L. Jackson has expressed interest in returning as Mace Windu, insisting that his character survived his apparent death. Ian McDiarmid has also expressed interest in returning as Emperor Palpatine. Fans have also expressed interest in the possibility of Ahsoka Tano appearing in a live-action film, with Rosario Dawson expressing interest in the role. In November 2017, Lucasfilm announced that Rian Johnson, the writer/director of The Last Jedi, would be working on a new trilogy. The films will reportedly differ from the Skywalker-focused films in favor of focusing on new characters. Johnson is confirmed to write and direct the first film. On the same day, Disney announced that a live-action Star Wars television series was in development exclusively for its upcoming streaming service. In February 2018, it was announced that David Benioff and D.B. Weiss would write and produce a series of Star Wars films that are not Skywalker-focused, similar to but separate from Rian Johnson's upcoming installments in the franchise. In March 2018, Deadline revealed that Simon Kinberg was writing the script for an as-yet-undisclosed film within the franchise; the filmmaker had previously been attached to produce the Boba Fett-centered film and is known for co-creating the Star Wars Rebels animated television series. From 1977 to 2014, the term Expanded Universe (abbreviated as EU) was an umbrella term for all officially licensed Star Wars storytelling materials set outside the events depicted within the theatrical films, including television series, novels, comics, and video games.
Lucasfilm maintained internal continuity between the films, the television content, and the EU material until April 25, 2014, when the company announced that all of the EU works would cease production. Existing works would no longer be considered canon to the franchise, and subsequent reprints would be rebranded under the Star Wars Legends label, with downloadable content for the massively multiplayer online game Star Wars: The Old Republic being the only Legends material still to be produced. The Star Wars canon was subsequently restructured to include only the existing six feature films, the animated film Star Wars: The Clone Wars (2008), and its companion animated series Star Wars: The Clone Wars. All future projects and creative developments across all types of media would be overseen and coordinated by the Story Group, announced as a division of Lucasfilm created to maintain continuity and a cohesive vision for the storytelling of the franchise. Lucasfilm announced that the change was made "to give maximum creative freedom to the filmmakers and also preserve an element of surprise and discovery for the audience". The animated series Star Wars Rebels was the first project produced after the announcement, followed by multiple comics series from Marvel, novels published by Del Rey, and the sequel film The Force Awakens (2015). In the two-hour Star Wars Holiday Special produced for CBS in 1978, Chewbacca returns to his home planet of Kashyyyk to celebrate "Life Day" with his family. Along with the stars of the original 1977 film, celebrities Bea Arthur, Art Carney, Diahann Carroll, and Jefferson Starship appear in plot-related skits and musical numbers. Lucas loathed the special and forbade it from ever being aired again after its original broadcast, or reproduced on home video.
An 11-minute animated sequence in the Holiday Special, featuring the first appearance of bounty hunter Boba Fett, is considered the sole silver lining of the production, with Lucas even including it as a special feature on a 2011 Blu-ray release (making it the only part of the Holiday Special ever to receive an official home media release). The segment is the first Star Wars animation ever produced. The television film Caravan of Courage: An Ewok Adventure aired on ABC on Thanksgiving weekend in 1984. With a story by Lucas and a screenplay by Bob Carrau, it features the Ewok Wicket from Return of the Jedi as he helps two children rescue their parents from a giant known as Gorax. The 1985 sequel, Ewoks: The Battle for Endor, finds Wicket and his friends protecting their village from invaders. Nelvana, the studio that had animated the Holiday Special's animated segment, was hired to create two animated series. Star Wars: Droids (1985–1986), which aired for one season on ABC, follows the adventures of the droids C-3PO and R2-D2, 15 years before the events of the 1977 film Star Wars. Its sister series Star Wars: Ewoks (1985–1987) features the adventures of the Ewoks before Return of the Jedi and the Ewok movies. After the release of Attack of the Clones, Cartoon Network animated and aired Star Wars: Clone Wars from 2003 until weeks before the 2005 release of Revenge of the Sith, as the series featured events set between those films. It won the Primetime Emmy Award for Outstanding Animated Program in 2004 and 2005. Lucas decided to invest in creating his own animation company, Lucasfilm Animation, and used it to create his first in-house Star Wars CGI-animated series. Star Wars: The Clone Wars (2008–2014) was introduced through a 2008 animated film of the same name, and is set in the same time period as the previous Clone Wars series (albeit ignoring it).
While all previous television works were reassigned to the Legends brand in 2014, Lucasfilm accepted The Clone Wars and its originating film as part of the canon. All series released afterward would also be part of the canon. In 2014, Disney XD began airing Star Wars Rebels, the next CGI-animated series. Set between Revenge of the Sith and A New Hope, it followed a band of rebels as they fought the Galactic Empire, and helped close some of the story arcs from The Clone Wars. Another animated series, Star Wars Forces of Destiny, debuted in 2017, focused on the female characters of the franchise. Since 2005, when Lucas announced plans for a television series set between the prequel and original trilogies, the television project has been in varying stages of development at Lucasfilm. Producer Rick McCallum revealed the working title, Star Wars: Underworld, in 2012, and said in 2013 that 50 scripts had been written. He called the project "the most provocative, the most bold and daring material that we've ever done". The proposed series explores criminal and political power struggles in the decades prior to A New Hope, and as of December 2015 was still in development at Lucasfilm. In November 2017, Bob Iger discussed the development of a Star Wars series for Disney's upcoming digital streaming service, due to launch in 2019. It is unknown whether the series would be based on the Star Wars: Underworld scripts or would follow an entirely new idea. In February 2018, it was reported that multiple live-action Star Wars TV series were in development, with "rather significant" talent involved in the productions. Jon Favreau, who had previously voiced Pre Vizsla in The Clone Wars animated series, will produce and write one of the television series. Star Wars-based fiction predates the release of the first film, with the December 1976 novelization of Star Wars, subtitled From the Adventures of Luke Skywalker. Credited to Lucas, it was ghost-written by Alan Dean Foster.
The first Expanded Universe story appeared in Marvel Comics' Star Wars #7 in January 1978 (the first six issues of the series having been an adaptation of the film), followed quickly by Foster's novel Splinter of the Mind's Eye the following month. Star Wars: From the Adventures of Luke Skywalker is a 1976 novelization of the original film by Alan Dean Foster, who followed it with the sequel Splinter of the Mind's Eye (1978), which Lucas decided not to film. The film novelizations for The Empire Strikes Back (1980) by Donald F. Glut and Return of the Jedi (1983) by James Kahn followed, as well as The Han Solo Adventures trilogy (1979–1980) by Brian Daley and The Adventures of Lando Calrissian trilogy (1983) by L. Neil Smith. Timothy Zahn's bestselling Thrawn trilogy (1991–1993) reignited interest in the franchise and introduced the popular characters Grand Admiral Thrawn, Mara Jade, Talon Karrde, and Gilad Pellaeon. The first novel, Heir to the Empire, reached #1 on the New York Times Best Seller list, and the series finds Luke, Leia, and Han facing off against tactical genius Thrawn, who is plotting to retake the galaxy for the Empire. In The Courtship of Princess Leia (1994) by Dave Wolverton, set immediately before the Thrawn trilogy, Leia considers an advantageous political marriage to Prince Isolder of the planet Hapes, but she and Han ultimately marry. Steve Perry's Shadows of the Empire (1996), set in the as-yet-unexplored time period between The Empire Strikes Back and Return of the Jedi, was part of a multimedia campaign that included a comic book series and a video game. The novel introduced the crime lord Prince Xizor, another popular character who would appear in multiple other works. Other notable series from Bantam include the Jedi Academy trilogy (1994) by Kevin J. Anderson, the 14-book Young Jedi Knights series (1995–1998) by Anderson and Rebecca Moesta, and the X-wing series (1996–2012) by Michael A.
Stackpole and Aaron Allston. Del Rey took over Star Wars book publishing in 1999, releasing what would become a 19-installment novel series called The New Jedi Order (1999–2003). Written by multiple authors, the series was set 25 to 30 years after the original films and introduced the Yuuzhan Vong, a powerful alien race attempting to invade and conquer the entire galaxy. The bestselling multi-author series Legacy of the Force (2006–2008) chronicles the turn of Han and Leia's son Jacen Solo to the dark side of the Force; among his evil deeds, he kills Luke's wife Mara Jade as a sacrifice to join the Sith. The story parallels that of the fallen son of Han and Leia, Ben Solo/Kylo Ren, in the 2015 film The Force Awakens. Three series were introduced for younger audiences: the 18-book Jedi Apprentice (1999–2002) chronicles the adventures of Obi-Wan Kenobi and his master Qui-Gon Jinn in the years before The Phantom Menace; the 11-book Jedi Quest (2001–2004) follows Obi-Wan and his own apprentice, Anakin Skywalker, between The Phantom Menace and Attack of the Clones; and the 10-book The Last of the Jedi (2005–2008), set almost immediately after Revenge of the Sith, features Obi-Wan and the last few surviving Jedi. Maul: Lockdown by Joe Schreiber, released in January 2014, was the last Star Wars novel published before Lucasfilm announced the creation of the Star Wars Legends brand. Though Thrawn had been designated a Legends character in 2014, he was reintroduced into the canon in the 2016 third season of Star Wars Rebels, with Zahn returning to write more novels based on the character and set in the reworked canon. Marvel Comics published a Star Wars comic book series from 1977 to 1986. Original Star Wars comics were serialized in the Marvel magazine Pizzazz between 1977 and 1979.
The 1977 installments were the first original Star Wars stories not directly adapted from the films to appear in print form, as they preceded those of the Star Wars comic series. From 1985 to 1987, the animated children's series Ewoks and Droids inspired comic series from Marvel's Star Comics line. In the late 1980s, Marvel dropped a new Star Wars comic it had in development, which was picked up by Dark Horse Comics and published as the popular Dark Empire sequence (1991–1995). Dark Horse subsequently launched dozens of series set after the original film trilogy, including Tales of the Jedi (1993–1998), X-wing Rogue Squadron (1995–1998), Star Wars: Republic (1998–2006), Star Wars Tales (1999–2005), Star Wars: Empire (2002–2006), and Knights of the Old Republic (2006–2010). After Disney's acquisition of Lucasfilm, it was announced in January 2014 that in 2015 the Star Wars comics license would return to Marvel Comics, whose parent company, Marvel Entertainment, Disney had purchased in 2009. Launched in 2015, the first three publications were titled Star Wars, Star Wars: Darth Vader, and the limited series Star Wars: Princess Leia. Radio adaptations of the films were also produced. Lucas, a fan of the NPR-affiliated campus radio station of his alma mater, the University of Southern California, licensed the Star Wars radio rights to KUSC-FM for US$1. The production used John Williams' original film score, along with Ben Burtt's sound effects. The first adaptation was written by science fiction author Brian Daley and directed by John Madden. It was broadcast on National Public Radio in 1981, adapting the original 1977 film into 13 episodes. Mark Hamill and Anthony Daniels reprised their film roles. The overwhelming success led to a 10-episode adaptation of The Empire Strikes Back in 1982. Billy Dee Williams joined the other two stars, reprising his role as Lando Calrissian.
In 1983, Buena Vista Records released an original, 30-minute Star Wars audio drama titled Rebel Mission to Ord Mantell, written by Daley. In the 1990s, Time Warner Audio Publishing adapted several Star Wars series from Dark Horse Comics into audio dramas: the three-part Dark Empire saga, Tales of the Jedi, Dark Lords of the Sith, the Dark Forces trilogy, and Crimson Empire (1998). Return of the Jedi was adapted into six episodes in 1996, featuring Daniels. The first officially licensed Star Wars electronic game was Kenner's 1979 table-top Star Wars Electronic Battle Command. In 1982, Parker Brothers published the first licensed Star Wars video game, Star Wars: The Empire Strikes Back, for the Atari 2600. It was followed in 1983 by Atari's rail shooter arcade game Star Wars, which used vector graphics and was based on the "Death Star trench run" scene from the 1977 film. The next game, Return of the Jedi (1984), used more traditional raster graphics, with the following game, The Empire Strikes Back (1985), returning to the vector graphics of the 1983 arcade game but recreating the "Battle of Hoth" scene instead. Lucasfilm had started its own video game company in the early 1980s, which became known for adventure games and World War II flight combat games. In 1993, LucasArts released Star Wars: X-Wing, the first self-published Star Wars video game and the first space flight simulation based on the franchise. X-Wing was one of the best-selling games of 1993, and established its own series of games. Released in 1995, Dark Forces was the first Star Wars first-person shooter video game. A hybrid adventure game incorporating puzzles and strategy, it featured new gameplay features and graphical elements not then common in other games, made possible by LucasArts' custom-designed game engine, called the Jedi. The game was well received and well reviewed, and was followed by four sequels.
Dark Forces introduced the popular character Kyle Katarn, who would later appear in multiple games, novels, and comics. Katarn is a former Imperial stormtrooper who joins the Rebellion and ultimately becomes a Jedi, a plot arc similar to that of Finn in the 2015 film The Force Awakens. Disney has partnered with Lenovo to create the augmented reality game Star Wars: Jedi Challenges, which works with a Lenovo Mirage AR headset, a tracking sensor, and a lightsaber controller, launching in December 2017. Aside from its well-known science fictional technology, Star Wars features elements such as knighthood, chivalry, and princesses that are related to archetypes of the fantasy genre. The Star Wars world, unlike fantasy and science-fiction films that featured sleek and futuristic settings, was portrayed as dirty and grimy. Lucas' vision of a "used future" was further popularized in the science fiction-horror films Alien, which is set on a dirty space freighter; Mad Max 2, which is set in a post-apocalyptic desert; and Blade Runner, which is set in a crumbling, dirty city of the future. Lucas made a conscious effort to parallel scenes and dialogue between films, and especially to parallel the journey of Luke Skywalker with that of his father Anakin when making the prequels. Star Wars contains many themes of political science that mainly favor democracy over dictatorship. Political science has been an important element of Star Wars since the franchise first launched in 1977. The plot climax of Revenge of the Sith is modeled after the fall of the democratic Roman Republic and the formation of an empire. The stormtroopers from the movies share a name with the Nazi stormtroopers (see also Sturmabteilung). Imperial officers' uniforms resemble some historical German uniforms of World War II, and the political and security officers of the Empire resemble the black-clad SS, down to the imitation silver death's-head insignia on their officers' caps.
World War II terms were used for names in Star Wars; examples include the planets Kessel (a term that refers to a group of encircled forces) and Hoth (Hermann Hoth was a German general who served on the snow-laden Eastern Front). Palpatine being Chancellor before becoming Emperor mirrors Adolf Hitler's role as Chancellor before appointing himself dictator. The Great Jedi Purge alludes to the events of the Holocaust, the Great Purge, the Cultural Revolution, and the Night of the Long Knives. In addition, Lucas himself has drawn parallels between Palpatine's rise to power and those of historical dictators such as Julius Caesar, Napoleon Bonaparte, and Adolf Hitler. The final medal-awarding scene in A New Hope, however, references Leni Riefenstahl's Triumph of the Will. The space battles in A New Hope were based on filmed World War I and World War II dogfights. Continuing the use of Nazi inspiration for the Empire, J.J. Abrams, the director of Star Wars: The Force Awakens, has said that the First Order, an Imperial offshoot which serves as the main antagonist of the sequel trilogy, is inspired by another aspect of the Nazi regime. Abrams spoke of how several Nazis fled to Argentina after the war, and said the concept for the First Order came from conversations between the scriptwriters about what would have happened if the Nazis had started working together again. The Star Wars saga has had a significant impact on modern popular culture. Star Wars references are deeply embedded in popular culture; phrases like "evil empire" and "May the Force be with you" have become part of the popular lexicon. The first Star Wars film in 1977 was a cultural unifier, enjoyed by a wide spectrum of people. The film can be said to have helped launch the science fiction boom of the late 1970s and early 1980s, making science fiction films a blockbuster genre and part of the mainstream.
This very impact made it a prime target for parody works and homages, with popular examples including Spaceballs; Family Guy's "Laugh It Up, Fuzzball"; Robot Chicken's "Star Wars Episode I", "Star Wars Episode II", and "Star Wars Episode III"; and Hardware Wars by Ernie Fosselius. In 1989, the Library of Congress selected the original Star Wars film for preservation in the U.S. National Film Registry as being "culturally, historically, or aesthetically significant". Its sequel, The Empire Strikes Back, was selected in 2010. Despite this archival recognition, it is unclear whether copies of the 1977 and 1980 theatrical versions of Star Wars and Empire, or copies of the 1997 Special Edition versions, have been archived by the NFR, or indeed whether any copy has been provided by Lucasfilm and accepted by the Registry. The original Star Wars film was a huge success for 20th Century Fox, and was credited with reinvigorating the company. Within three weeks of the film's release, the studio's stock price doubled to a record high. Prior to 1977, 20th Century Fox's greatest annual profit was $37 million; in 1977, the company broke that record by posting a profit of $79 million. The franchise helped Fox change from an almost bankrupt production company into a thriving media conglomerate. Star Wars fundamentally changed the aesthetics and narratives of Hollywood films, switching the focus of Hollywood-made films from deep, meaningful stories based on dramatic conflict, themes, and irony to sprawling, special-effects-laden blockbusters, as well as changing the Hollywood film industry in fundamental ways. Before Star Wars, special effects in films had not appreciably advanced since the 1950s. The commercial success of Star Wars created a boom in state-of-the-art special effects in the late 1970s.
Along with Jaws, Star Wars started the tradition of the summer blockbuster film in the entertainment industry, where films open on many screens at the same time and profitable franchises are important. It created the model for the major film trilogy and showed that merchandising rights on a film could generate more money than the film itself. The Star Wars saga has inspired many fans to create their own non-canon material set in the Star Wars galaxy. In recent years, this has ranged from writing fan fiction to creating fan films. In 2002, Lucasfilm sponsored the first annual Official Star Wars Fan Film Awards, officially recognizing filmmakers and the genre. Because of concerns over potential copyright and trademark issues, however, the contest was initially open only to parodies, mockumentaries, and documentaries. Fan fiction films set in the Star Wars universe were originally ineligible, but in 2007 Lucasfilm changed the submission standards to allow in-universe fiction entries. Lucasfilm, for the most part, has allowed but not endorsed the creation of these derivative fan fiction works, so long as no such work attempts to make a profit from or tarnish the Star Wars franchise in any way. While many fan films have used elements from the licensed Expanded Universe to tell their story, they are not considered an official part of the Star Wars canon. As the characters and story line of the original trilogy are so well known, educationalists have advocated the use of the films in the classroom as a learning resource. For example, a project in Western Australia honed elementary school students' storytelling skills by role-playing action scenes from the movies and later creating props and audio/visual scenery to enhance their performance. Others have used the films to encourage second-level students to integrate technology in the science classroom by making prototype lightsabers.
Similarly, psychiatrists in New Zealand and the US have advocated their use in the university classroom to explain different types of psychopathology. The success of the Star Wars films led the franchise to become one of the most merchandised franchises in the world. In 1977, while filming the original film, George Lucas decided to take a $500,000 pay cut to his salary as director in exchange for full ownership of the franchise's merchandising rights. Over the franchise's lifetime, that exchange has cost 20th Century Fox more than US$20 billion in merchandising revenue. Disney acquired the merchandising rights as part of its purchase of Lucasfilm. Kenner made the first Star Wars action figures to coincide with the release of the film, and today the remaining 1980s figures sell at extremely high prices at auction. Since the 1990s, Hasbro has held the rights to create action figures based on the saga. Star Wars Pez dispensers have also been produced. Star Wars was the first intellectual property to be licensed in Lego Group history, and Lego has produced a Star Wars Lego theme. Lego has produced animated parody short films to promote its sets, among them Revenge of the Brick (2005) and The Quest for R2-D2 (2009); the former parodies Revenge of the Sith, the latter The Clone Wars film. Due to their success, Lego created animated comedy miniseries, among them The Yoda Chronicles (2013–2014) and Droid Tales (2015), which originally aired on Cartoon Network but moved to Disney XD in 2014. The Lego Star Wars video games are critically acclaimed best sellers. Star Wars board games date to 1977 with Star Wars: Escape from the Death Star (not to be confused with another board game of the same title, published in 1990). The board game Risk has been adapted to the series in two editions by Hasbro: Star Wars Risk: The Clone Wars Edition (2005) and Risk: Star Wars Original Trilogy Edition (2006).
Three different official tabletop role-playing games have been developed for the Star Wars universe: a version by West End Games in the 1980s and 1990s, one by Wizards of the Coast in the 2000s, and one by Fantasy Flight Games in the 2010s. Star Wars trading cards have been published since the first "blue" series by Topps in 1977. Dozens of series have been produced, with Topps being the licensed creator in the United States. Some of the card series are of film stills, while others are original art. Many of the cards have become highly collectible, with some very rare "promos", such as the 1993 Galaxy Series II "floating Yoda" P3 card, often commanding US$1,000 or more. While most "base" or "common card" sets are plentiful, many "insert" or "chase" cards are very rare. From 1995 until 2001, Decipher, Inc. held the license for, and created and produced, a collectible card game based on Star Wars: the Star Wars Collectible Card Game (also known as SWCCG).
what is the fastest stock car in the world
Production car speed record - wikipedia This is a list of the world's record-breaking top speeds achieved by street-legal production cars (as opposed to concept cars or modified cars). For the purposes of this list, eligible cars are defined in the list's rules. This list uses the same definition as the List of automotive superlatives for the sake of consistency, and because the term production car is otherwise undefined. The Benz Velo, as the first production car, is an exception. Comparing claimed speeds of the fastest production cars in the world, especially in historical cases, is difficult, as there is no standardized method for determining the top speed and no central authority to verify any such claims. Examples of the difficulties faced were shown in the dispute between Bugatti and Hennessey over which car was the world's fastest. The Dauer 962 Le Mans, introduced in 1993, reached an independently measured 404.6 km/h (251.4 mph) in 1998 and was considered the fastest production car by several publications. But since only 13 cars were built, while the rules for this list require a minimum of 25, it does not qualify. The Koenigsegg CCR recorded a top speed of 387.866 km/h (241.009 mph) at the Nardò Ring testing facility on 28 February 2005. The record was supervised and accredited by Guinness World Records at the time, and a certificate recognising this achievement was awarded, citing the CCR as "the fastest production car... which achieved a speed of 387.866 km/h over a measured kilometre at the Nardo Prototipo proving ground, Italy". Fourteen examples of the CCR were produced in total; a production run under 25 units does not qualify the CCR for inclusion in the table below. On 4 July 2010, the Bugatti Veyron Super Sport reached a 431.072 km/h (267.856 mph) two-way average. Bugatti built 30 Super Sports (5 of them named World Record Edition).
At the time the record was set, it was known that the customer cars were electronically limited to 415 km/h (257.87 mph). The Guinness Book of Records (which in the 1990s had listed speeds by British cars with modified rev limiters as production car records) listed the unlimited 431.072 km/h (267.856 mph) as the production car speed record. Yet three years later, after a query by the Sunday Times, Guinness's PR director Jaime Strang was quoted: "As the car's speed limiter was deactivated, this modification was against the official guidelines. Consequently, the vehicle's record set at 431.072 km/h is no longer valid." Five days later, it was written on the Guinness website: "Guinness World Records would like to confirm that Bugatti's record has not been disqualified; the record category is currently under review." Five days after that, Bugatti's speed record was confirmed: "Following a thorough review conducted with a number of external experts, Guinness World Records is pleased to announce the confirmation of Bugatti's record of Fastest production car achieved by the Veyron 16.4 Super Sport. The focus of the review was with respect to what may constitute a modification to a car's standard specification. Having evaluated all the necessary information, Guinness World Records is now satisfied that a change to the speed limiter does not alter the fundamental design of the car or its engine." In 2014, a Hennessey Venom GT was recorded at 435.31 km/h (270.49 mph), but as the run was in one direction only and only 12 cars (plus one prototype) were ever made, it does not qualify under the Guinness Book of Records criteria or this list's criteria as the world's fastest production car. Guinness accepted it as a production car, however. Because of the inconsistencies among the various definitions of production cars, dubious claims by manufacturers and self-interest groups, and inconsistent or changing application of the definitions, this list has a defined set of requirements.
For further explanation of how these were arrived at, see the above link. This list is also limited to post-World War II production road cars which reached more than 124 mph (200 km/h); older cars are excluded even if they were faster. The Benz Velo, as the first petrol-driven car, is the only exception. For the purposes of this list, a production car is defined as a vehicle that is: To establish the top speed for cars, at least since the 1990s, the requirement is, in addition to the above, an independent road test with a two-way run. The mean of the top speeds for both runs is taken as the car's top speed. In instances where the top speed has been determined by removing the limiter, but the test otherwise met these requirements and the car is sold with the limiter on, the limited speed is accepted as meeting this requirement. For the McLaren F1, Car and Driver's estimate of the speed at the rev limiter is used.
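The two-way averaging rule described above can be sketched in a few lines of code (an illustrative calculation only, not part of any official test procedure; the example pair of runs is hypothetical, chosen to average to the Veyron Super Sport figure quoted earlier):

```python
# Two-way average: a record attempt is run in both directions and the
# mean of the two top speeds is taken, cancelling out wind and gradient.
def two_way_average(run_a_kmh: float, run_b_kmh: float) -> float:
    return (run_a_kmh + run_b_kmh) / 2

def kmh_to_mph(kmh: float) -> float:
    # 1 mile = 1.609344 km exactly
    return kmh / 1.609344

# Hypothetical pair of runs averaging to the 431.072 km/h figure above
average = two_way_average(427.0, 435.144)
print(round(average, 3))              # 431.072
print(round(kmh_to_mph(average), 3))  # 267.856
```

This also shows why a one-direction run, like the Venom GT's, is not directly comparable: a single pass can be flattered by tailwind or slope that the two-way mean would cancel.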
where is the ncaa women's championship played at
2018 NCAA Division I Women's Basketball tournament - wikipedia The 2018 NCAA Division I Women's Basketball Tournament began on March 16, 2018, and concluded with the national championship game on Sunday, April 1. The Final Four was played at Nationwide Arena in Columbus, Ohio. This was the third time that the women's Final Four was played in Ohio, after previously being held in Cincinnati in 1997 and Cleveland in 2007, and the first time that it was played in Columbus. For only the fourth time in the tournament's 37-year history, all four of the number one seeds made it to the Final Four. Tennessee continued its record streak of making every NCAA Women's Basketball Tournament, at 37 consecutive appearances. Connecticut also continued its record streak of 11 consecutive Final Four appearances. The first two rounds, also referred to as the subregionals, were played at the sites of the top 16 seeds, as was done in 2016 and 2017. The following are the sites selected to host the last four rounds of the 2018 tournament. Subregionals (First and Second Rounds) Regional Semifinals and Finals (Sweet Sixteen and Elite Eight) National Semifinals and Championship (Final Four and Championship) Selections for the 2018 NCAA Division I Women's Basketball Championship were announced at 7 p.m. Eastern time, Monday, March 12, via ESPN. The basis for the subregionals returned to the approach used between 1982 and 2002; the top sixteen teams, as chosen in the bracket selection process, hosted the first two rounds on campus. A total of 64 teams entered the 2018 tournament. 32 automatic bids were given to teams that won their conference tournament. The remaining 32 teams were granted "at-large" bids, which were extended by the NCAA Selection Committee. The Selection Committee also seeded the entire field from 1 to 64. The following teams automatically qualified for the 2018 NCAA field by virtue of winning their conference's tournament.
All times are listed as Eastern Daylight Time (UTC−4). * -- Denotes overtime period. During the Final Four round, regardless of the seeds of the participating teams, the champion of the overall top seed's region (Connecticut's Albany Region) plays against the champion of the fourth-ranked top seed's region (Notre Dame's Spokane Region), and the champion of the second-ranked top seed's region (Mississippi State's Kansas City Region) plays against the champion of the third-ranked top seed's region (Louisville's Lexington Region). March 30 April 1 * -- Denotes overtime period. ESPN had US television rights to all games during the tournament. During the first and second rounds, ESPN aired select games nationally on ESPN2, ESPNU, and ESPNews. All other games aired regionally on ESPN, ESPN2, or ESPN3 and were streamed online via WatchESPN. Most of the nation got whip-around coverage during this time, which allowed ESPN to rotate between the games and focus the nation on the game that had the closest score. First & Second Rounds Friday / Sunday Sweet Sixteen & Elite Eight Friday / Sunday Final Four First & Second Rounds Saturday / Monday Sweet Sixteen & Elite Eight Saturday / Monday Championship Westwood One had exclusive radio rights to the entire tournament. Teams participating in the Regional Finals, Final Four, and Championship were allowed to have their own local broadcasts, but they weren't allowed to stream those broadcasts online. Regional Finals Sunday Final Four Regional Finals Monday Championship
where is the grand canyon located in az
Grand Canyon - Wikipedia The Grand Canyon (Hopi: Ongtupqa; Yavapai: Wi: kaʼi: la, Navajo: Tsékooh Hatsoh, Spanish: Gran Cañón) is a steep - sided canyon carved by the Colorado River in Arizona, United States. The Grand Canyon is 277 miles (446 km) long, up to 18 miles (29 km) wide and attains a depth of over a mile (6,093 feet or 1,857 meters). The canyon and adjacent rim are contained within Grand Canyon National Park, the Kaibab National Forest, Grand Canyon - Parashant National Monument, the Hualapai Indian Reservation, the Havasupai Indian Reservation and the Navajo Nation. President Theodore Roosevelt was a major proponent of preservation of the Grand Canyon area, and visited it on numerous occasions to hunt and enjoy the scenery. Nearly two billion years of Earth 's geological history have been exposed as the Colorado River and its tributaries cut their channels through layer after layer of rock while the Colorado Plateau was uplifted. While some aspects about the history of incision of the canyon are debated by geologists, several recent studies support the hypothesis that the Colorado River established its course through the area about 5 to 6 million years ago. Since that time, the Colorado River has driven the down - cutting of the tributaries and retreat of the cliffs, simultaneously deepening and widening the canyon. For thousands of years, the area has been continuously inhabited by Native Americans, who built settlements within the canyon and its many caves. The Pueblo people considered the Grand Canyon a holy site, and made pilgrimages to it. The first European known to have viewed the Grand Canyon was García López de Cárdenas from Spain, who arrived in 1540. The Grand Canyon is a river valley in the Colorado Plateau that exposes uplifted Proterozoic and Paleozoic strata, and is also one of the six distinct physiographic sections of the Colorado Plateau province. It is not the deepest canyon in the world (Kali Gandaki Gorge in Nepal is much deeper). 
However, the Grand Canyon is known for its visually overwhelming size and its intricate and colorful landscape. Geologically, it is significant because of the thick sequence of ancient rocks that are well preserved and exposed in the walls of the canyon. These rock layers record much of the early geologic history of the North American continent. Uplift associated with mountain formation later moved these sediments thousands of feet upward and created the Colorado Plateau. The higher elevation has also resulted in greater precipitation in the Colorado River drainage area, but not enough to change the Grand Canyon area from being semi-arid. The uplift of the Colorado Plateau is uneven, and the Kaibab Plateau that the Grand Canyon bisects is about one thousand feet (300 m) higher at the North Rim than at the South Rim. Almost all runoff from the North Rim (which also gets more rain and snow) flows toward the Grand Canyon, while much of the runoff on the plateau behind the South Rim flows away from the canyon (following the general tilt). The result is deeper and longer tributary washes and canyons on the north side and shorter and steeper side canyons on the south side. Temperatures on the North Rim are generally lower than those on the South Rim because of the greater elevation (averaging 8,000 feet or 2,400 metres above sea level). Heavy rains are common on both rims during the summer months. Access to the North Rim via the primary route leading to the canyon (State Route 67) is limited during the winter season due to road closures. The Grand Canyon is part of the Colorado River basin, which has developed over the past 70 million years, in part based on apatite (U-Th)/He thermochronometry showing that the Grand Canyon reached a depth near to the modern depth by 20 million years ago. A recent study examining caves near the Grand Canyon places their origins beginning about 17 million years ago.
Previous estimates had placed the age of the canyon at 5 -- 6 million years. The study, which was published in the journal Science in 2008, used uranium-lead dating to analyze calcite deposits found on the walls of nine caves throughout the canyon. The finding is controversial because it represents a substantial departure from the prior, widely supported scientific consensus. In December 2012, a study published in the journal Science claimed new tests suggested the Grand Canyon could be as old as 70 million years. However, this study has been criticized by those who support the "young canyon" age of around six million years as "(an) attempt to push the interpretation of their new data to their limits without consideration of the whole range of other geologic data sets." The canyon is the result of erosion which exposes one of the most complete geologic columns on the planet. The major geologic exposures in the Grand Canyon range in age from the 2-billion-year-old Vishnu Schist at the bottom of the Inner Gorge to the 230-million-year-old Kaibab Limestone on the Rim. There is a gap of about a billion years between the 500-million-year-old stratum and the level below it, which dates to about 1.5 billion years ago. This large unconformity indicates a long period for which no deposits are present. Many of the formations were deposited in warm shallow seas, near-shore environments (such as beaches), and swamps as the seashore repeatedly advanced and retreated over the edge of a proto-North America. Major exceptions include the Permian Coconino Sandstone, which contains abundant geological evidence of aeolian sand dune deposition. Several parts of the Supai Group were also deposited in non-marine environments.
The great depth of the Grand Canyon and especially the height of its strata (most of which formed below sea level) can be attributed to 5 -- 10 thousand feet (1,500 to 3,000 m) of uplift of the Colorado Plateau, starting about 65 million years ago (during the Laramide Orogeny). This uplift has steepened the stream gradient of the Colorado River and its tributaries, which in turn has increased their speed and thus their ability to cut through rock (see the elevation summary of the Colorado River for present conditions). Weather conditions during the ice ages also increased the amount of water in the Colorado River drainage system. The ancestral Colorado River responded by cutting its channel faster and deeper. The base level and course of the Colorado River (or its ancestral equivalent) changed 5.3 million years ago when the Gulf of California opened and lowered the river 's base level (its lowest point). This increased the rate of erosion and cut nearly all of the Grand Canyon 's current depth by 1.2 million years ago. The terraced walls of the canyon were created by differential erosion. Between 100,000 and 3 million years ago, volcanic activity deposited ash and lava over the area which at times completely obstructed the river. These volcanic rocks are the youngest in the canyon. The Ancestral Puebloans were a Native American culture centered on the present - day Four Corners area of the United States. They were the first people known to live in the Grand Canyon area. The cultural group has often been referred to in archaeology as the Anasazi, although the term is not preferred by the modern Puebloan peoples. The word "Anasazi '' is Navajo for "Ancient Ones '' or "Ancient Enemy ''. Archaeologists still debate when this distinct culture emerged. The current consensus, based on terminology defined by the Pecos Classification, suggests their emergence was around 1200 BCE during the Basketmaker II Era. 
Beginning with the earliest explorations and excavations, researchers have believed that the Ancient Puebloans are ancestors of the modern Pueblo peoples. In addition to the Ancestral Puebloans, a number of distinct cultures have inhabited the Grand Canyon area. The Cohonina lived to the west of the Grand Canyon, between 500 and 1200 CE. The Cohonina were ancestors of the Yuman, Havasupai, and Hualapai peoples who inhabit the area today. The Sinagua were a cultural group occupying an area to the southeast of the Grand Canyon, between the Little Colorado River and the Salt River, between approximately 500 and 1425 CE. The Sinagua may have been ancestors of several Hopi clans. By the time of the arrival of Europeans in the 16th century, newer cultures had evolved. The Hualapai inhabit a 100 - mile (160 km) stretch along the pine - clad southern side of the Grand Canyon. The Havasupai have been living in the area near Cataract Canyon since the beginning of the 13th century, occupying an area the size of Delaware. The Southern Paiutes live in what is now southern Utah and northern Arizona. The Navajo, or Diné, live in a wide area stretching from the San Francisco Peaks eastwards towards the Four Corners. Archaeological and linguistic evidence suggests the Navajo descended from the Athabaskan people near Great Slave Lake, Canada, who migrated after the 11th century. In September 1540, under orders from the conquistador Francisco Vázquez de Coronado to search for the fabled Seven Cities of Cibola, Captain García López de Cárdenas, along with Hopi guides and a small group of Spanish soldiers, traveled to the south rim of the Grand Canyon between Desert View and Moran Point. Pablo de Melgrossa, Juan Galeras, and a third soldier descended some one third of the way into the canyon until they were forced to return because of lack of water. 
In their report, they noted that some of the rocks in the canyon were "bigger than the great tower of Seville, Giralda." It is speculated that their Hopi guides likely knew routes to the canyon floor, but may have been reluctant to lead the Spanish to the river. No Europeans visited the canyon again for more than two hundred years. Fathers Francisco Atanasio Domínguez and Silvestre Vélez de Escalante were two Spanish priests who, with a group of Spanish soldiers, explored southern Utah and traveled along the north rim of the canyon in Glen and Marble Canyons in search of a route from Santa Fe to California in 1776. They eventually found a crossing, formerly known as the "Crossing of the Fathers," that today lies under Lake Powell. Also in 1776, Fray Francisco Garces, a Franciscan missionary, spent a week near Havasupai, unsuccessfully attempting to convert a band of Native Americans to Christianity. He described the canyon as "profound." James Ohio Pattie, along with a group of American trappers and mountain men, may have been the next European to reach the canyon, in 1826. Jacob Hamblin, a Mormon missionary, was sent by Brigham Young in the 1850s to locate suitable river crossing sites in the canyon. Building good relations with local Hualapai and white settlers, he found the Crossing of the Fathers and the locations of what would become Lees Ferry in 1858 and Pearce Ferry (later operated by, and named for, Harrison Pearce) -- only the latter two sites being suitable for ferry operation. He also acted as an advisor to John Wesley Powell before his second expedition to the Grand Canyon, serving as a diplomat between Powell and the local native tribes to ensure the safety of his party. In 1857, Edward Fitzgerald Beale was superintendent of an expedition to survey a wagon road along the 35th parallel from Fort Defiance, Arizona, to the Colorado River. He led a small party of men in search of water on the Coconino Plateau near the canyon's south rim.
On September 19, near present-day National Canyon, they came upon what May Humphreys Stacey described in his journal as "... a wonderful canyon four thousand feet deep. Everyone (in the party) admitted that he never before saw anything to match or equal this astonishing natural curiosity." Also in 1857, the U.S. War Department asked Lieutenant Joseph Ives to lead an expedition to assess the feasibility of up-river navigation from the Gulf of California. In the stern wheeler steamboat Explorer, after two months and 350 miles (560 km) of difficult navigation, his party reached Black Canyon, some two months after George Johnson. The Explorer struck a rock and was abandoned. Ives led his party east into the canyon -- they may have been the first Europeans to travel the Diamond Creek drainage -- and traveled eastwards along the south rim. In his "Colorado River of the West" report to the Senate in 1861, he states that "One or two trappers profess to have seen the canyon." According to the San Francisco Herald, in a series of articles run in 1853, Captain Joseph R. Walker in January 1851, with his nephew James T. Walker and six men, traveled up the Colorado River to a point where it joined the Virgin River and continued east into Arizona, traveling along the Grand Canyon and making short exploratory side trips along the way. Walker is reported to have said he wanted to visit the Moqui Indians, as the Hopi were then called by whites. He had met these people briefly in previous years, thought them exceptionally interesting, and wanted to become better acquainted. The Herald reporter then stated, "We believe that Captain Joe Walker is the only white man in this country that has ever visited this strange people." In 1858, John Strong Newberry became probably the first geologist to visit the Grand Canyon. In 1869, Major John Wesley Powell led the first expedition down the canyon. Powell set out to explore the Colorado River and the Grand Canyon.
Gathering nine men, four boats, and food for 10 months, he set out from Green River, Wyoming, on May 24. Passing through (or portaging around) a series of dangerous rapids, the group passed down the Green River to its confluence with the Colorado River, near present-day Moab, Utah, and completed the journey with many hardships through the Grand Canyon on August 13, 1869. In 1871 Powell first used the term "Grand Canyon"; previously it had been called the "Big Canyon." In 1889, Frank M. Brown wanted to build a railroad along the Colorado River to carry coal. He, his chief engineer Robert Brewster Stanton, and 14 others started to explore the Grand Canyon in poorly designed cedar wood boats, with no life preservers. Brown drowned in an accident near Marble Canyon; Stanton made new boats and proceeded to explore the Colorado all of the way to the Gulf of California. The Grand Canyon became an official national monument in 1908 and a national park in 1919. U.S. President Theodore Roosevelt visited the Grand Canyon in 1903. An avid outdoorsman and staunch conservationist, Roosevelt established the Grand Canyon Game Preserve on November 28, 1906. Livestock grazing was reduced, but predators such as mountain lions, eagles, and wolves were eradicated. Roosevelt, along with other members of his conservation group, the Boone and Crockett Club, helped form the National Parks Association, which in turn lobbied for the Antiquities Act of 1906, which gave Roosevelt the power to create national monuments. Once the act was passed, Roosevelt immediately added adjacent national forest lands and redesignated the preserve a U.S. National Monument on January 11, 1908. Opponents such as land and mining claim holders blocked efforts to reclassify the monument as a U.S. National Park for 11 years. Grand Canyon National Park was finally established as the 17th U.S. National Park by an Act of Congress signed into law by President Woodrow Wilson on February 26, 1919.
The federal government administrators who manage park resources face many challenges. These include issues related to the recent reintroduction into the wild of the highly endangered California condor, air tour overflight noise levels, water rights disputes with various tribal reservations that border the park, and forest fire management. Federal officials started a flood in the Grand Canyon in hopes of restoring its ecosystem on March 5, 2008. The canyon 's ecosystem was permanently changed after the construction of the Glen Canyon Dam in 1963. Between 2003 and 2011, 2,215 mining claims had been requested that are adjacent to the canyon, including claims for uranium mines. Mining has been suspended since 2009, when U.S. Interior Secretary Ken Salazar withdrew 1 million acres (4,000 km) from the permitting process, pending assessment of the environmental impact of mining. Critics of the mines are concerned that, once mined, the uranium will leach into the water of the Colorado River and contaminate the water supply for up to 18 million people. Salazar 's so - called "Northern Arizona Withdrawal '' is a 20 - year moratorium on new mines, but allows existing mines to continue. In 2012, the federal government stopped new mines in the area, which was upheld by the U.S. District Court for Arizona in 2014, but appealed by the National Mining Association, joined by the state of Arizona under Attorney General Mark Brnovich as well as Utah, Montana and Nevada. National Mining Association v. Jewell is pending before the Ninth Circuit Court of Appeals as of September 2015. There are several historic buildings located along the South Rim with most in the vicinity of Grand Canyon Village. Weather in the Grand Canyon varies according to elevation. The forested rims are high enough to receive winter snowfall, but along the Colorado River in the Inner Gorge, temperatures are similar to those found in Tucson and other low elevation desert locations in Arizona. 
Conditions in the Grand Canyon region are generally dry, but substantial precipitation occurs twice annually, during seasonal pattern shifts in winter (when Pacific storms usually deliver widespread, moderate rain and high - elevation snow to the region from the west) and in late summer (due to the North American Monsoon, which delivers waves of moisture from the southeast, causing dramatic, localized thunderstorms fueled by the heat of the day). Average annual precipitation on the South Rim is less than 16 inches (41 cm), with 60 inches (150 cm) of snow; the higher North Rim usually receives 27 inches (69 cm) of moisture, with a typical snowfall of 144 inches (370 cm); and Phantom Ranch, far below the canyon 's rims along the Colorado River at 2,500 feet (762 m) gets just 8 inches (20 cm) of rain, and snow is a rarity. Temperatures vary wildly throughout the year, with summer highs within the Inner Gorge commonly exceeding 100 ° F (37.8 ° C) and winter minimum temperatures sometimes falling below zero degrees Fahrenheit (− 17.8 ° C) along the canyon 's rims. Visitors are often surprised by these potentially extreme conditions, and this, along with the high altitude of the canyon 's rims, can lead to unpleasant side effects such as dehydration, sunburn, and hypothermia. Weather conditions can greatly affect hiking and canyon exploration, and visitors should obtain accurate forecasts because of hazards posed by exposure to extreme temperatures, winter storms and late summer monsoons. While the park service posts weather information at gates and visitor centers, this is a rough approximation only, and should not be relied upon for trip planning. For accurate weather in the canyon, hikers should consult the National Weather Service 's NOAA weather radio or the official National Weather Service website. The National Weather Service has had a cooperative station on the South Rim since 1903. 
The record high temperature on the South Rim was 105 °F (41 °C) on June 26, 1974, and the record low temperature was −20 °F (−29 °C) on January 1, 1919, February 1, 1985, and December 23, 1990. The Grand Canyon area has some of the cleanest air in the United States. However, at times the air quality can be considerably affected by events such as forest fires and dust storms in the Southwest. Effects on air quality and visibility in the canyon come mainly from sulfates, soils, and organics. The sulfates largely result from urban emissions in southern California, borne on the prevailing westerly winds throughout much of the year, and emissions from Arizona's copper smelter region, borne on southerly or southeasterly winds during the monsoon. Airborne soils originate with windy conditions and road dust. Organic particles result from vehicle emissions, long-range transport from urban areas, and forest fires, as well as from VOCs emitted by vegetation in the surrounding forests. Nitrates, carried in from urban areas, stationary sources, and vehicle emissions, as well as black carbon from forest fires and vehicle emissions, also contribute to a lesser extent. A number of actions have been taken to preserve and further improve air quality and visibility at the canyon. In 1990, amendments to the Clean Air Act established the Grand Canyon Visibility Transport Commission (GCVTC) to advise the US EPA on strategies for protecting visual air quality on the Colorado Plateau. The GCVTC released its final report in 1996 and initiated the Western Regional Air Partnership (WRAP), a partnership of state, tribal, and federal agencies to help coordinate implementation of the Commission's recommendations. In 1999, the Regional Haze Rule established a goal of restoring visibility in national parks and wilderness areas (Class 1 areas), such as the Grand Canyon, to natural background levels by 2064.
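The imperial climate figures quoted in this section convert to metric with the standard formulas; a quick illustrative check (not from the source article):

```python
def f_to_c(deg_f: float) -> float:
    # Fahrenheit to Celsius: C = (F - 32) * 5/9
    return (deg_f - 32) * 5 / 9

def inches_to_cm(inches: float) -> float:
    # 1 inch = 2.54 cm exactly
    return inches * 2.54

# Record South Rim temperatures quoted above
print(round(f_to_c(105)))   # 41  (record high, in deg C)
print(round(f_to_c(-20)))   # -29 (record low, in deg C)

# South Rim average annual precipitation (16 in) in centimetres
print(round(inches_to_cm(16), 1))   # 40.6
```

The results match the article's rounded figures (41 °C, −29 °C, and roughly 41 cm of precipitation).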
Subsequent revisions to the rule provide specific requirements for making reasonable progress toward that goal. In the early 1990s, studies indicated that emissions of SO2, a sulfate precursor, from the Navajo Generating Station affected visibility in the canyon mainly in the winter, and that controlling them would improve wintertime visibility by 2 to 7%. As a result, scrubbers were added to the plant's three units in 1997 through 1999, reducing SO2 emissions by more than 90%. The plant also installed low-NOx SOFA burners in 2009 - 2011, reducing emissions of NOx, a nitrate precursor, by 40%. Emissions from the Mohave Generating Station to the west were similarly found to affect visibility in the canyon. The plant was required to have installed SO2 scrubbers, but was instead shut down in 2005, completely eliminating its emissions. Prescribed fires are typically conducted in the spring and fall in the forests adjacent to the canyon to reduce the potential for severe forest fires and resulting smoke conditions. Although prescribed fires also affect air quality, the controlled conditions allow the use of management techniques to minimize their impact. There are approximately 1,737 known species of vascular plants, 167 species of fungi, 64 species of moss, and 195 species of lichen found in Grand Canyon National Park. This variety is largely due to the 8,000-foot (2,400 m) elevation change from the Colorado River up to the highest point on the North Rim. The Grand Canyon boasts a dozen endemic plants (known only within the Park's boundaries), while only ten percent of the Park's flora is exotic. Sixty-three plants found here have been given special status by the U.S. Fish and Wildlife Service. The Mojave Desert influences the western sections of the canyon, Sonoran Desert vegetation covers the eastern sections, and ponderosa and pinyon pine forests grow on both rims.
Natural seeps and springs percolating out of the canyon walls are home to 11 % of all the plant species found in the Grand Canyon. The canyon itself can act as a connection between the east and the west by providing corridors of appropriate habitat along its length. The canyon can also be a genetic barrier to some species, like the tassel - eared squirrel. The aspect, or direction a slope faces, also plays a major role in adding diversity to the Grand Canyon. North - facing slopes receive about one - third the normal amount of sunlight, so plants growing there are similar to plants found at higher elevations, or in more northern latitudes. The south - facing slopes receive the full amount of sunlight and are covered in vegetation typical of the Sonoran Desert. Of the 90 mammal species found along the Colorado River corridor, 18 are rodents and 22 are bats. The Park contains several major ecosystems. Its great biological diversity can be attributed to the presence of five of the seven life zones and three of the four desert types in North America. The five life zones represented are the Lower Sonoran, Upper Sonoran, Transition, Canadian, and Hudsonian. This is equivalent to traveling from Mexico to Canada. Differences in elevation and the resulting variations in climate are the major factors that form the various life zones and communities in and around the canyon. Grand Canyon National Park contains 129 vegetation communities, and the composition and distribution of plant species is influenced by climate, geomorphology and geology. The Lower Sonoran life zone spans from the Colorado River up to 3,500 feet (1,100 m). Along the Colorado River and its perennial tributaries, a riparian community exists. Coyote willow, arrowweed, seep - willow, western honey mesquite, catclaw acacia, and exotic tamarisk (saltcedar) are the predominant species. 
Hanging gardens, seeps, and springs often contain rare plants such as the white-flowering western redbud, stream orchid, and Flaveria mcdougallii. Endangered fish in the river include the humpback chub and the razorback sucker. The three most common amphibians in these riparian communities are the canyon tree frog, red-spotted toad, and Woodhouse's Rocky Mountain toad. Leopard frogs are very rare in the Colorado River corridor; they have undergone major declines and have not been seen in the canyon in several years. There are 33 crustacean species found in the Colorado River and its tributaries within Grand Canyon National Park. Of these 33, 16 are considered true zooplankton organisms. Only 48 bird species regularly nest along the river, while others use the river as a migration corridor or as overwintering habitat. The bald eagle is one species that uses the river corridor as winter habitat. River otters may have disappeared from the park in the late 20th century, and muskrats are extremely rare. Beavers cut willows, cottonwoods, and shrubs for food, and can significantly affect the riparian vegetation. Other rodents, such as antelope squirrels and pocket mice, are mostly omnivorous, using many different vegetation types. Grand Canyon bats typically roost in desert uplands, but forage on the abundance of insects along the river and its tributaries. In addition to bats, coyotes, ringtails, and spotted skunks are the most numerous riparian predators and prey on invertebrates, rodents, and reptiles. Raccoons, weasels, bobcats, gray foxes, and mountain lions are also present, but are much more rare. Mule deer and desert bighorn sheep are the ungulates that frequent the river corridor. Since the removal of 500 feral burros in the early 1980s, bighorn sheep numbers have rebounded. Mule deer are generally not permanent residents along the river, but travel down from the rim when food and water resources there become scarce.
The insect species commonly found in the river corridor and tributaries are midges, caddis flies, mayflies, stoneflies, black flies, mites, beetles, butterflies, moths, and fire ants. Numerous species of spiders and several species of scorpions including the bark scorpion and the giant desert hairy scorpion inhabit the riparian zone. Eleven aquatic and 26 terrestrial species of mollusks have been identified in and around Grand Canyon National Park. Of the aquatic species, two are bivalves (clams) and nine are gastropods (snails). Twenty - six species of terrestrial gastropods have been identified, primarily land snails and slugs. There are approximately 41 reptile species in Grand Canyon National Park. Ten are considered common along the river corridor and include lizards and snakes. Lizard density tends to be highest along the stretch of land between the water 's edge and the beginning of the upland desert community. The two largest lizards in the canyon are gila monsters and chuckwallas. Many snake species, which are not directly dependent on surface water, may be found both within the inner gorge and the Colorado River corridor. Six rattlesnake species have been recorded in the park. Above the river corridor a desert scrub community, composed of North American desert flora, thrives. Typical warm desert species such as creosote bush, white bursage, brittlebush, catclaw acacia, ocotillo, mariola, western honey mesquite, four - wing saltbush, big sagebrush, blackbrush and rubber rabbitbrush grow in this community. The mammalian fauna in the woodland scrub community consists of 50 species, mostly rodents and bats. Three of the five Park woodrat species live in the desert scrub community. Except for the western (desert) banded gecko, which seems to be distributed only near water along the Colorado River, all of the reptiles found near the river also appear in the uplands, but in lower densities. 
The desert gopher tortoise, a threatened species, inhabits the desert scrublands in the western end of the park. Some of the common insects found at elevations above 2,000 feet (610 m) are orange paper wasps, honey bees, black flies, tarantula hawks, stink bugs, beetles, black ants, and monarch and swallowtail butterflies. Solifugids, wood spiders, garden spiders, black widow spiders and tarantulas can be found in the desert scrub and higher elevations. The Upper Sonoran Life Zone includes most of the inner canyon and South Rim at elevations from 3,500 to 7,000 feet (1,100 to 2,100 m). This zone is generally dominated by blackbrush, sagebrush, and pinyon - juniper woodlands. Elevations of 3,500 to 4,000 feet (1,100 to 1,200 m) are in the Mojave Desert Scrub community of the Upper Sonoran. This community is dominated by the four - winged saltbush and creosote bush; other important plants include Utah agave, narrowleaf mesquite, ratany, catclaw acacia, and various cacti species. Approximately 30 bird species breed primarily in the desert uplands and cliffs of the inner canyon. Virtually all bird species present breed in other suitable habitats throughout the Sonoran and Mohave deserts. The abundance of bats, swifts, and riparian birds provides ample food for peregrines, and suitable eyrie sites are plentiful along the steep canyon walls. Also, several critically endangered California condors that were re-introduced to the Colorado Plateau on the Arizona Strip have made the eastern part of the Park their home. The conifer forests provide habitat for 52 mammal species. Porcupines, shrews, red squirrels, tassel - eared Kaibab and Abert 's squirrels, black bear, mule deer, and elk are found at the park 's higher elevations on the Kaibab Plateau. Above the desert scrub and up to 6,200 feet (1,900 m) is a pinyon pine forest and one - seed juniper woodland. 
Within this woodland one can find big sagebrush, snakeweed, Mormon tea, Utah agave, banana and narrowleaf Yucca, winterfat, Indian ricegrass, dropseed, and needlegrass. There are a variety of snakes and lizards here, but one species of reptile, the mountain short - horned lizard, is a particularly abundant inhabitant of the piñon - juniper and ponderosa pine forests. Ponderosa pine forests grow at elevations between 6,500 and 8,200 feet (2,000 and 2,500 m), on both North and South rims in the Transition life zone. The South Rim includes species such as gray fox, mule deer, bighorn sheep, rock squirrels, pinyon pine and Utah juniper. Additional species such as Gambel oak, New Mexico locust, mountain mahogany, elderberry, creeping mahonia, and fescue have been identified in these forests. The Utah tiger salamander and the Great Basin spadefoot toad are two amphibians that are common in the rim forests. Of the approximately 90 bird species that breed in the coniferous forests, 51 are summer residents and at least 15 of these are known to be neotropical migrants. Elevations of 8,200 to 9,000 feet (2,500 to 2,700 m) are in the Canadian Life Zone, which includes the North Rim and the Kaibab Plateau. Spruce - fir forests characterized by Engelmann spruce, blue spruce, Douglas fir, white fir, aspen, and mountain ash, along with several species of perennial grasses, groundsels, yarrow, cinquefoil, lupines, sedges, and asters, grow in this sub-alpine climate. Mountain lions, Kaibab squirrels, and northern goshawks are found here. Montane meadows and subalpine grassland communities of the Hudsonian life zone are rare and located only on the North Rim. Both are typified by many grass species. Some of these grasses include blue and black grama, big galleta, Indian ricegrass and three - awns. The wettest areas support sedges and forbs. Grand Canyon National Park is one of the world 's premier natural attractions, attracting about five million visitors per year. 
Overall, 83 % were from the United States: California (12.2 %), Arizona (8.9 %), Texas (4.8 %), Florida (3.4 %) and New York (3.2 %) represented the top domestic visitors. Seventeen percent of visitors were from outside the United States; the most prominently represented nations were the United Kingdom (3.8 %), Canada (3.5 %), Japan (2.1 %), Germany (1.9 %) and The Netherlands (1.2 %). The South Rim is open all year round, weather permitting. The North Rim is generally open mid-May to mid-October. Aside from casual sightseeing from the South Rim (averaging 7,000 feet (2,100 m) above sea level), skydiving, rafting, hiking, running, and helicopter tours are popular. The Grand Canyon Ultra Marathon is a 78 - mile (126 km) race over 24 hours. The floor of the valley is accessible by foot, muleback, or by boat or raft from upriver. Hiking down to the river and back up to the rim in one day is discouraged by park officials because of the distance, steep and rocky trails, change in elevation, and danger of heat exhaustion from the much higher temperatures at the bottom. Rescues are required annually of unsuccessful rim - to - river - to - rim travelers. Nevertheless, hundreds of fit and experienced hikers complete the trip every year. Camping on the North and South rims is generally restricted to established campgrounds, and reservations are highly recommended, especially at the busier South Rim. At - large camping is available along many parts of the North Rim managed by Kaibab National Forest. North Rim campsites are only open seasonally due to road closures from weather and winter snowpack. All overnight camping below the rim requires a backcountry permit from the Backcountry Office (BCO). Each year Grand Canyon National Park receives approximately 30,000 requests for backcountry permits. The park issues 13,000 permits, and close to 40,000 people camp overnight. 
The earliest a permit application is accepted is the first of the month, four months before the proposed start month. Tourists wishing for a more vertical perspective can go skydiving, or board helicopters and small airplanes in Boulder, Las Vegas, Phoenix and at Grand Canyon National Park Airport (seven miles from the South Rim) for canyon flyovers. Scenic flights are no longer allowed within 1,500 feet of the rim within the national park because of a late 1990s crash. The last aerial video footage from below the rim was filmed in 1984. However, some helicopter flights land on the Havasupai and Hualapai Indian Reservations within Grand Canyon (outside of the park boundaries). In 2007, the Hualapai Tribe opened the glass - bottomed Grand Canyon Skywalk on their property, Grand Canyon West. The Skywalk is about 250 miles (400 km) by road from Grand Canyon Village at the South Rim. The Skywalk has attracted "thousands of visitors a year, most from Las Vegas ''. In 2016, skydiving at the Grand Canyon became possible with the first Grand Canyon Skydiving operation opening up at the Grand Canyon National Park Airport, on the South Rim. In 2014, a developer announced plans to build a multimedia complex on the canyon 's rim called the Grand Canyon Escalade. On 420 acres there would be shops, an IMAX theater, hotels and an RV park. A gondola would enable easy visits to the canyon floor where a "riverwalk '' of "connected walkways, an eatery, a tramway station, a seating area and a wastewater package plant '' would be situated. Navajo Nation President Ben Shelly has indicated agreement; the tribe would have to invest $65 million for road, water and communication facilities for the $1 billion complex. One of the developers is Navajo and has cited an 8 to 18 percent share of the gross revenue for the tribe as an incentive. Lipan Point is a promontory located on the South Rim. This point is located to the east of the Grand Canyon Village along the Desert View Drive. 
There is a parking lot for visitors who care to drive, along with the canyon 's bus service that routinely stops at the point. The trailhead to the Tanner Trail is located just before the parking lot. The view from Lipan Point shows a wide array of rock strata and the Unkar Creek area in the inner canyon. About 770 deaths have occurred between the mid 1800s and 2015. Of the fatalities that occurred from 1869 to 2001, some were as follows: 53 resulted from falls; 65 were attributable to environmental causes, including heat stroke, cardiac arrest, dehydration, and hypothermia; 7 were caught in flash floods; 79 were drowned in the Colorado River; 242 perished in airplane and helicopter crashes (128 of them in the 1956 disaster mentioned below); 25 died in freak errors and accidents, including lightning strikes and rock falls; and 23 were the victims of homicides. In 1956, the Grand Canyon was the site of the deadliest commercial aviation disaster in history at the time. On the morning of June 30, 1956, a TWA Lockheed Super Constellation and a United Airlines Douglas DC - 7 departed Los Angeles International Airport within three minutes of one another on eastbound transcontinental flights. Approximately 90 minutes later, the two propeller - driven airliners collided above the canyon while both were flying in unmonitored airspace. The wreckage of both planes fell into the eastern portion of the canyon, on Temple and Chuar Buttes, near the confluence of the Colorado and Little Colorado rivers. The disaster killed all 128 passengers and crew members aboard both planes. This accident led to the institution of high - altitude airways and direct radar observation of aircraft (known as positive control) by en route ground controllers.
who sings the 4 song on sesame street
Feist (singer) - wikipedia Leslie Feist (born 13 February 1976), known professionally as Feist, is a Canadian indie pop singer - songwriter and guitarist, performing both as a solo artist and as a member of the indie rock group Broken Social Scene. Feist launched her solo music career in 1999 with the release of Monarch. Her subsequent studio albums, Let It Die, released in 2004, and The Reminder, released in 2007, were critically acclaimed and commercially successful, selling over 2.5 million copies. The Reminder earned Feist four Grammy nominations, including a nomination for Best New Artist. She has received 11 Juno Awards, including two Artist of the Year. Her fourth studio album, Metals, was released on 30 September 2011. In 2012, Feist collaborated on a split EP with metal group Mastodon, releasing an interactive music video in the process. Feist received three Juno awards at the 2012 ceremony: Artist of the Year, Adult Alternative Album of the Year for Metals, and Music DVD of the Year for her documentary Look at What the Light Did Now. Leslie Feist was born on 13 February 1976 in Amherst, Nova Scotia, Canada. Her parents are both artists. Her father, Harold Feist, is an American - Canadian abstract expressionist painter who taught at both the Alberta College of Art and Design and Mount Allison University in Sackville, New Brunswick. Her mother, Lyn Feist, was a student of ceramics from Saskatchewan. After their first child, Ben, was born, the family moved to Sackville. Feist 's parents divorced soon after she was born and Ben, Feist and their mother moved to Regina, Saskatchewan, where they lived with her grandparents. They later moved to Calgary, Alberta, where she attended Bishop Carroll High School as well as Alternative High School. She aspired to be a writer, and spent much of her youth singing in choirs. 
At the age of twelve, Feist performed as one of 1,000 dancers in the opening ceremonies of the Calgary Winter Olympics, which she cites as inspiration for the video "1234. '' Because her father is American, Feist has dual Canadian - U.S. citizenship, joking later that she was given U.S. citizenship as part of a deal with Apple. In 1991, at age 15, Feist got her start in music when she founded and was the lead vocalist for a Calgary punk band called Placebo (not to be confused with the English band Placebo). She and her bandmates won a local Battle of the Bands competition and were awarded the opening slot at the festival Infest 1993, featuring the Ramones. At this concert she met Brendan Canning, whose band hHead performed immediately before hers, and with whom she joined in Broken Social Scene ten years later. In 1995, Feist was forced to take time off from music to recover from vocal cord damage. She moved from Calgary to Toronto in 1996. That year she was asked by Noah Mintz of hHead to play bass in his solo project Noah 's Arkweld. She played the bass guitar in Noah 's Arkweld for a year despite never having played bass before. In 1998, she became the rhythm guitarist for the band By Divine Right and toured with them throughout 1998, 1999, and 2000. She also played guitar for some live performances by Bodega, but was never an official member of the band. In 1999, Feist moved into a Queen West apartment above Come As You Are with a friend of a friend, Merrill Nisker, who then began to perform as electro - punk musician Peaches. Feist worked the back of the stage at Peaches ' shows, using a sock puppet and calling herself "Bitch Lap Lap ''. The two also toured together in England from 2000 -- 2001, staying with Justine Frischmann of Elastica and with M.I.A. Feist appeared as a guest vocalist on The Teaches of Peaches. Feist appears in Peaches ' video for the song "Lovertits '', suggestively rubbing and licking a bike. 
Later, Feist covered this song with Gonzales (whom she met while touring with Peaches) on her album Open Season. In 2006, Feist contributed backup vocals on a track entitled "Give ' Er '', which appeared on Peaches ' album Impeach My Bush. Feist 's solo debut album, Monarch, was released in 1999. It is composed of ten songs, including "Monarch '' and "That 's What I Say, It 's Not What I Mean. '' The album was produced by Dan Kurtz, who would later form Dragonette. In the summer of 2001, Feist self - produced seven songs at home which she called The Red Demos, which have never been released commercially. She spent more than two years touring throughout Europe with Gonzales. In that same year she joined a group of old friends in forming a new version of Toronto indie rock group Broken Social Scene, adding vocals to many tracks after being forbidden to play guitar by de facto bandleader Kevin Drew. She subsequently recorded You Forgot It in People with the band. While on tour in Europe with Gonzales, they began recording new versions of her home recorded Red Demos, which would later become her major label debut Let It Die. Let It Die featured both original compositions and covers, and Feist has been noted both as a songwriter and as an innovative interpreter of other artists ' songs. After the recording of Let It Die, Feist moved to Paris. While in Europe, she collaborated with Norwegian duo Kings of Convenience as co-writer and guest vocalist on their album Riot on an Empty Street, singing on "Know How, '' and "The Build Up. '' She also co-wrote and sang "The Simple Story '' as a duet with Jane Birkin on her album Rendezvous. Feist toured during 2004, 2005 and 2006 through North America, Europe, Asia, and Australia supporting Let It Die. She won two Canadian Juno Awards for "Best New Artist '' and "Best Alternative Rock Album '' in 2004. 
Sales of Let It Die totaled 500,000 internationally, and she was awarded a platinum record in Canada, as well as a gold album in France. Fellow Canadian Buck 65 appeared in the Feist - directed music video for "One Evening, '' which was also nominated for Video of the Year at the 2004 Juno Awards. In 2005, Feist contributed to the UNICEF benefit song "Do They Know It 's Hallowe'en? '' The track "Mushaboom '' was used in an advert for a Lacoste men 's fragrance, as well as in the film 500 Days of Summer. An album of remixes and collaborations, Open Season, was released on 18 April 2006. Feist also lent her voice to the two tracks "La Même Histoire '' and "We 're All in the Dance '' for the soundtrack to the 2006 film Paris, je t'aime. In early 2006, Feist moved to Paris, where she recorded a followup to Let It Die at LaFrette Studios with Gonzales, Mocky, Jamie Lidell, and Renaud Letang, as well as her touring band Bryden Baird, Jesse Baird, Julian Brown of Apostle of Hustle, and Afie Jurvanen of Paso Mino. Feist 's third solo album, The Reminder, was released on 23 April 2007 in Europe, and on 1 May 2007 in Canada, the USA, and the rest of the world. She toured worldwide to promote the album. The album features "1234, '' a song co-written by New Buffalo 's Sally Seltmann, that became a surprise hit after being featured in a commercial for the iPod nano, hitting No. 8 in the US, a rare feat for indie rock musicians and even more notable since it hit the Top Ten on the strength of downloads alone. She has been lauded in the press and was featured on the cover of the New York Times arts section in June 2007. The Reminder has sold over 1,000,000 copies worldwide and is certified gold in the U.S. The album also won a 2008 Juno Award for "Album of the Year '' on 6 April 2008 in Calgary, Alberta. 
Videos for many of the singles were directed by Patrick Daughters, who previously directed the video for "Mushaboom '' and went on to direct "1234, '' "My Moon, My Man, '' and "I Feel It All. '' "1234 '' and "My Moon, My Man '' were choreographed by the acclaimed choreographer & dancer Noemie Lafrance. The video for Honey, Honey features the work of avant - garde puppet troupe, The Old Trout Puppet Workshop. "I Feel It All '' was featured in the UK teen comedy The Inbetweeners and was used in the film The Accidental Husband. "Honey Honey '' was featured in The L Word (episode 5.06, "Lights! Camera! Action! ''). "I Feel It All '' was featured in the 2008 film The Women. Popular German DJ Boys Noize remixed "My Moon, My Man, '' which appears on his 2007 debut album Oi Oi Oi. The DJ has also been known to close sets with the remix. In January 2009, Bon Iver played a cover of Feist 's "The Park '' from The Reminder on Australian radio 's Triple J. The song "Limit to Your Love '' was featured in season 2, episode 1 of British teen drama Skins, and was used in the film The Accidental Husband. A cover version of the song was released by UK singer - producer James Blake as a single from his 2011 self - titled album. Prior to the airing of an Apple iPod nano commercial featuring this song, The Reminder was selling at approximately 6,000 copies per week, and "1234 '' at 2,000 downloads per week. Following the commercial, the song passed 73,000 total downloads and reached No. 7 on Hot Digital Songs and No. 8 on the Billboard Hot 100; The Reminder jumped from No. 36 to No. 28 on the Billboard 200, with sales of 19,000. Following the television advertisement for the iPod nano in the UK, the single beat its original chart position of 102 to become number 8 in the UK charts. Time magazine named "1234 '' one of The 10 Best Songs of 2007, ranking it at No. 2. 
Writer Josh Tyrangiel called the song a "masterpiece, '' praising Feist for singing it "with a mixture of wisdom and exuberance that 's all her own ''. On 6 April 2008, Feist won a Juno Award for the single as "Single of the Year ''. Feist performed an alternate version of "1234 '' on Sesame Street during its 39th season (2008), teaching children to count to the number four. She said working with the Muppets was a career highlight. In 2009, Feist appeared in a short film directed by Broken Social Scene bandmate Kevin Drew that focused on her song "The Water. '' Feist appears alongside Cillian Murphy and David Fox in the silent role of "Mother. '' This film was streamed from Pitchfork.com for a week starting on 2 March 2009. In an interview with the site, Feist described the experience of being in this movie as "watching a movie while being in a movie. '' In 2007, Feist was placed No. 9 on Spinner.com 's 2007 list of Women Who Rock Right Now, and was named both Spin 's and Blender 's Breakout Artist of the Year. After taking Bob Wiseman on the road as her opening act in Europe, she acted in his video Who Am I and joined him on drums for You Do n't Love Me, both of which are visible on YouTube. Feist was photographed by Annie Leibovitz for the November 2007 issue of Vanity Fair as part of a photo essay on folk music. On 3 November that year, she performed "1234 '' and "I Feel It All '' on Saturday Night Live. Feist was on the cover of the Spring 2008 edition of Naked Eye. On 28 April, Feist was interviewed by Stephen Colbert. At the end of the show she performed "I Feel It All, '' while Colbert donned Feist 's blue, sequined, strapless jumpsuit from the "1234 '' video. Feist joined Colbert again on his first - ever Christmas special, A Colbert Christmas: The Greatest Gift of All!, which first aired on 23 November 2008. She played an angel working for Heaven 's overloaded phone (prayer) service. She also accompanied the Disko Bay Expedition of Cape Farewell. 
On 20 October 2008, she told The Canadian Press that, following the success of her last album, The Reminder, she felt she needed to step away from the pressures of the music industry to consider her next career move and "rest for a minute ''. In March 2009, it was announced that she would make a guest appearance on the track "You and I '' on Wilco 's seventh album. In 2009, Feist was featured in the CTV television film "My Musical Brain '' with neuroscientist and writer Daniel Levitin, based on Levitin 's bestselling book This Is Your Brain on Music. Feist collaborated with Brooklyn band Grizzly Bear on the song "Service Bell '' for the AIDS charity the Red Hot Organization. This song appears on Red Hot 's album Dark Was the Night, and she joined the band in June 2009 during their Toronto show to sing this song and contribute backing vocals to the song "Two Weeks. '' She also collaborated with Ben Gibbard on a cover of Vashti Bunyan 's "Train Song '' for the same Dark Was the Night album. In June 2009, she re-joined Broken Social Scene at a North by Northeast performance celebrating the launch of the band 's biography entitled This Book Is Broken, in which she is prominently featured. This contradicted various rumors saying that it was unlikely Feist would ever play with the band again; this was the first of several appearances with BSS. She performed with Broken Social Scene during their concert of 11 July 2009 at Toronto 's Harbourfront Centre, singing and playing guitar through most of the concert, as well as performing a medley of her solo songs with Kevin Drew and his solo songs. The concert was filmed by director Bruce Macdonald and released as This Movie Is Broken. She sings on Broken Social Scene 's most recent album Forgiveness Rock Record. She performed with the band again in June 2010 on Olympic Island, and at the Sound Academy in Toronto on 9 and 10 December 2010. 
Feist joined Beck, Wilco, Jamie Lidell and James Gadson in a Los Angeles studio covering Skip Spence 's Oar as part of Beck 's Record Club series, with videos appearing on Beck 's website beginning November 2009. She also contributed vocals on Constant Companion, the second album from Canadian songwriter Doug Paisley. Feist sings on the tracks "What I Saw '' and the duet "Do n't Make Me Wait ''. The album was released 12 October 2010. Her song "Limit to Your Love '' was covered by British post-dubstep artist James Blake and later remixed as a dubstep track by Benny Benassi and played to high acclaim at the 2011 Ultra Music Festival. On 7 July 2011, Feist with Radiohead 's Colin Greenwood, Air 's Nicolas Godin, The Hotrats and Soap&Skin performed The Velvet Underground and Nico 's "Femme Fatale '' at an all - star gig "The Velvet Underground Revisited '' which took place in Cité de la Musique, Paris. Her album Metals was released on 30 September 2011. Collaborators include Valgeir Sigurðsson, Chilly Gonzales, and Mocky. The album received widespread acclaim from music critics. It achieved an overall rating of 7.6 / 10 at AnyDecentMusic? based on 32 reviews. In 2012, Feist planned to cover a song from the progressive metal band Mastodon, with Mastodon also covering Feist, and to release both songs on a split 7 '' for Record Store Day. They also released a crossfading interactive video for the song ' A Commotion '. Feist also has a cameo in the 2011 movie The Muppets. In 2012, she wrote the song "Fire in the Water '' exclusively for the film The Twilight Saga: Breaking Dawn - Part 2. The song was played when Edward and Bella are intimate in their cottage, and has been well received by critics. Her song "The Water '' was covered on American jazz violinist Zach Brock 's 2012 album Almost Never Was. 
On 14 January 2013, it was announced Feist would headline, along with labelmate Broken Social Scene, the Arts & Crafts Field Trip Music Festival to commemorate the tenth anniversary of Arts & Crafts. In September 2010, Feist announced through her website the release of a documentary film about the creative process of making The Reminder, called Look at What the Light Did Now. It was directed by Canadian film director Anthony Seck and was shot on Super 8 mm film. The film was released on DVD in December 2010, and a limited series of screenings were conducted, including a Toronto screening at the Royal Ontario Museum, which featured a post-film interview of Feist by George Stroumboulopoulos. The film focuses on the recording of The Reminder as well as the development of the tour through puppetry and projection. The film includes interviews with band member Afie Jurvanen; producer Chilly Gonzales; Broken Social Scene bandmates Kevin Drew and Andrew Whiteman; and video director Patrick Daughters. Bonus materials on the DVD include "This One Jam '', an early performance of Feist with Gonzales at Trash Club; live performances from the Reminder tour; and two short films: "The Water '' starring Feist and "Departures '' starring Kevin Drew and based on an idea by Feist. A CD is also included that contains the documentary soundtrack (tracks from The Reminder re-interpreted and performed by Gonzales), live performances by Feist, as well as two versions of the title track, "Look at What the Light Did Now '', one of which was recorded as a duet with the song 's writer, American musician Kyle Field. In April 2017, Feist released Pleasure, preceding it with the release of the title track "Pleasure '' as a single in March 2017. On April 27, 2017 she introduced the album (a day ahead of its release) at Trinity St. Paul, Toronto, before an adoring crowd. 
She performed the entire content of the album (in reverse order) as well as some of her earlier work, including "I Feel It All ''.
what is the measurement of low or high sound called
Sound - wikipedia In physics, sound is a vibration that typically propagates as an audible wave of pressure, through a transmission medium such as a gas, liquid or solid. In human physiology and psychology, sound is the reception of such waves and their perception by the brain. Humans can hear sound waves with frequencies between about 20 Hz and 20 kHz. Sound above 20 kHz is ultrasound and below 20 Hz is infrasound. Animals have different hearing ranges. Acoustics is the interdisciplinary science that deals with the study of mechanical waves in gases, liquids, and solids including vibration, sound, ultrasound, and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustical engineering may be called an acoustical engineer. An audio engineer, on the other hand, is concerned with the recording, manipulation, mixing, and reproduction of sound. Applications of acoustics are found in almost all aspects of modern society; subdisciplines include aeroacoustics, audio signal processing, architectural acoustics, bioacoustics, electro - acoustics, environmental noise, musical acoustics, noise control, psychoacoustics, speech, ultrasound, underwater acoustics, and vibration. Sound is defined as "(a) Oscillation in pressure, stress, particle displacement, particle velocity, etc., propagated in a medium with internal forces (e.g., elastic or viscous), or the superposition of such propagated oscillation. (b) Auditory sensation evoked by the oscillation described in (a). '' Sound can be viewed as a wave motion in air or other elastic media. In this case, sound is a stimulus. Sound can also be viewed as an excitation of the hearing mechanism that results in the perception of sound. In this case, sound is a sensation. Sound can propagate through a medium such as air, water and solids as longitudinal waves and also as a transverse wave in solids (see Longitudinal and transverse waves, below). 
The sound waves are generated by a sound source, such as the vibrating diaphragm of a stereo speaker. The sound source creates vibrations in the surrounding medium. As the source continues to vibrate the medium, the vibrations propagate away from the source at the speed of sound, thus forming the sound wave. At a fixed distance from the source, the pressure, velocity, and displacement of the medium vary in time. At an instant in time, the pressure, velocity, and displacement vary in space. Note that the particles of the medium do not travel with the sound wave. This is intuitively obvious for a solid, and the same is true for liquids and gases (that is, the vibrations of particles in the gas or liquid transport the vibrations, while the average position of the particles over time does not change). During propagation, waves can be reflected, refracted, or attenuated by the medium. The behavior of sound propagation is generally affected by three things: the relationship between the density and pressure of the medium, which (being affected by temperature) determines the speed of sound; the motion of the medium itself, which can speed up or slow down the wave depending on its direction; and the viscosity of the medium, which determines the rate at which sound is attenuated. When sound is moving through a medium that does not have constant physical properties, it may be refracted (either dispersed or focused). The mechanical vibrations that can be interpreted as sound can travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound can not travel through a vacuum. Sound is transmitted through gases, plasma, and liquids as longitudinal waves, also called compression waves. It requires a medium to propagate. Through solids, however, it can be transmitted as both longitudinal waves and transverse waves. Longitudinal sound waves are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and rarefaction, while transverse waves (in solids) are waves of alternating shear stress at right angle to the direction of propagation. Sound waves may be "viewed '' using parabolic mirrors and objects that produce sound. 
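The behavior described above — pressure varying in time at a fixed position, and in space at a fixed time, while particles stay in place — can be made concrete with an idealized sinusoidal plane wave. A minimal sketch (the function name and the 440 Hz tone are illustrative assumptions, not from the source; 343 m / s is the 20 ° C air speed quoted later in the text):

```python
import math

def pressure_deviation(x, t, amplitude=1.0, frequency=440.0, speed=343.0):
    """Pressure deviation (Pa) of an ideal sinusoidal plane wave at
    position x (m) and time t (s). The disturbance pattern travels at
    `speed`, while particles of the medium only oscillate in place."""
    return amplitude * math.cos(2 * math.pi * frequency * (t - x / speed))

# At fixed x the pressure varies in time; at fixed t it varies in space,
# and the same phase recurs one wavelength (speed / frequency) downstream:
wavelength = 343.0 / 440.0
same_phase = (pressure_deviation(0.0, 0.0), pressure_deviation(wavelength, 0.0))
```

Only the phase pattern propagates; the argument `t - x / speed` is what makes the whole waveform translate at the speed of sound.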
The energy carried by an oscillating sound wave converts back and forth between the potential energy of the extra compression (in case of longitudinal waves) or lateral displacement strain (in case of transverse waves) of the matter, and the kinetic energy of the displacement velocity of particles of the medium. Although there are many complexities relating to the transmission of sounds, at the point of reception (i.e. the ears), sound is readily dividable into two simple elements: pressure and time. These fundamental elements form the basis of all sound waves. They can be used to describe, in absolute terms, every sound we hear. However, in order to understand the sound more fully, a complex wave such as this is usually separated into its component parts, which are a combination of various sound wave frequencies (and noise). Sound waves are often simplified to a description in terms of sinusoidal plane waves, which are characterized by generic properties such as frequency (or its inverse, the period), wavelength (or wave number), amplitude (expressed as sound pressure or intensity), speed, and direction. Sound that is perceptible by humans has frequencies from about 20 Hz to 20,000 Hz. In air at standard temperature and pressure, the corresponding wavelengths of sound waves range from 17 m to 17 mm. Sometimes speed and direction are combined as a velocity vector; wave number and direction are combined as a wave vector. Transverse waves, also known as shear waves, have the additional property of polarization, which is not a characteristic of longitudinal sound waves. The speed of sound depends on the medium the waves pass through, and is a fundamental property of the material. The first significant effort towards measurement of the speed of sound was made by Isaac Newton. He believed the speed of sound in a particular substance was equal to the square root of the pressure acting on it divided by its density: c = √(p / ρ). This was later proven wrong, as the formula derives the speed incorrectly. 
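The quoted wavelength range follows directly from λ = c / f. A quick numerical check (the function name is an illustrative assumption; 343 m / s is the 20 ° C air value given elsewhere in the text):

```python
def wavelength_m(speed_of_sound, frequency_hz):
    """Wavelength (m) = propagation speed (m/s) / frequency (Hz)."""
    return speed_of_sound / frequency_hz

c_air = 343.0                           # m/s in air at 20 degrees C
low_end = wavelength_m(c_air, 20)       # about 17 m at 20 Hz
high_end = wavelength_m(c_air, 20_000)  # about 17 mm at 20 kHz
```

The three-order-of-magnitude spread in audible frequencies maps to the same spread in wavelength, which is why low and high sounds interact so differently with rooms and obstacles.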
The French mathematician Laplace corrected the formula by deducing that the phenomenon of sound travelling is not isothermal, as believed by Newton, but adiabatic. He added another factor to the equation, gamma, and multiplied \sqrt{\gamma} by \sqrt{p / \rho}, thus arriving at the equation c = \sqrt{\gamma \cdot p / \rho}. Since K = \gamma \cdot p, the final equation becomes c = \sqrt{K / \rho}, which is also known as the Newton-Laplace equation. In this equation, K = elastic bulk modulus, c = velocity of sound, and \rho = density. Thus, the speed of sound is proportional to the square root of the ratio of the bulk modulus of the medium to its density. Those physical properties and the speed of sound change with ambient conditions. For example, the speed of sound in gases depends on temperature. In 20 °C (68 °F) air at sea level, the speed of sound is approximately 343 m/s (1,230 km/h; 767 mph) using the formula v = (331 + 0.6 T) m/s. In fresh water, also at 20 °C, the speed of sound is approximately 1,482 m/s (5,335 km/h; 3,315 mph). In steel, the speed of sound is about 5,960 m/s (21,460 km/h; 13,330 mph). The speed of sound is also slightly sensitive to the sound amplitude (a second-order anharmonic effect), which means there are non-linear propagation effects, such as the production of harmonics and mixed tones not present in the original sound (see parametric array). A distinct use of the term sound from its use in physics is that in physiology and psychology, where the term refers to the subject of perception by the brain. The field of psychoacoustics is dedicated to such studies. Historically the word "sound '' referred exclusively to an effect in the mind. 
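Both the temperature formula for air and the Newton-Laplace equation are easy to check numerically; a minimal sketch (the bulk-modulus and density figures for water are typical handbook values, not from the text):

```python
import math

# Speed of sound in air from the text's linear approximation:
# v = (331 + 0.6 * T) m/s, with T in degrees Celsius.
def speed_in_air(temp_c):
    return 331 + 0.6 * temp_c

# Newton-Laplace equation: c = sqrt(K / rho).
def newton_laplace(bulk_modulus_pa, density_kg_m3):
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

print(speed_in_air(20))  # 343.0 m/s, matching the quoted figure
# Fresh water at 20 °C: K ≈ 2.2 GPa, rho ≈ 998 kg/m³ (handbook values)
print(round(newton_laplace(2.2e9, 998)))  # ~1485 m/s, near the quoted 1,482
```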
Webster's 1947 dictionary defined sound as: "that which is heard; the effect which is produced by the vibration of a body affecting the ear. '' This meant (at least in 1947) the correct response to the question: "if a tree falls in the forest with no one to hear it fall, does it make a sound? '' was "no ''. However, owing to contemporary usage, definitions of sound as a physical effect are prevalent in most dictionaries. Consequently, the answer to the same question is now "yes, a tree falling in the forest with no one to hear it fall does make a sound ''. The physical reception of sound in any hearing organism is limited to a range of frequencies. Humans normally hear sound frequencies between approximately 20 Hz and 20,000 Hz (20 kHz); the upper limit decreases with age. Sometimes sound refers only to those vibrations with frequencies that are within the hearing range for humans, or sometimes to the hearing range of a particular animal. Other species have different ranges of hearing. For example, dogs can perceive vibrations higher than 20 kHz. As a signal perceived by one of the major senses, sound is used by many species for detecting danger, navigation, predation, and communication. Earth's atmosphere, water, and virtually any physical phenomenon, such as fire, rain, wind, surf, or earthquake, produces (and is characterized by) its unique sounds. Many species, such as frogs, birds, marine and terrestrial mammals, have also developed special organs to produce sound. In some species, these produce song and speech. Furthermore, humans have developed culture and technology (such as music, telephone and radio) that allows them to generate, record, transmit, and broadcast sound. Noise is a term often used to refer to an unwanted sound. In science and engineering, noise is an undesirable component that obscures a wanted signal. However, in sound perception it can often be used to identify the source of a sound and is an important component of timbre perception (see below). 
Soundscape is the component of the acoustic environment that can be perceived by humans. The acoustic environment is the combination of all sounds (whether audible to humans or not) within a given area as modified by the environment and understood by people, in context of the surrounding environment. There are six experimentally separable ways in which sound waves are analysed. They are: pitch, duration, loudness, timbre, sonic texture and spatial location. Pitch is perceived as how "low '' or "high '' a sound is and represents the cyclic, repetitive nature of the vibrations that make up sound. For simple sounds, pitch relates to the frequency of the slowest vibration in the sound (called the fundamental harmonic). In the case of complex sounds, pitch perception can vary. Sometimes individuals identify different pitches for the same sound, based on their personal experience of particular sound patterns. Selection of a particular pitch is determined by pre-conscious examination of vibrations, including their frequencies and the balance between them. Specific attention is given to recognising potential harmonics. Every sound is placed on a pitch continuum from low to high. For example: white noise (random noise spread evenly across all frequencies) sounds higher in pitch than pink noise (random noise spread evenly across octaves) as white noise has more high frequency content. Figure 1 shows an example of pitch recognition. During the listening process, each sound is analysed for a repeating pattern (See Figure 1: orange arrows) and the results forwarded to the auditory cortex as a single pitch of a certain height (octave) and chroma (note name). Duration is perceived as how "long '' or "short '' a sound is and relates to onset and offset signals created by nerve responses to sounds. The duration of a sound usually lasts from the time the sound is first noticed until the sound is identified as having changed or ceased. 
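The white-versus-pink claim can be made concrete by integrating an idealized power spectral density over one octave; a minimal pure-Python sketch (the function and variable names are illustrative, and the two PSDs are idealized models, not measurements):

```python
# Power per octave for idealized noise spectra: integrate the power
# spectral density S(f) over the octave [lo, 2*lo].
def octave_power(psd, lo):
    steps = 10_000
    df = lo / steps  # simple left-endpoint Riemann sum
    return sum(psd(lo + i * df) * df for i in range(steps))

white = lambda f: 1.0      # flat PSD: equal power per hertz
pink = lambda f: 1.0 / f   # 1/f PSD: equal power per octave

# White noise: the 2-4 kHz octave holds 8x the power of 250-500 Hz,
# which is why white noise sounds higher in pitch than pink noise.
print(octave_power(white, 2000) / octave_power(white, 250))
# Pink noise: every octave holds the same power (about ln 2 here).
print(octave_power(pink, 250), octave_power(pink, 2000))
```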
Sometimes this is not directly related to the physical duration of a sound. For example, in a noisy environment, gapped sounds (sounds that stop and start) can sound as if they are continuous because the offset messages are missed owing to disruptions from noises in the same general bandwidth. This can be of great benefit in understanding distorted messages such as radio signals that suffer from interference, as (owing to this effect) the message is heard as if it were continuous. Figure 2 gives an example of duration identification. When a new sound is noticed (see Figure 2, Green arrows), a sound onset message is sent to the auditory cortex. When the repeating pattern is missed, a sound offset message is sent. Loudness is perceived as how "loud '' or "soft '' a sound is and relates to the totalled number of auditory nerve stimulations over short cyclic time periods, most likely over the duration of theta wave cycles. This means that at short durations, a very short sound can sound softer than a longer sound even though they are presented at the same intensity level. Past around 200 ms this is no longer the case and the duration of the sound no longer affects the apparent loudness of the sound. Figure 3 gives an impression of how loudness information is summed over a period of about 200 ms before being sent to the auditory cortex. Louder signals create a greater ' push ' on the basilar membrane and thus stimulate more nerves, creating a stronger loudness signal. A more complex signal also creates more nerve firings and so sounds louder (for the same wave amplitude) than a simpler sound, such as a sine wave. Timbre is perceived as the quality of different sounds (e.g. the thud of a fallen rock, the whir of a drill, the tone of a musical instrument or the quality of a voice) and represents the pre-conscious allocation of a sonic identity to a sound (e.g. "it's an oboe! ''). 
This identity is based on information gained from frequency transients, noisiness, unsteadiness, perceived pitch and the spread and intensity of overtones in the sound over an extended time frame. The way a sound changes over time (see figure 4) provides most of the information for timbre identification. Even though a small section of the wave form from each instrument looks very similar (see the expanded sections indicated by the orange arrows in figure 4), differences in changes over time between the clarinet and the piano are evident in both loudness and harmonic content. Less noticeable are the different noises heard, such as air hisses for the clarinet and hammer strikes for the piano. Sonic texture relates to the number of sound sources and the interaction between them. The word ' texture ', in this context, relates to the cognitive separation of auditory objects. In music, texture is often referred to as the difference between unison, polyphony and homophony, but it can also relate (for example) to a busy cafe, a sound which might be referred to as ' cacophony '. However, texture refers to more than this. The texture of an orchestral piece is very different to the texture of a brass quintet because of the different numbers of players. The texture of a market place is very different to that of a school hall because of the differences in the various sound sources. Spatial location (see: Sound localization) represents the cognitive placement of a sound in an environmental context, including the placement of a sound on both the horizontal and vertical plane, the distance from the sound source and the characteristics of the sonic environment. In a thick texture, it is possible to identify multiple sound sources using a combination of spatial location and timbre identification. It is the main reason why we can pick out the sound of an oboe in an orchestra and the words of a single person at a cocktail party. 
Sound pressure is the difference, in a given medium, between average local pressure and the pressure in the sound wave. A square of this difference (i.e., a square of the deviation from the equilibrium pressure) is usually averaged over time and/or space, and a square root of this average provides a root mean square (RMS) value. For example, 1 Pa RMS sound pressure (94 dB SPL) in atmospheric air implies that the actual pressure in the sound wave oscillates between (1 atm − \sqrt{2} Pa) and (1 atm + \sqrt{2} Pa), that is between 101323.6 and 101326.4 Pa. As the human ear can detect sounds with a wide range of amplitudes, sound pressure is often measured as a level on a logarithmic decibel scale. The sound pressure level (SPL) or L_p is defined as L_p = 20 \log_{10}(p / p_{ref}) dB, where p is the RMS sound pressure and p_{ref} is the standard reference sound pressure of 20 µPa in air. Since the human ear does not have a flat spectral response, sound pressures are often frequency weighted so that the measured level matches perceived levels more closely. The International Electrotechnical Commission (IEC) has defined several weighting schemes. A-weighting attempts to match the response of the human ear to noise, and A-weighted sound pressure levels are labeled dBA. C-weighting is used to measure peak levels. Ultrasound is sound waves with frequencies higher than the upper audible limit of human hearing. Ultrasound is no different from ' normal ' (audible) sound in its physical properties, except in that humans cannot hear it. Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz. Ultrasound is commonly used for medical diagnostics such as sonograms.
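The 94 dB figure for 1 Pa RMS, and the ±√2 Pa peak excursion, can be checked from the logarithmic definition of SPL; a minimal sketch (assumes the standard 20 µPa reference for air and a pure sinusoid):

```python
import math

P_REF = 20e-6  # standard reference sound pressure in air: 20 µPa

def spl_db(p_rms_pa):
    """Sound pressure level in dB re 20 µPa: 20 * log10(p / p_ref)."""
    return 20 * math.log10(p_rms_pa / P_REF)

print(round(spl_db(1.0), 2))  # the text's 1 Pa RMS example: ~94 dB SPL
# Peak excursion of a 1 Pa RMS sinusoid is ±sqrt(2) Pa around 1 atm:
atm = 101325.0  # Pa
print(atm - math.sqrt(2), atm + math.sqrt(2))  # ≈ 101323.6 and 101326.4 Pa
```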
who tells romeo who juliet is a capulet
Characters in Romeo and Juliet - wikipedia Romeo and Juliet contains a diverse cast of characters. In addition to the play's eponymous protagonists, Romeo Montague and Juliet Capulet, the play contains roles for members of their respective families and households; Prince Escalus, the city's ruler, and his kinsman, Count Paris; and various unaffiliated characters such as Friar Laurence and the Chorus. In addition, the play contains two ghost characters (Petruchio and Valentine) and an unseen character (Rosaline). Prince Escalus, the Prince of Verona, is the desperate resolver of the feud between the two families. He is based on the actual Scaligeri family which ruled Verona, possibly on Bartolomeo I. Escalus is the voice of authority in Verona. He appears only three times within the text and only to administer justice following major events in the feud between the Capulet and Montague families. He first punishes Capulet and Montague for the quarrel between Tybalt, Benvolio, and a handful of servants. He returns too late to stop the fatal brawls between Tybalt and Mercutio and, subsequently, Tybalt and Romeo. Escalus is prepared to execute Romeo for his offence -- Romeo's killing of Tybalt -- but lightens the sentence to lifetime banishment from Verona when Benvolio insists that Tybalt started the quarrel by murdering Mercutio, a kinsman to the prince. He berates Lord Montague for engaging in the feud, the root cause of the quarrel that led to Tybalt killing Mercutio. Prince Escalus returns in the final scene -- V. iii -- following the double suicide of Romeo and Juliet, and at last declares the Lords Montague and Capulet guilty of Romeo and Juliet's deaths, angrily telling them that their senseless feud resulted in the deaths not only of their own loved ones (Lady Montague, Romeo, Juliet, and Tybalt), but also of Escalus' own kinsmen (Mercutio and Paris). 
He curses the feud that has killed Romeo and Juliet, for whom he grieves, just before the Lords finally make peace with each other. In the end, Prince Escalus is relieved that the feud has at last ended, though at a heavy price. Count Paris is a kinsman of Prince Escalus and seeks to marry Juliet. He is described as handsome, somewhat self-absorbed, and very wealthy. Paris makes his first appearance in Act I, Scene II, where he expresses his wish to make Juliet his wife and the mother of his children. Capulet demurs, citing his daughter's young age as a reason and telling him to wait until she is more mature. (Paris disagrees, however.) Nevertheless, Capulet invites Paris to attend a family ball being held that evening and grants him permission to woo and attract Juliet. Later in the play, however, Juliet refuses to become Paris' "joyful bride '' after her cousin Tybalt dies by her new husband Romeo's hand, proclaiming that she now wants nothing to do with Paris. Her parents threaten to disown her if she will not agree to the marriage. Then, while at Laurence's cell at the church, Paris tries to woo her by repeatedly saying that she is his wife and that they are to be married on Thursday. He kisses her and then leaves the cell, prompting Juliet to angrily threaten to kill herself with a knife. His final appearance in the play is in the cemetery where Juliet is "laid to rest '' in the Capulet family tomb. Believing her to be dead, Count Paris has come to mourn her death in solitude and privacy and sends his manservant away. He professes his love to Juliet, saying he will nightly weep for her (Act V, Scene III). Shortly thereafter, Romeo arrives. Paris sees him and thinks he is trying to vandalise the tomb, so he tries to arrest him. They fight, and Romeo kills Paris. Romeo grants Paris' dying wish to be placed next to Juliet in the tomb. 
Mercutio is the cousin of Prince Escalus and Count Paris, and is a close friend of Romeo and his cousin Benvolio. He supports and fights on the Montague side of the feud, and, just like a Montague, hates the Capulet family. The invitation to the Capulets' party reveals that he has a brother named Valentine. Mercutio is apt to make long, drawn-out speeches (the most famous of which is the Queen Mab speech), and is generally thought to be reckless, a jester, and a free spirit. Due to his reckless and flamboyant personality, Mercutio is one of Shakespeare's most popular characters. Mercutio is the instigator of many fights with his rather mean-spirited humor, and often insults Tybalt, a renowned swordsman. It is Tybalt's temper that leads to Mercutio's death, and to Romeo's banishment and the tragedy that follows. After Romeo receives a death threat from Tybalt, Mercutio expects Romeo to engage Tybalt in a duel. However, Romeo refuses to fight Tybalt, as Tybalt is Juliet's cousin and therefore his kinsman. Not knowing this, Mercutio is incensed, and decides to fight Tybalt himself. Romeo, not wanting his best friend or his relative to get hurt, intervenes, causing Mercutio to be killed by Tybalt stabbing under Romeo's arm. Before he dies, Mercutio curses both families, crying "a plague o' both your houses! '' He makes one final pun before he dies: "Ask for me tomorrow, and you shall find me a grave man ''. In revenge for the murder of his best friend, Romeo slays Tybalt, thus leading to Romeo's banishment from Verona and the increasingly tragic turn of events that follows. A page accompanies Paris to the Capulets' crypt when he goes to mourn Juliet. He stands guard as Paris enters, ordered to "whistle then to me, / As signal that thou hear'st something approach ''. When Romeo and Paris break into a brawl, the page runs away to call the Watch. He returns with the Watch too late to stop the fray and later testifies to the Prince of Paris' intentions. 
The Capulet family (in Italian, "Capuleti '') in the play was named after an actual political faction of the 13th century. Notably, the Capulet family is often portrayed as the ' bad ' side, as much of the conflict is caused by them. They are also more developed, since more attention is given to their family life. Lord Capulet is the patriarch of the Capulet family, the father of Juliet, and uncle of Tybalt. He is very wealthy. He is sometimes commanding but also convivial, as at the ball: when Tybalt tries to duel with Romeo, Capulet tries to calm him and then threatens to throw him out of the family if he does not control his temper; he does the same to his daughter later in the play. Capulet believes he knows what is best for Juliet. He says his consent to the marriage depends upon what she wants and tells Count Paris that if he wants to marry Juliet he should wait a while and then ask her. Later, however, when Juliet is grieving over Romeo's departure, Capulet thinks her sorrow is due to Tybalt's death, and in a misguided attempt to cheer her up, he wants to surprise her by arranging a marriage between her and Count Paris. The catch is that she has to be "ruled '' by her father and accept the proposal. When she refuses to become Paris' "joyful bride '', saying that she can "never be proud of what she hates '', Capulet becomes furious; threatens to make her a street urchin; calls her a "hilding '', "unworthy '', "young baggage '', a "disobedient wretch '', a "green-sickness carrion '', and "tallow-face ''; and says God's giving Juliet to them was a "curse '' and he now realises he and his wife had one child too many when Juliet was born (a detail found in the earlier poem The Tragical History of Romeus and Juliet). In addition to threatening to turn her out, he threatens to sentence her to rot away in prison if she does not obey her parents' orders. 
He then storms away, and his wife also rejects Juliet before following him. He fixes the day of the marriage for Thursday and then suddenly advances it to Wednesday out of anger and impulse. His actions indicate that his daughter's wants were irrelevant all the way up to the point when he sees her unconscious on her bed (presumably dead) and later, when she is truly dead during the play's final scene. It is he who asks Lord Montague for his hand to end the feud between their families. Capulet's wife is the matriarch of the house of Capulet and Juliet's mother. She plays a larger role than Montague's wife, appearing in several scenes. In Act 1, Scene 3, she speaks to Juliet about the prospect of marriage to Paris, comparing Paris to a book and Juliet to its cover. In Scene 4, she is pleased about Count Paris' "interest '' in her daughter. When Tybalt is killed in Act 3, she expresses extreme grief and a strong desire for revenge on Romeo by wishing death upon him. In Act 3, Scene 5, she becomes very angry with Juliet for refusing to marry Paris and coldly rejects her, saying: "Talk not to me, for I'll not speak a word; do as thou wilt, for I am done with thee ''. By the final act, she is nearly overcome by the tragic events of the play; it is here that the grief-stricken mother emerges. We know Juliet's mother bore her first child by the time she was 14, Juliet's age, and that her husband is many years older than she. Calling her "Lady Capulet '' is a later addition; it is an echo of Juliet's form of address in 3.5.65: "my lady mother ''. In the first texts the stage directions and speech headings can be "mother '', "wife '', or even "old lady '', but nowhere "Lady Capulet ''. Juliet Capulet, the female protagonist, is the only daughter of Capulet, the patriarch of the Capulet family. As a child she was cared for by a Nurse, who is now her confidante. 
Juliet dies at the end of the play, and the two lovers are reunited in death, lying together on the same bier. Both families realise what they have done: by trying to separate the star-crossed lovers they drove them to their deaths, with the effect that the Capulets and Montagues are reconciled and their fighting ends. Tybalt is the son of Lady Capulet's brother and Juliet's hot-headed first cousin. As a skilled swordsman, he serves as the story's principal antagonist. Tybalt is angered by the insult of Romeo and Benvolio's uninvited presence at the ball in the Capulets' home. Tybalt shares the same name as the character Tibert / Tybalt the "Prince of Cats '' in Reynard the Fox, a point of both mockery and compliment to him in the play. While Mercutio repeatedly calls Tybalt "Prince of Cats '' (referring to Tybalt's speed and agility with the sword), Mercutio is also insulting Tybalt -- the phrase refers not only to Reynard but to the Italian word cazzo (pr. CAT-so) meaning "penis. '' Tybalt is first seen coming to the aid of his servants, who are being attacked by the Montagues' servants. He is also present at Capulet's feast in act one, scene five and is the first to recognise Romeo. His last appearance is in act 3, scene 1, wherein Mercutio insults Tybalt and ends up fighting with him. Tybalt kills Mercutio and, in retaliation, Romeo rages and kills Tybalt, resulting in Romeo's banishment. The Nurse is a major character in the play, and like the Friar she is a neutral character. There has been speculation about her name, as Capulet refers to "Angelica '', but the line can be addressed to either the Nurse or Lady Capulet. She is the personal servant (and former wet nurse) of Juliet. As the primary person who raised Juliet, she is Juliet's confidante and effectively more of a mother to the girl than Lady Capulet. She was also the one who breastfed Juliet as a child. Peter is the personal servant of the Nurse. He appears to be a loyal servant, always quick to obey her. 
He is chastised for not fighting Mercutio for the Nurse's honour, but insists that he "saw no man use you a pleasure; if I had, / my weapon should quickly have been out ''. He appears again in act four, scene five in a brief comic relief scene with a number of musicians. Gregory and Sampson are the Capulet servants. Gregory is originally hesitant to start a fight. Sampson, however, bites his thumb at Abram, "Which is a disgrace to them, if they bear it ''. The Montagues then retaliate in earnest. Benvolio arrives to break up the fight but ends up fighting with Tybalt. Both Gregory and Sampson appear to be friends of their master Tybalt. In the opening scene, the two engage in a dialogue full of puns on "coal '' and "eye '', each intending to outdo the other and get each other ready to fight Montagues. The rhetorical form is called stichomythia, wherein characters participate in short, quick exchanges of one-upmanship. Their discussion and brawl in this scene set the stage for the rivalry and hatred which fills the rest of the play. Anthony, Potpan, and two other servants to the Capulet family play out a short comic scene in act one, scene five, arguing over the preparations for Capulet's feast. Capulet's servants are referenced again in act four, scene one; Capulet orders them to begin preparations for another party: the wedding of Juliet and Paris. A servant to Capulet is sent to deliver party invitations to a number of nobles and friends of Capulet. While walking, he comes upon Romeo and Benvolio and asks them to read the list for him, as he cannot read. As a thank-you, he invites the boys to "come and crush a cup of wine, '' not realising that they are Montagues. This character may have been intended to be the same as Peter, and is usually identified in scripts either as Peter or as a Clown. Old Capulet is Capulet's cousin. He appears as an elderly man sitting with Capulet during the feast. 
The Montague family (in Italian, "Montecchi '') was an actual political faction of the 13th century. The Montagues are generally portrayed as the ' better ' of the two families, as they are not seen to be provoking fights and are often found trying to avoid fighting whenever they can, occasionally even trying to persuade the fighters to return to peace. Lord Montague is the father of Romeo. Presumably, he is also wealthy, and is locked in a long-running feud with Capulet. Montague clearly loves his son deeply and, at the beginning of the play, worries for him as he recounts to Benvolio his attempts to find out the source of Romeo's depression. He wishes Benvolio better luck in discovering it. After Romeo kills Tybalt, Montague pleads with the Prince to spare him from execution, as Romeo did only what the law would have done, since Tybalt killed Mercutio. He appears again at the end of the play to mourn Romeo, having already lost his wife to grief. Montague's wife is the matriarch of the house of Montague, the mother of Romeo and aunt of Benvolio. She appears twice within the play: in act one, scene one she first restrains Montague from entering the quarrel himself, and later speaks with Benvolio about the same quarrel. She returns with her husband and the Prince in act three, scene one to see what the trouble is, and is there informed of Romeo's banishment. She dies of grief offstage soon after (mentioned in act five). She is very protective of her son Romeo and is very happy when Benvolio tells her that Romeo was not involved in the brawl between the Capulets and Montagues. However, Romeo doesn't feel very close to her, as he is unable to seek advice from her. As with Capulet's wife, calling her "Lady Montague '' is a later invention not supported by the earliest texts. In the beginning of the play, Romeo pines for an unrequited love, Rosaline. 
To cheer him up, his cousin and friend Benvolio and Mercutio take him to the Capulets' celebration in disguise, where he meets and falls in love with the Capulets' only daughter, Juliet. Later that night, he and Juliet meet secretly and pledge to marry, despite their families' long-standing feud. They marry the following day, but their union is soon thrown into chaos by their families; Juliet's cousin Tybalt duels and kills Romeo's friend Mercutio, throwing Romeo into such a rage that he kills Tybalt, and the Prince of Verona subsequently banishes him. Meanwhile, Juliet's father plans to marry her off to Paris, a local aristocrat, within the next few days, threatening to turn her out on the streets if she doesn't follow through. Desperate, Juliet begs Romeo's confidant, Friar Laurence, to help her escape the forced marriage. Laurence does so by giving her a potion that puts her in a deathlike coma. The plan works, but too soon for Romeo to learn of it; he genuinely believes Juliet to be dead and so resolves to commit suicide by drinking a bottle of poison (illegally bought from the Apothecary upon hearing the news of Juliet's "death ''). Romeo's final words are "Thus with a kiss I die ''. He kills himself at Juliet's grave, moments before she awakes; she kills herself in turn shortly thereafter. Benvolio is Montague's nephew and Romeo's cousin. Benvolio and Romeo are both friends of Mercutio, a kinsman to Prince Escalus. Benvolio seems to have little sympathy with the feud, trying unsuccessfully to back down from a fight with Tybalt, and from the duels that end in Mercutio's and Tybalt's deaths. Benvolio spends most of Act I attempting to distract his cousin from his infatuation with Rosaline, but following the first appearance of Mercutio in I. iv, he and Mercutio become more closely aligned until III. i. 
In that scene, he drags the fatally wounded Mercutio offstage before returning to inform Romeo of Mercutio's death, and the Prince of the course of Mercutio's and Tybalt's deaths. Benvolio then disappears from the play (though, as a Montague, he may implicitly be included in the stage direction in the final scene "Enter Lord Montague and others '', and he is sometimes doubled with Balthasar). Though he ultimately disappears from the play without much notice, he is a crucial character if only in that he is the only child of the new generation from either family to survive the play (as Romeo, Juliet, Paris, Mercutio, and Tybalt are dead). Balthasar is Romeo's servant and trusted friend. They have a brotherly relationship, which is identified when Balthasar tells Romeo that Juliet is "dead. '' While he is not directly referenced in the first scene of the play, the directions call for two Montague servants to quarrel with Sampson and Gregory. He then comes back in Act V, Scene 1, telling Romeo about Juliet's death. Later, Friar Laurence runs past Balthasar and asks him where Romeo is. Balthasar tells him that he is inside the tomb. Then the Prince calls him in and asks him why he was there. He gives the Prince the letter that explains why Juliet killed herself. Abram is a servant of the Montague household. He appears in Act 1, Scene 1, where he and another servant (presumably Balthasar) are provoked into a fight with Gregory and Sampson when the latter bites his thumb at them. Friar Laurence plays the part of an advisor and mentor to Romeo, along with aiding in major plot developments. Alone, the Friar foreshadows later events with his soliloquy about plants and their similarities to humans. When Romeo requests that the Friar marry him to Juliet, he is shocked, because only days before, Romeo had been infatuated with Rosaline, a woman who did not return his love. 
Nevertheless, Friar Laurence decides to marry Romeo and Juliet in an attempt to end the civil feud between the Capulets and the Montagues. When Romeo is banished and flees to Mantua for murdering Tybalt (who had previously murdered Mercutio), the Friar tries to help the two lovers reunite using a death-emulating potion to fake Juliet's death. The Friar's letter to Romeo does not reach him because the people of Mantua suspect the messenger came from a house where the plague reigns, and the Friar is unable to arrive at the Capulets' monument in time. Romeo kills Count Paris, whom he finds weeping near Juliet's corpse, then commits suicide, by drinking poison that he bought from an impoverished apothecary, over what he thinks is Juliet's dead body. Friar Laurence arrives just as Juliet awakes from her chemically induced slumber. He urges Juliet not to be rash, and to join a society of nuns, but he hears a noise from outside and then flees from the tomb. Juliet then kills herself with Romeo's dagger, completing the tragedy. The Friar is forced to return to the tomb, where he recounts the entire story to Prince Escalus, and all the Montagues and Capulets. As he finishes, the prince proclaims, "We have still known thee for a holy man ''. Friar John calls at the door of Friar Laurence's cell, "Holy Franciscan friar! brother, ho! '' (5.2.1). Friar Laurence comes out and immediately asks about Romeo: "Welcome from Mantua! What says Romeo? / Or, if his mind be writ, give me his letter '' (5.2.3-4). Friar John explains that he sought out another friar for company and found him in a house where he was visiting the sick, whereupon the health authorities, fearing there was pestilence in the house, confined both friars in the house so they wouldn't infect others. The authorities wouldn't even allow Friar John to use a messenger to send the letter back to Friar Laurence. 
A Chorus gives the opening prologue and one other speech, both in the form of a Shakespearean sonnet. The Chorus is an omniscient character. It appears at the top of the play to fill the audience in on the ancient quarrel between the "Two households, both alike in dignity / In fair Verona, where we lay our scene". It returns as a prologue to Act Two to foreshadow the tragic turn of events about to befall the new romance between the title characters. The Chorus appears only in the Quarto versions, not in the First Folio. The Apothecary is a pharmacist in Mantua who reluctantly sells Romeo the poison, only because he is poor and in desperate need of money. The Watch of Verona takes the form of three watchmen. The First Watch appears to be the constable, who orders the Second and Third to "search about the churchyard!" Unusually for a Shakespearean watch group, they appear to be a relatively intelligent unit, managing to capture and detain Balthasar and Friar Laurence in the churchyard. They then testify to the Prince about their role in the murder and suicide scene. Three musicians for Juliet's wedding appear in Act Four, Scene Five in a brief comic scene, refusing to play a song called "Heart's ease" for Peter. They are referred to by the names Simon Catling, Hugh Rebeck, and James Soundpost. A number of citizens emerge during Act I, Scene I to break apart the fight between some Capulet and Montague servants. They appear again in Act III, Scene I to discover the slain body of Tybalt, at which point they place Benvolio under citizen's arrest until the Prince's swift entrance. Petruchio is a guest at the Capulet feast. He is notable only in that he is the only ghost character confirmed by Shakespeare to be present. When the party ends and Juliet asks about Romeo's identity, the Nurse attempts to avoid the subject by answering that Juliet is pointing at "the young Petruchio".
Later, he is with Tybalt when Tybalt fatally wounds Mercutio, and a few scripts identify a Capulet with one line by that name. Petruchio is also the name of a major character in Shakespeare's earlier work, The Taming of the Shrew. Rosaline is an unseen character and a niece of Capulet. Although silent, her role is important: her admirer, Romeo, first spots her cousin Juliet while trying to catch a glimpse of Rosaline at a Capulet gathering. Before Juliet, Romeo was deeply infatuated with another woman who did not return his feelings. Scholars generally compare Romeo's short-lived love of Rosaline with his later love of Juliet. Rosaline means "fair rose". The poetry he writes for Rosaline is much weaker than that for Juliet. Scholars believe his early experience with Rosaline prepares him for his relationship with Juliet. Later performances of Romeo and Juliet have painted different pictures of Romeo and Rosaline's relationship, with filmmakers experimenting by making Rosaline a more visible character. Valentine is Mercutio's brother, briefly mentioned as a guest at the Capulet feast where Romeo and Juliet meet. He is a ghost character with no speaking part, and his only possible appearance is at the Capulet feast among the guests. "Valentine" has been taken to mean "lover" or "brother", and is associated with these attributes in several stories and histories. Scholars have pointed out that Valentine is more strongly connected to a major character than other ghosts, as he is given a direct connection to his brother. Although he has a very small role in Shakespeare's play, earlier versions of the story gave him no role or mention at all; in fact, they gave even Mercutio a very minor role. Shakespeare was the first English dramatist to use the name "Valentine" on stage, in his earlier plays Titus Andronicus and Two Gentlemen of Verona. In Titus, Valentine plays a minor role, but in Two Gentlemen, he is one of the title characters.
Incidentally, the Valentine of Two Gentlemen borrows heavily from Arthur Brooke's Romeus in The Tragical History of Romeus and Juliet, which Shakespeare later used to create Romeo and Juliet. Brooke's version made Mercutio a rival for Juliet's love. Shakespeare's addition of Valentine as Mercutio's brother defuses this rivalry: because the first time we hear of Mercutio he is associated with Valentine rather than Juliet, he is changed from a rival into a friend and brotherly figure for Romeo.
who stars in the new death wish movie
Death Wish (2018 film) - wikipedia Death Wish is a 2018 American vigilante action thriller film directed by Eli Roth and written by Joe Carnahan. It is the sixth installment in the Death Wish series and a remake of the 1974 film of the same name starring Charles Bronson, based on Brian Garfield's 1972 novel. The film stars Bruce Willis as Paul Kersey, a Chicago doctor who sets out to get revenge on the men who attacked his family. Vincent D'Onofrio, Elisabeth Shue, Dean Norris, and Kimberly Elise also star. The film was released in the United States by Metro-Goldwyn-Mayer and in international markets by Annapurna Pictures on March 2, 2018. It received generally negative reviews, with criticism aimed at it for not adding anything new to previous installments. Paul Kersey, who lives with his wife Lucy and college-bound daughter Jordan, works as a trauma surgeon at a Chicago hospital. After having lunch at a restaurant, Paul gives his car keys to a valet, Miguel, who secretly takes a picture of their home address from the car's navigation software after overhearing that they will all be out one night. Later, while Paul is working late at the hospital, three masked men invade the Kerseys' home. Jordan and Lucy return home and are both shot; Paul learns at the hospital that Lucy died and Jordan has fallen into a coma. Police Detective Kevin Raines is tasked with the case, but Paul soon becomes frustrated with the lack of progress in the investigation. One night, Paul comes across two thugs harassing a woman, intervenes, and gets beaten. Paul then visits a gun store, but when he sees he is being videotaped, he decides to leave without purchasing anything. When an injured gang member is brought into the hospital and his Glock 17 falls from the gurney, Paul steals the weapon and then uses online videos to learn how to maintain and use it. Eventually, Paul uses it to stop a carjacking, a video of which goes viral; Paul is dubbed Chicago's "Grim Reaper".
When an unconscious Miguel is hospitalized, Paul recognizes his stolen watch around the valet's wrist. After Miguel dies, Paul takes Miguel's phone and discovers a photograph and a text on it suggesting that a liquor store is where the thieves bring and resell their stolen goods. Paul visits the liquor store, but the owner, Ponytail, realizes who he is, secretly messages an accomplice for backup, and reaches for a gun. Paul stabs him with a dart and demands the goods stolen from his house. Ponytail's accomplice, Fish, arrives and accidentally kills Ponytail. After shooting Fish, Paul learns from him that Lucy's killer is an auto body shop worker named Joe. Fish kicks Paul and gains the upper hand, but is stunned by a falling bowling ball, allowing Paul to kill him. At the auto body shop, Paul ambushes Joe while he is working under a car. He tortures Joe for information by cutting his sciatic nerve with a scalpel and pouring brake fluid into the wound. Joe divulges that their leader, Knox, shot Lucy. Paul then removes the jack holding the car up, causing it to crush Joe's head. Paul receives a phone call from Knox, telling him to meet him in a bathroom at a nightclub. When he arrives, Paul calls the number back and hears it ringing in one of the stalls. He shoots through the stall door, only to find that it was a ruse. Knox opens fire from the other end of the bathroom and the two engage in a shootout, injuring each other before both escape. After going home and encountering his brother, Frank, Paul is informed by the hospital that Jordan has regained consciousness. A week later, as Paul brings Jordan home from the hospital, Knox talks to both of them in the hospital elevator: Jordan does not recognize Knox, but Knox tells Paul he'll see him around. Paul returns to the gun store to legally purchase weapons to defend his remaining family. Sometime later, Knox and two other thugs prepare to invade Paul's home at night.
Paul glimpses a man running across his lawn and hides Jordan in a closet under the stairs, telling her to call the police. After killing the two thugs in the upstairs bedroom, Paul heads into the basement, where he suspects Knox is located. There, Knox emerges from the darkness, shoots Paul in the shoulder, and threatens to burn Jordan alive. When Knox is distracted by Jordan yelling out her father's name, Paul retrieves a fully automatic M4 carbine from a hidden compartment under a coffee table and shoots Knox dead. When the police arrive, Detective Raines suspects that Paul is the "Grim Reaper", but pretends to buy his story because he is satisfied that justice was served and because Paul subtly informs him that his "Grim Reaper" days are over. Months later, Paul drops Jordan off at NYU. As Paul is leaving, he spots a man who steals a bag from a bellhop, calls out to him, and points at him with a finger gun. Development of the film began in 2006, when Sylvester Stallone announced that he would be directing and starring in a remake of Death Wish (1974). Stallone told Ain't It Cool News, "Instead of the Charles Bronson character being an architect, my version would have him as a very good cop who had incredible success without ever using his gun. So when the attack on his family happens, he's really thrown into a moral dilemma in proceeding to carry out his revenge." He later told the publication that he was no longer involved. In a 2009 interview with MTV, though, Stallone stated that he was again considering the project. In late January 2012, The Hollywood Reporter confirmed that a remake would be written and directed by Joe Carnahan. The film was originally set to star Liam Neeson and Frank Grillo. Carnahan, however, left the project in February 2013 due to creative differences, but received sole writing credit for the completed film.
He was replaced as director by Gerardo Naranjo, who was interested in casting Benicio del Toro in the lead role; this version also never came to fruition. In March 2016, Paramount Pictures and Metro-Goldwyn-Mayer announced that Aharon Keshales and Navot Papushado would direct, with Bruce Willis starring. Willis was chosen from a shortlist that included Russell Crowe, Matt Damon, Will Smith, and Brad Pitt. In May, Keshales and Papushado quit the project after MGM declined to allow them to rewrite Joe Carnahan's original script, which had been approved by Willis. In June, Eli Roth signed on to direct, with the script being rewritten by Scott Alexander and Larry Karaszewski. On August 25, 2016, Vincent D'Onofrio was cast alongside Bruce Willis to play Paul Kersey's brother, and Breaking Bad actor Dean Norris also joined the film. On October 7, 2016, Kimberly Elise and Camila Morrone were cast to play Detective Jackson and Jordan Kersey. On October 17, 2016, Ronnie Gene Blevins was cast in the film. Principal photography began in late September 2016 in Chicago, Illinois, and moved in October 2016 to Montreal, Quebec, Canada. In June 2017, it was announced that Annapurna Pictures would distribute the film on behalf of Metro-Goldwyn-Mayer and release it on November 22, 2017. However, in October 2017 it was announced that the release was being delayed until March 2, 2018, and that Metro-Goldwyn-Mayer would handle the film's distribution in the United States, while Annapurna Pictures would handle its international distribution. It was speculated that the delay was due in part to the mass shooting in Las Vegas several days prior. Death Wish was released theatrically in the United Kingdom on April 6 by Vertigo Releasing. It became available on DVD and Blu-ray on June 5, 2018.
As of July 5, 2018, Death Wish has grossed $34 million in the United States and Canada, and $14.5 million in other territories, for a worldwide total of $48.5 million, against a production budget of $30 million. In the United States and Canada, Death Wish was released alongside Red Sparrow and was projected to gross $10-20 million from 2,847 theaters in its opening weekend. It made $4.2 million on its first day (including $650,000 from Thursday night previews) and $13 million in its opening weekend, finishing third behind Black Panther ($66.7 million in its third week) and Red Sparrow ($17 million). 55% of its audience was male, while 89% was over the age of 25. It dropped 49% to $6.6 million in its second weekend, finishing 7th. On review aggregator website Rotten Tomatoes, the film holds an approval rating of 18% based on 129 reviews and an average rating of 3.9/10. The website's critical consensus reads, "Death Wish is little more than a rote retelling that lacks the grit and conviction of the original -- and also suffers from spectacularly bad timing." On Metacritic, the film has a weighted average score of 32 out of 100, based on 30 critics, indicating "generally unfavorable reviews". Audiences polled by CinemaScore gave the film an average grade of "B+" on an A+ to F scale. The Chicago Sun-Times's Richard Roeper gave the film 2 out of 4 stars, writing, "Even with the social commentary, Death Wish isn't trying to be some intense, gritty, ripped-from-the-headlines docudrama... A number of gruesome scenes are staged like something out of one of those Final Destination movies, with a bowling ball, a dart, a wrench and other conveniently handy items used as weapons of singular destruction. It's essentially revenge porn." Michael Phillips of the Chicago Tribune gave the film 1 out of 4 stars and said, "For a while, director Roth plays this stuff relatively straight, and Willis periodically reminds us he can act (the grieving Kersey cries a fair bit here). The script contains a reference to AR-15 rifles; by the end, Willis goes full Willis when his adversaries return to the sanctity of the family home." Many critics noted the timing of the film's release, coming less than three weeks after the Stoneman Douglas High School shooting in Parkland, Florida. Jeannette Catsoulis of The New York Times called the film "imbecilic" and criticized it for its jokey tone and "morally unconflicted" approach to its subject matter. Similarly, The Guardian's Amy Nicholson criticized the film for "[flatlining] the politics and [saturating] the pathos", and for insulting both sides of the gun control argument. The Hollywood Reporter's John DeFore noted that the film does not attempt to "use genre metaphors to address real national debates", making the original film "look philosophical by comparison", and he also noted the improbable and contrived nature of Kersey's mission. Writing for the Los Angeles Times, Justin Chang called the film "a slick, straightforward revenge thriller as well as a sham provocation, pandering shamelessly to the viewer's bloodlust while trying to pass as self-aware satire". Chang compared the film unfavorably to the 2007 film Death Sentence, citing the lack of consequences that Kersey faces. Some reviewers stood in defense of the film. Peter Howell of the Toronto Star stated that "Roth and Carnahan do an OK job updating Death Wish" and that the film accurately depicts the "casual way that Americans acquire and use guns". He felt, though, that Liam Neeson would have been a better choice for the lead role.
Matthew Rozsa of Salon agreed that the film's release was indeed timed poorly, but argued that "mass shootings have been ubiquitous for so long that I doubt there ever would have been an appropriate release date for a vigilante fantasy... It exists everywhere in our culture, from movies and video games to the right-wing talking points that regularly thwart gun control legislation." Rozsa considers Death Wish his guilty pleasure, recommending it as a "success" as well as "a competent popcorn muncher that moves at a brisk pace, is about as engaging as your average Law and Order episode and contains an appropriately glowering (if somewhat bored looking) Bruce Willis." The San Francisco Chronicle's Mick LaSalle summed up his review by calling it "way better than all the Death Wish sequels" and "easily the second best Death Wish movie ever made, and not a distant second."
germany was especially unhappy with article 231 of the treaty of versailles because it
Article 231 of the Treaty of Versailles - wikipedia Article 231, often known as the War Guilt Clause, was the opening article of the reparations section of the Treaty of Versailles, which ended the First World War between the German Empire and the Allied and Associated Powers. The article did not use the word "guilt", but it served as a legal basis to compel Germany to pay reparations for the war. Article 231 was one of the most controversial points of the treaty. It specified that Germany accepted "the responsibility of Germany and her allies for causing all the loss and damage" of the war. Germans viewed this clause as a national humiliation, forcing Germany to accept full responsibility for causing the war. German politicians were vocal in their opposition to the article in an attempt to generate international sympathy, while German historians worked to undermine the article with the objective of subverting the entire treaty. The Allied leaders were surprised at the German reaction; they saw the article only as a necessary legal basis for extracting compensation from Germany. The article, with the signatory's name changed, was also included in the treaties signed by Germany's allies, who did not view the clause with the same disdain as the Germans did. American diplomat John Foster Dulles -- one of the two authors of the article -- later regretted the wording used, believing it further aggravated the German people. The historical consensus is that responsibility or guilt for the war was not attached to the article. Rather, the clause was a prerequisite for laying out a legal basis for the reparation payments that were to be made. Historians have also highlighted the unintended damage created by the clause, which caused anger and resentment amongst the German population. On 28 June 1914 the Bosnian Serb Gavrilo Princip assassinated the heir to the throne of Austria-Hungary, Archduke Franz Ferdinand, in the name of Serbian nationalism. This caused a diplomatic crisis, resulting in Austria-Hungary declaring war on Serbia and sparking the First World War.
For a variety of reasons, within weeks the major powers of Europe -- divided into two alliances known as the Central Powers and the Triple Entente -- went to war. As the conflict progressed, additional countries from around the globe were drawn into the conflict on both sides. Fighting raged across Europe, the Middle East, Africa and Asia for the next four years. On 8 January 1918, United States President Woodrow Wilson issued a statement that became known as the Fourteen Points. In part, this speech called for the Central Powers to withdraw from the territories they had occupied, for the creation of a Polish state, for the redrawing of Europe's borders along ethnic ("national") lines, and for the formation of a League of Nations. During the autumn of 1918, the Central Powers began to collapse. The German military suffered a decisive defeat on the Western Front, while on the home front the Imperial German Navy mutinied, prompting uprisings in Germany that became known as the German Revolution. The German government attempted to obtain a peace settlement based on the Fourteen Points, and maintained that it was on this basis that Germany surrendered. Following negotiations, the Allied Powers and Germany signed an armistice, which came into effect on 11 November while German forces were still positioned in France and Belgium. On 18 January 1919 the Paris Peace Conference began. The conference aimed to establish peace between the war's belligerents and to shape the post-war world. The Treaty of Versailles resulting from the conference dealt solely with Germany. This treaty, along with the others signed during the conference, took its name from the suburb of Paris where the signing took place.
While 70 delegates from 26 nations participated in the Paris negotiations, representatives from Germany were barred from attending, nominally over fears that a German delegation would attempt to play one country off against another and unfairly influence the proceedings. The Americans, British, and French all differed on the issue of a reparations settlement. The Western Front had been fought in France, and that countryside had been heavily scarred in the fighting. France's most industrialized region, in the north-east, had been laid to waste during the German retreat; hundreds of mines and factories were destroyed, along with railroads, bridges, and villages. Georges Clemenceau, the Prime Minister of France, thought it appropriate that any just peace required Germany to pay reparations for the damage it had caused. He also saw reparations as a means to ensure that Germany could not again threaten France, as well as to weaken the German ability to compete with France's industrialization. Reparations would also go towards reconstruction costs in other countries directly affected by the war, such as Belgium. British Prime Minister David Lloyd George opposed harsh reparations in favour of a less crippling settlement, so that the German economy could remain a viable economic power and British trading partner. He furthermore argued that reparations should include war pensions for disabled veterans and allowances to be paid to war widows, which would reserve a larger share of the reparations for the British Empire. Wilson opposed these positions and was adamant that no indemnity be imposed upon Germany. During the peace conference the Commission on the Responsibility of the Authors of the War and on Enforcement of Penalties was established to examine the background of the war. The Commission reasoned that the "war was premeditated by the Central Powers...
and was the result of acts deliberately committed [by them] to make it unavoidable", concluding that Germany and Austria-Hungary had "deliberately worked to defeat all the many conciliatory proposals made by the Entente Powers and their repeated efforts to avoid war." This conclusion was duly incorporated into the Treaty of Versailles, led by Clemenceau and Lloyd George, who were both insistent on the inclusion of an unequivocal statement of Germany's total liability. This left Wilson at odds with the other leaders of the conference. Instead, he proposed a repetition of a note sent by United States Secretary of State Robert Lansing to the German Government on 5 November 1918, stating that the "Allied Governments... understand that compensation will be made by Germany for all damage done to the civilian population of the Allies and their property by the aggression of Germany..." The actual wording of the article was chosen by American diplomats Norman Davis and John Foster Dulles. Davis and Dulles produced a compromise between the Anglo-French and American positions, wording Articles 231 and 232 to reflect that Germany "should, morally, pay for all war costs, but, because it could not possibly afford this, would be asked only to pay for civilian damages." Article 231, in which Germany accepted the responsibility of Germany and its allies for the damages resulting from the First World War, therefore served as a legal basis for the articles following it within the reparations chapter, obliging Germany to pay compensation limited to civilian damages. Similar clauses, with slight modifications in wording, were present in the peace treaties signed by the other members of the Central Powers. Foreign Minister Count Ulrich von Brockdorff-Rantzau headed the 180-strong German peace delegation. They departed Berlin on 18 April 1919, anticipating that the peace talks would soon start and that they and the Allied Powers would negotiate a settlement.
Earlier, in February of that year, Brockdorff-Rantzau had informed the Weimar National Assembly that Germany would have to pay reparations for the devastation caused by the war, but would not pay for actual war costs. The German government had also taken the position that it would be "inadvisable... to elevate the question of war guilt". On 5 May, Brockdorff-Rantzau was informed that there would be no negotiations; once the German delegation received the conditions of peace, they would have fifteen days to reply. Following the drafting of the treaty, the German and Allied delegations met on 7 May, and the Treaty of Versailles was handed over to be translated and for a response to be issued. At this meeting Brockdorff-Rantzau stated, "We know the intensity of the hatred which meets us, and we have heard the victors' passionate demand that as the vanquished we shall be made to pay, and as the guilty we shall be punished". However, he proceeded to deny that Germany was solely responsible for the war. Following the meeting, the German delegation retired to translate the 80,000-word document. As soon as the delegation realized the terms of peace, they agreed that they could not accept them without revision. They then proceeded to send their Allied counterparts message after message attacking each part of the treaty. On 18 June, having disregarded the repeated explicit decisions of the government, Brockdorff-Rantzau declared that Article 231 would force Germany to accept full responsibility for the war. Max Weber, an advisor to the German delegation, agreed with Brockdorff-Rantzau, also challenging the Allies over the issue of war guilt. He preferred to reject the treaty rather than submit to what he called a "rotten peace". On 16 June, the Allied Powers demanded that Germany unconditionally sign the treaty within seven days or face the resumption of hostilities. The German government was divided on whether to sign or reject the peace treaty.
On 19 June, Chancellor Philipp Scheidemann resigned rather than sign the treaty and was followed by Brockdorff-Rantzau and other members of the government, leaving Germany without a cabinet or peace delegation. After being advised by Field Marshal Paul von Hindenburg that Germany was in no condition to resume the war, President Friedrich Ebert and the new Chancellor, Gustav Bauer, recommended that the Weimar National Assembly ratify the treaty. The Assembly did so by a large majority, and Clemenceau was informed nineteen minutes before the deadline expired. Germany unconditionally signed the peace treaty on 22 June. Initially, Article 231 was not correctly translated. Rather than stating "... Germany accepts responsibility of Germany and her allies causing all the loss and damage...", the German Government's edition read "Germany admits it, that Germany and her allies, as authors of the war, are responsible for all losses and damages...". Germans felt that the country had signed away its honor, and there was a prevailing sense of humiliation, as the article was seen, overall, as an injustice. Historian Wolfgang Mommsen commented that despite the public outrage, German government officials were aware "that Germany's position on this matter was not nearly so favorable as the imperial government had led the German public to believe during the war." The Allied delegation initially thought Article 231 a mundane addition to the treaty intended to limit German liability with regard to reparations, and were surprised at the vehemence of the German protests. Georges Clemenceau rebuffed Brockdorff-Rantzau's allegations, arguing that "the legal interpretation [of the article] was the correct one" and not a matter of political question. Lloyd George commented that "the English public, like the French public, thinks the Germans must above all acknowledge their obligation to compensate us for all the consequences of their aggression.
When this is done we come to the question of Germany's capacity to pay; we all think she will be unable to pay more than this document requires of her." Prior to the American entry into the war, Woodrow Wilson had called for a "peace of reconciliation with Germany", what he dubbed a "peace without victory". His wartime speeches, however, rejected these earlier notions and he took an increasingly belligerent stance towards Germany. Following the war, on 4 September 1919, during his public campaign to rally American support for the Treaty of Versailles, Wilson commented that the treaty "seeks to punish one of the greatest wrongs ever done in history, the wrong which Germany sought to do to the world and to civilization, and there ought to be no weak purpose with regard to the application of the punishment. She attempted an intolerable thing, and she must be made to pay for the attempt." Regardless of the rhetoric, the American position was to create a balanced treaty that would appease everyone. Gordon Auchincloss, secretary to Edward M. House (one of Wilson's advisers), sent a copy of the clause to the State Department and stated, "you will note that the President's principles have been protected in this clause". Historian William Keylor commented that initially both United States diplomats believed that they had "devised a brilliant solution to the reparation dilemma", appeasing both the British and French as well as Allied public opinion, even though Allied leaders were aware of concerns surrounding German willingness to pay reparations and the disappointment that could follow. Vance C. McCormick (an economic adviser to Wilson) emphasized this point, stating: "... the preamble is useful. We are adopting an unusual method in not fixing a definite sum. The preamble tends to explain this, and further, prepares the public mind for disappointment as to what actually can be secured." In 1940, Dulles stated that he was surprised that the article "could plausibly be, and in fact was, considered to be a historical judgement of war guilt". He further noted that the "profound significance of this article... came about through accident, rather than design". Dulles took it personally that the Treaty of Versailles failed in its intention of creating a lasting peace, and believed that the treaty was one of the causes of the Second World War. By 1954, as United States Secretary of State and in discussions with the Soviet Union regarding German reunification, he commented that "Efforts to bankrupt and humiliate a nation merely incite a people of vigor and of courage to break the bonds imposed upon them... Prohibitions thus incite the very acts that are prohibited." Compensation demanded from the defeated party was a common feature of peace treaties. The financial burden of the Treaty of Versailles was labelled "reparations", which distinguished it from punitive settlements usually known as indemnities. The reparations were intended for reconstruction and as compensation for families who had been bereaved by the war. Sally Marks wrote that the article "was designed to lay a legal basis for reparations" to be paid. Article 231 "established an unlimited theoretical liability" for which Germany would have to pay, but the following article "in fact narrowed German responsibility to civilian damages". When the final reparation figure was established in 1921, it was based on an Allied assessment of German capacity to pay, not on the basis of Allied claims. The London Schedule of Payments of 5 May 1921 established the full liability of the combined Central Powers at 132 billion gold marks. Of this figure, Germany was required to pay only 50 billion gold marks ($12.5 billion), a smaller amount than it had previously offered in terms of peace.
Reparations were unpopular and strained the German economy but they were payable and from 1919 to 1931, when reparations ended, Germany paid less than 21 billion gold marks. The Reparation Commission and the Bank for International Settlements gave a total German payment of 20.598 billion gold marks, whereas historian Niall Ferguson estimated that Germany paid no more than 19 billion gold marks. Ferguson also wrote that this sum was only 2.4 per cent of German national income between 1919 and 1932, while Stephen Schuker places the figure at an average of 2 per cent of national income between 1919 and 1931, in cash and kind, making a total transfer equal to 5.3 per cent of national income for the period. Gerhard Weinberg wrote that reparations were paid, towns were rebuilt, orchards replanted, mines reopened and pensions paid but the burden of repairs was shifted from the German economy to the damaged economies of the victors. Domestic German opposition to Article 231 has been held to have created a psychological and political burden on the post-war Weimar Republic. German politicians seeking international sympathy would use the article for its propaganda value, convincing many who had not read the treaties that the article implied full war guilt. German revisionist historians who subsequently attempted to deny the validity of the clause found a ready audience among ' revisionist ' writers in France, Britain, and the United States. The objective of both the politicians and historians was to prove that Germany was not solely guilty for causing the war; if that guilt could be disproved the legal requirement to pay reparations would disappear. To that end, the German government funded the Centre for the Study of the Causes of the War. This subject, the question of Germany 's guilt (Kriegsschuldfrage or war guilt question) became a major theme of Adolf Hitler 's political career.
United States Senator Henrik Shipstead argued that the failure to revise the article became a factor in Hitler 's rise to power, a view held by some historians, such as Tony Rea and John Wright, who wrote that "the harshness of the War Guilt Clause and the reparations demands made it easier for Hitler to gain power in Germany. '' Despite these views, the historical consensus is that the article and the treaty did not cause the rise of Nazism but that an unconnected rise in extremism and the Great Depression led to the NSDAP gaining greater electoral popularity and then being maneuvered into office. Fritz Klein wrote that while there was a path from Versailles to Hitler, the former did not make "Hitler 's takeover of power inevitable '' and that "the Germans had a choice when they decided to take this path. In other words, they did not have to. Hitler 's victory was not an unavoidable result of Versailles. '' In 1926, Robert C. Binkley and A.C. Mahr of Stanford University wrote that German accusations that the article assigned war guilt were "ill - founded '' and "mistaken ''. The article was more "an assumption of liability to pay damages than an admission of war guilt '', and they compared it more to "a man who undertakes to pay all the cost of a motor accident '' than to "the plea of guilty entered by an accused criminal ''. They wrote that "it is absurd '' to charge the reparation articles of the treaty with any "political meaning '' and the legal interpretation "is the only one that can stand ''. They concluded that German opposition "is based upon a text which has no legal validity whatsoever, and which Germany never signed at all. '' Sidney Fay was the "most outspoken and influential critic '' of the article. In 1928, he concluded that all of Europe shared the blame for the war and that Germany had no intention of launching a general European war in 1914. In 1937, E.H.
Carr commented that "in the passion of the moment '' the Allied Powers had "failed to realize that this extorted admission of guilt could prove nothing, and must excite bitter resentment in German minds. '' He concluded "German men of learning set to work to demonstrate the guiltlessness of their country, fondly believing that, if this could be established, the whole fabric of the treaty would collapse. '' René Albrecht - Carrié wrote in May 1940 that "article 231 gave rise to an unfortunate controversy, unfortunate because it served to raise a false issue. '' He wrote that the German inter-war argument "rested on her responsibility for the outbreak of the war '' and if that guilt could be disproved then the legal requirement to pay reparations would disappear. In 1942, Luigi Albertini published The Origins of the War of 1914 and concluded that Germany was primarily responsible for the outbreak of the war. Albertini 's work, rather than spurring on new debate, was the culmination of the first research phase into the war guilt question. The issue resurfaced between 1959 and 1969, when Fritz Fischer in Germany 's Aims in the First World War and War of Illusions "destroyed the consensus about shared responsibility for the First World War '' and "placed the blame... firmly on the shoulders of the Wilhelmine elite. '' By the 1970s, his work "had emerged as the new orthodoxy on the origins of the First World War ''. In the 1980s, James Joll led a new wave of First World War research, concluding that the origins of the First World War were "complex and varied '', although "by December 1912 '' Germany had decided to go to war.
In 1978, Marks re-examined the reparation clauses of the treaty and wrote that "the much - criticized ' war guilt clause ', Article 231, which was designed to lay a legal basis for reparations, in fact makes no mention of war guilt '' but only specified that Germany was to pay for the damages caused by the war they imposed upon the allies and "that Germany committed an act of aggression against Belgium is beyond dispute ''. "Technically, Britain entered '' the war and French troops entered Belgium "to honor '' the "legal obligation '' to defend Belgium under the 1839 Treaty of London and that "Germany openly acknowledged her responsibility in regard to Belgium on August 4, 1914 and May 7, 1919. '' Marks also wrote that "the same clause, mutatis mutandis '' was incorporated "in the treaties with Austria and Hungary, neither of whom interpreted it as a declaration of war guilt. '' Wolfgang Mommsen wrote that "Austria and Hungary, understandably paid no attention to this aspect of the draft treaty ''. In 1986, Marks wrote that the German foreign office, supported by military and civilian notables, "focused on Article 231... hoping that, if one could refute German responsibility for the war, not only reparations but the entire treaty would collapse ''. Manfred Boemeke, Gerald Feldman, and Elisabeth Glaser wrote that "pragmatic requirements characteristically influenced the shaping of the much misunderstood Article 231. That paragraph reflected the presumed legal necessity to define German responsibility for the war in order to specify and limit the Reich 's obligations ''. P.M.H. Bell wrote that despite the article not using the term ' guilt ', and while "it may be that its drafters did not intend to convey a moral judgement of Germany '', the article has "almost universally '' become known as the war guilt clause of the treaty.
Margaret MacMillan wrote that the German public 's interpretation of Article 231 as unequivocally ascribing the fault for the war to Germany and her allies, "came to be the object of particular loathing in Germany and the cause of uneasy consciences among the Allies. '' The Allies never expected such a hostile reaction, for "no one thought there would be any difficulty over the clauses themselves. '' Stephen Neff wrote that "the term ' war guilt ' is a slightly unfortunate one, since to lawyers, the term ' guilt ' primarily connotes criminal liability '' while "the responsibility of Germany envisaged in the Versailles Treaty... was civil in nature, comparable to the indemnity obligation of classical just - war theory. '' Louise Slavicek wrote that while "the article was an honest reflection of the treaty - writers ' beliefs, including such a clause in the peace settlement was undiplomatic, to say the least. '' Diane Kunz wrote that "rather than being seen as an American lawyer 's clever attempt to limit actual German financial responsibility by buying off French politicians and their public with the sop of a piece of paper '' Article 231 "became an easily exploitable open sore ''. Ian Kershaw wrote that the "national disgrace '' felt over the article and "defeat, revolution, and the establishment of democracy '', had "fostered a climate in which a counter-revolutionary set of ideas could gain wide currency '' and "enhanced the creation of a mood in which '' extreme nationalist ideas could gain a wider audience and take hold. Elazar Barkan argues that by "forcing an admission of war guilt at Versailles, rather than healing, the victors instigated resentment that contributed to the rise of Fascism. '' Norman Davies wrote that the article invited Germany "to accept sole guilt for the preceding war ''. Klaus Schwabe wrote that the article 's influence went far beyond the discussion of war guilt. 
By "refusing to acknowledge Germany 's ' war guilt ' the new German government implicitly exonerated the old monarchial order '' and more importantly failed "to dissociate itself from the old regime. '' In doing so "it undermined its claim that post-revolutionary Germany was a historic new democratic beginning deserving credit at the peace conference. ''
when do you pronounce the e at the end of a word
Silent e - wikipedia In English orthography, many words feature a silent ⟨ e ⟩, most commonly at the end of a word or morpheme. Typically it represents a vowel sound that was formerly pronounced, but became silent in late Middle English or Early Modern English. In a large class of words, as a consequence of a series of historical sound changes, including the Great Vowel Shift affecting long vowels, the former presence of the vowel sound represented by the ⟨ e ⟩ left its mark in the form of a change in the pronunciation of the preceding vowel. This can be seen in words such as rid / rɪd / and ride / raɪd /, in which the presence of the final, unpronounced ⟨ e ⟩ appears to alter the sound of the preceding ⟨ i ⟩. A silent ⟨ e ⟩ which has this effect is sometimes called a magic ⟨ e ⟩. The normal effect is to convert a short vowel sound to a long one, but because of the complications of the Great Vowel Shift, the long vowel is not simply a lengthened version of the corresponding short one, and in most cases (as in the example of ride) is in fact a diphthong. This vowel - altering effect of silent ⟨ e ⟩ entered into modern English orthography, and is present in new words (such as bike) in which there is no historical reason for the presence of the ⟨ e ⟩ other than the need to mark the pronunciation of the preceding vowel. When silent ⟨ e ⟩ occurs at the end of an English word, it usually converts a preceding vowel to its "long '' equivalent, which means that it makes a vowel "say its name '' (except in the case of ⟨ y ⟩, which has the same pronunciation as ⟨ i ⟩ -- compare byte / bite). Depending on dialect, English has anywhere from 13 to more than 20 separate vowel phonemes, both monophthongs and diphthongs. Silent ⟨ e ⟩ is one of the ways English orthography is able to use the Latin alphabet 's five vowel characters to represent so many vowel sounds. 
There is usually only one consonant between the silent ⟨ e ⟩ and the preceding vowel; a double consonant may be a cue that the ⟨ e ⟩ is not silent and does not affect the preceding vowel (as in Jesse and posse). Traditionally, the vowels / eɪ iː aɪ oʊ juː / are said to be the "long '' counterparts of the vowels / æ ɛ ɪ ɒ ʌ /, respectively, which are said to be "short ''. This terminology reflects the historical pronunciation and development of those vowels, but as a phonetic description of their current values it is no longer accurate. The English values of the letters ⟨ a, e, i, o, u ⟩ used to be similar to the values those letters had in Spanish, French or Italian, namely (a), (e), (i), (o), (u). The Great Vowel Shift leading to Early Modern English gave current English "long vowels '' values that differ markedly from the "short vowels '' that they relate to in writing. Since English has a literary tradition that goes back into the Middle English period, written English continues to use Middle English writing conventions to mark distinctions that had been reordered by the chain shift of the long vowels. However, the pronunciation of ⟨ u ⟩ before silent ⟨ e ⟩, found mainly in borrowings from French and Latin, is a consequence not of the Great Vowel Shift but of a different series of changes. When final ⟨ e ⟩ is not silent, this may be indicated in various ways in English spelling. When representing / iː /, this is usually done via doubling (employee, with employe as an obsolete spelling). Non-silent ⟨ e ⟩ can also be indicated by a diacritical mark, such as a grave accent (learnèd) or a diaeresis (learnëd, Brontë). Other diacritical marks are preserved in loanwords (résumé, café), or introduced on this pattern (maté), though these diacritics are frequently omitted. Other words have no indication that the ⟨ e ⟩ is not silent (pace, Latin loan meaning "with due respect to ''). 
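The orthographic pattern described above (a single consonant between the vowel and the final ⟨ e ⟩, with a double consonant as a counter-cue) can be sketched as a toy heuristic. This is purely illustrative and not from the article: the regex and function name are invented, and the article's own exceptions (come, done, give) show why a real checker would need a word list rather than a pattern.

```python
import re

# Toy heuristic (illustrative only): a word is a *candidate* for "magic e" if
# it ends in vowel + single consonant + e. A double consonant (posse, Jesse)
# fails the pattern, matching the cue described in the article.
MAGIC_E = re.compile(r"[aeiou][bcdfghjklmnpqrstvwz]e$")

def looks_like_magic_e(word: str) -> bool:
    """Return True if the word ends in the vowel-consonant-e pattern."""
    return bool(MAGIC_E.search(word.lower()))

# Candidates: ride, bike, cone. Non-candidates: posse (double consonant),
# rid (no final e). Note that come and done also match the pattern but are
# pronunciation exceptions, which is why this can only be a heuristic.
```

A pattern like this only flags spelling shape; it says nothing about whether the preceding vowel is actually pronounced long, which depends on the word's history.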
The sounds of the ⟨ a ⟩ group are some of the more dialectally complex features of contemporary modern English; the phonemes represented in modern "short '' ⟨ a ⟩ include / æ /, / ɑː /, and / ɔː /. See broad A and cot -- caught merger for some of the cross-dialect complexities of the English ⟨ a ⟩ group. A silent ⟨ e ⟩ typically moves ⟨ a ⟩ to / eɪ /. Silent ⟨ e ⟩ typically moves ⟨ e ⟩ to / iː /. This change is generally consistent across nearly all English dialects today, though previously many dialects used / eː / instead before migrating to / iː /. Some parts of Mid-Ulster English still use / eː /. For the "long vowel '' represented in written English by ⟨ i ⟩, the effect of silent ⟨ e ⟩ is to turn it into a diphthong / aɪ /. Short ⟨ o ⟩ often falls in with short ⟨ a ⟩ and shares some of the complexities of that group. Variously, the written short ⟨ o ⟩ can represent / ɒ /, / ʌ /, and / ɔː /. The usual effect of silent ⟨ e ⟩ on written ⟨ o ⟩ is to fix it as a long / oʊ / sound. Short ⟨ u ⟩ can variably represent either / ʌ / or / ʊ /, as a result of the foot -- strut split. Silent ⟨ e ⟩ generally turns ⟨ u ⟩ to its corresponding long version / juː /, which developed from Middle English / ɪu /. Variably by dialect and even word, the / j / in this / juː / may drop (rune / ˈruːn /, lute / ˈluːt /), causing a merger with / uː /; in other cases, the / j / coalesces with the preceding consonant (issue / ˈɪs. juː / → / ˈɪʃuː /), meaning that the silent ⟨ e ⟩ can affect the quality of a consonant much earlier in the word (educate / ˈɛdʒukeɪt /, nature / ˈneɪtʃər /). Along with indicating the vowel, the silent ⟨ e ⟩ also functions as a front vowel to indicate a soft ⟨ c ⟩ or soft ⟨ g ⟩. For example, / s / is the expected outcome of the ⟨ ce ⟩ digraph in a word such as hence, and the ⟨ g ⟩ in huge is pronounced / dʒ /. Silent ⟨ e ⟩ is used in some words with ⟨ dg ⟩ in which it does not lengthen a vowel: rĭdgɇ, sĕdgɇ, hŏdgɇ - pŏdgɇ.
Spelling such words with ⟨ j ⟩, the other letter that indicates that sound, does not occur in native or nativized English words. In some common words that historically had long vowels, silent ⟨ e ⟩ no longer has its usual lengthening effect. For example, the ⟨ o ⟩ in come (as compared to in cone) and in done (as compared to in dome). This is especially common in some words that historically had ⟨ f ⟩ instead of ⟨ v ⟩, such as give and love; in Old English, / f / became / v / when it appeared between two vowels (OE giefan, lufu), while a geminated ⟨ ff ⟩ lost its doubling to yield / f / in that position. This also applies to a large class of words with the adjective suffix - ive, such as captive (where, again, the ⟨ i ⟩ is not lengthened, unlike in hive), that originally had - if in French. Some loanwords from French (promenade) retained their French silent ⟨ e ⟩, called e muet or e caduc, which has no effect on the preceding vowel. Some English words vary their accented syllable based on whether they are used as nouns or as adjectives. In a few words such as minute, this may affect the operation of silent ⟨ e ⟩: as an adjective, minúte (/ maɪˈnjuːt /, "small '') has the usual value of ⟨ u ⟩ followed by silent ⟨ e ⟩, while as a noun mínute (/ ˈmɪnɪt /, the unit of time) silent ⟨ e ⟩ does not operate. See initial - stress - derived noun for similar patterns that may give rise to exceptions. Silent ⟨ e ⟩, like many conventions of written language that no longer reflect current pronunciations, was not always silent. In Chaucer 's Balade, the first line does not scan properly unless what appears to current eyes to be a silent ⟨ e ⟩ is pronounced: Gilte ends in the same sound as modern English Malta. In Middle English, this final schwa had some grammatical significance, although that was mostly lost by Chaucer 's time. It was elided regularly when a word beginning with a vowel came next. 
The consequences of silent ⟨ e ⟩ in contemporary spelling reflect the phonology of Middle English. In Middle English, as a consequence of the lax vowel rule shared by most Germanic languages, vowels were long when they historically occurred in stressed open syllables; they were short when they occurred in "checked '' or closed syllables. Thus bide / ˈbiːdə / had a long vowel, while bid / bid / had a short one. The historical sequence went something like this: The writing convention of silent ⟨ e ⟩ indicates that different vowel qualities had become phonemic, and were preserved even when phonemic vowel length was lost. Long vowels could arise by other mechanisms. One of these is known as "compensatory lengthening ''; this occurred when consonants formerly present were lost: maid is the modern descendant of Old English mægde. In this example, the g actually became a glide / j /, so in a sense, the length of the consonant stayed where it always had been, and there was no "compensation. '' The silent ⟨ e ⟩ rule became available to represent long vowels in writing that arose from other sources; Old English brŷd, representing * bruʒd - i -, became Modern English bride. The rules of current English spelling were first set forth by Richard Mulcaster in his 1582 publication Elementarie. Mulcaster called silent ⟨ e ⟩ "qualifying ⟨ e ⟩ '', and wrote of it: It altereth the sound of all the vowells, euen quite thorough one or mo consonants as, máde, stéme, éche, kínde, strípe, óre, cúre, tóste sound sharp with the qualifying E in their end: whereas, màd, stèm, èch, frind, strip, or, cut, tost, contract of tossed sound flat without the same E, And therefor the same loud and sharp sound in the word, calleth still for the qualifying e, in the end, as the flat and short nedeth it not. It qualifyeth no ending vowell, bycause it followeth none in the end, sauing i. 
as in daie, maie, saie, trewlie, safetie, where it maketh i, either not to be heard, or verie gentlie to be heard, which otherwise would sound loud and sharp, and must be expressed by y. as, deny, aby, ally. Which kinde of writing shalbe noted hereafter. It altereth also the force of, c, g, s, tho it sound not after them, as in hence, for that, which might sound henk, if anie word ended in c. in swinge differing from swing, in vse differing from vs. Mulcaster also formulated the rule that a double letter, when final, indicated a short vowel in English, while the absence of doubling and the presence of silent ⟨ e ⟩ made the vowel long. In modern English, this rule is most prominent in its effects on the written "a '' series: Digraphs are sometimes treated as single letters for purposes of this rule:
low concentration of oxygen in blood at high altitude
Effects of high altitude on humans - wikipedia The effects of high altitude on humans are considerable. The percentage oxygen saturation of hemoglobin determines the content of oxygen in blood. After the human body reaches around 2,100 m (7,000 feet) above sea level, the saturation of oxyhemoglobin begins to plummet. However, the human body has both short - term and long - term adaptations to altitude that allow it to partially compensate for the lack of oxygen. Athletes use these adaptations to help their performance. There is a limit to the level of adaptation; mountaineers refer to the altitudes above 8,000 metres (26,000 ft) as the "death zone '', where it is generally believed that no human body can acclimatize. The human body can perform best at sea level, where the atmospheric pressure is 101,325 Pa or 1013.25 millibars (or 1 atm, by definition). The concentration of oxygen (O₂) in sea - level air is 20.9 %, so the partial pressure of O₂ (pO₂) is 21.136 kPa. In healthy individuals, this saturates hemoglobin, the oxygen - binding red pigment in red blood cells. Atmospheric pressure decreases exponentially with altitude while the O₂ fraction remains constant to about 100 km, so pO₂ decreases exponentially with altitude as well. It is about half of its sea - level value at 5,000 m (16,000 ft), the altitude of the Everest Base Camp, and only a third at 8,848 m (29,029 ft), the summit of Mount Everest. When pO₂ drops, the body responds with altitude acclimatization. Mountain medicine recognizes three altitude regions that reflect the lowered amount of oxygen in the atmosphere: Travel to each of these altitude regions can lead to medical problems, from the mild symptoms of acute mountain sickness to the potentially fatal high - altitude pulmonary edema (HAPE) and high - altitude cerebral edema (HACE). The higher the altitude, the greater the risk. Research also indicates elevated risk of permanent brain damage in people climbing to extreme altitudes.
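The pressure relationship described above can be sketched numerically. This is a rough illustration, not from the article: the text states only that pressure falls exponentially with altitude, so the scale height of about 7,000 m used here is an assumed standard-atmosphere approximation, and the constant and function names are invented.

```python
import math

SEA_LEVEL_PRESSURE_KPA = 101.325  # 1 atm, by definition
O2_FRACTION = 0.209               # roughly constant up to ~100 km
SCALE_HEIGHT_M = 7000.0           # assumed scale height for the exponential model

def partial_pressure_o2(altitude_m: float) -> float:
    """Approximate pO2 in kPa, modelling pressure as exponential decay with altitude."""
    pressure_kpa = SEA_LEVEL_PRESSURE_KPA * math.exp(-altitude_m / SCALE_HEIGHT_M)
    return O2_FRACTION * pressure_kpa

# Sea level gives ~21.2 kPa; 5,000 m (Everest Base Camp) gives roughly half of
# that, and 8,848 m (the summit of Everest) roughly a third, as in the text.
```

With this assumed scale height, the model reproduces the article's "about half at 5,000 m, about a third at the summit" figures to within a few per cent.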
Expedition doctors commonly stock a supply of dexamethasone, or "dex, '' to treat these conditions on site. Humans have survived for two years at 5,950 m (19,520 ft, 475 millibars of atmospheric pressure), which is the highest recorded permanently tolerable altitude; the highest permanent settlement known, La Rinconada, is at 5,100 m (16,700 ft). At extreme altitudes, above 7,500 m (24,600 ft, 383 millibars of atmospheric pressure), sleeping becomes very difficult, digesting food is near - impossible, and the risk of HAPE or HACE increases greatly. The death zone, in mountaineering, refers to altitudes above a certain point where the amount of oxygen is insufficient to sustain human life. This point is generally tagged as 8,000 m (26,000 ft, less than 356 millibars of atmospheric pressure). All 14 summits in the death zone above 8,000 m, called eight - thousanders, are located in the Himalaya and Karakoram mountain ranges. The concept of the death zone (originally the lethal zone) was first conceived in 1953 by Edouard Wyss - Dunant, a Swiss doctor, in an article about acclimatization published in the journal of the Swiss Foundation for Alpine Research. Many deaths in high - altitude mountaineering have been caused by the effects of the death zone, either directly (loss of vital functions) or indirectly (wrong decisions made under stress, physical weakening leading to accidents). In the death zone, the human body can not acclimatize. An extended stay in the zone without supplementary oxygen will result in deterioration of bodily functions, loss of consciousness, and, ultimately, death. Scientists at the High Altitude Pathology Institute in Bolivia dispute the existence of a death zone, based on observation of extreme tolerance to hypoxia in patients with chronic mountain sickness and normal fetuses in - utero, both of which present pO₂ levels similar to those at the summit of Mount Everest.
Studies have shown that the approximately 140 million people who live at elevations above 2,500 metres (8,200 ft) have adapted to the lower oxygen levels. These adaptations are especially pronounced in people living in the Andes and the Himalayas. Compared with acclimatized newcomers, native Andean and Himalayan populations have better oxygenation at birth, enlarged lung volumes throughout life, and a higher capacity for exercise. Tibetans demonstrate a sustained increase in cerebral blood flow, lower hemoglobin concentration, and less susceptibility to chronic mountain sickness (CMS). These adaptations may reflect the longer history of high altitude habitation in these regions. A significantly lower mortality rate from cardiovascular disease is observed for residents at higher altitudes. Similarly, a dose - response relationship exists between increasing elevation and decreasing obesity prevalence in the United States. This is not explained by migration alone. On the other hand, people living at higher elevations also have a higher rate of suicide in the United States. The correlation between elevation and suicide risk was present even when the researchers controlled for known suicide risk factors, including age, gender, race, and income. Research has also indicated that oxygen levels are unlikely to be a factor, considering that there is no indication of increased mood disturbances at high altitude in those with sleep apnea or in heavy smokers at high altitude. The cause for the increased suicide risk is as yet unknown. The human body can adapt to high altitude through both immediate and long - term acclimatization. At high altitude, in the short term, the lack of oxygen is sensed by the carotid bodies, which causes an increase in the breathing depth and rate (hyperpnea). However, hyperpnea also causes the adverse effect of respiratory alkalosis, inhibiting the respiratory center from enhancing the respiratory rate as much as would be required.
Inability to increase the breathing rate can be caused by inadequate carotid body response or pulmonary or renal disease. In addition, at high altitude, the heart beats faster; the stroke volume is slightly decreased; and non-essential bodily functions are suppressed, resulting in a decline in food digestion efficiency (as the body suppresses the digestive system in favor of increasing its cardiopulmonary reserves). Full acclimatization, however, requires days or even weeks. Gradually, the body compensates for the respiratory alkalosis by renal excretion of bicarbonate, allowing adequate respiration to provide oxygen without risking alkalosis. It takes about four days at any given altitude and can be enhanced by drugs such as acetazolamide. Eventually, the body undergoes physiological changes such as lower lactate production (because reduced glucose breakdown decreases the amount of lactate formed), decreased plasma volume, increased hematocrit (polycythemia), increased RBC mass, a higher concentration of capillaries in skeletal muscle tissue, increased myoglobin, increased mitochondria, increased aerobic enzyme concentration, increase in 2, 3 - BPG, hypoxic pulmonary vasoconstriction, and right ventricular hypertrophy. Pulmonary artery pressure increases in an effort to oxygenate more blood. Full hematological adaptation to high altitude is achieved when the increase of red blood cells reaches a plateau and stops. The length of full hematological adaptation can be approximated by multiplying the altitude in kilometres by 11.4 days. For example, to adapt to 4,000 metres (13,000 ft) of altitude would require 45.6 days. The upper altitude limit of this linear relationship has not been fully established. For athletes, high altitude produces two contradictory effects on performance. 
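The rule of thumb given above for full hematological adaptation (11.4 days per kilometre of altitude) is simple enough to express directly; the function name here is invented for illustration.

```python
def hematological_adaptation_days(altitude_m: float) -> float:
    """Estimate days to full hematological adaptation: ~11.4 days per km of altitude."""
    return (altitude_m / 1000.0) * 11.4

# The article's example: adapting to 4,000 m takes about 4 * 11.4 = 45.6 days.
# As the text notes, the upper altitude limit of this linear relationship has
# not been fully established, so the estimate should not be extrapolated far.
```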
For explosive events (sprints up to 400 metres, long jump, triple jump) the reduction in atmospheric pressure means there is less resistance from the atmosphere and the athlete 's performance will generally be better at high altitude. For endurance events (races of 800 metres or more), the predominant effect is the reduction in oxygen, which generally reduces the athlete 's performance at high altitude. Sports organizations acknowledge the effects of altitude on performance: the International Association of Athletics Federations (IAAF), for example, has ruled that performances achieved at an altitude greater than 1,000 metres will be approved for record purposes, but carry the notation of "A '' to denote they were set at altitude. The 1968 Summer Olympics were held at altitude in Mexico City. With the best athletes in the world competing for the most prestigious title, most short sprint and jump records were set there at altitude. Other records were also set at altitude in anticipation of those Olympics. Bob Beamon 's record in the long jump held for almost 23 years and has only been beaten once without altitude or wind assistance. Many of the other records set at Mexico City were later surpassed by marks set at altitude. Athletes can also take advantage of altitude acclimatization to increase their performance. The same changes that help the body cope with high altitude increase performance back at sea level. However, this may not always be the case. Any positive acclimatization effects may be negated by a de-training effect as the athletes are usually not able to exercise with as much intensity at high altitudes compared to sea level. This conundrum led to the development of the altitude training modality known as "Live - High, Train - Low '', whereby the athlete spends many hours a day resting and sleeping at one (high) altitude, but performs a significant portion of their training, possibly all of it, at another (lower) altitude.
A series of studies conducted in Utah in the late 1990s by researchers Ben Levine, Jim Stray - Gundersen, and others, showed significant performance gains in athletes who followed such a protocol for several weeks. Other studies have shown performance gains from merely performing some exercising sessions at high altitude, yet living at sea level. The performance - enhancing effect of altitude training could be due to increased red blood cell count, more efficient training, or changes in muscle physiology.
what are the 15 national holidays in spain
Public holidays in Spain - wikipedia Public holidays celebrated in Spain include a mix of religious (Roman Catholic), national and regional observances. Each municipality is allowed to have a maximum of 13 public holidays per year; a maximum of nine of these are chosen by the national government and at least two are chosen locally. If one of the "national holidays '' happens to fall on a Sunday, the regional governments -- the autonomous communities of Spain -- can choose an alternate holiday, or they can allow local authorities to choose. In practice, except for holidays falling on a Sunday, the regional governments can choose up to three holidays per year; or they can choose fewer to allow for more options at the local level. A puente (bridge) is sometimes made between weekends and holidays that fall on Tuesday or Thursday. The puente will then create a long weekend. Since 2010, Ceuta and Melilla, both autonomous cities of Spain, have declared the Muslim holiday of Eid al - Adha or Feast of the Sacrifice, as an official public holiday. It was the first time a non-Christian religious festival has been officially celebrated in Spain since the Reconquista. The following table is updated to 2018.
what is the best selling christian song of all time
List of number - one Billboard Christian Albums - wikipedia Christian Albums (formerly named Hot Christian Albums) is a Billboard chart of the best - selling Christian music albums of the week, "ranked by sales data as compiled by Nielsen SoundScan ''.
what order do the lord of the rings books go in
The Lord of the Rings - wikipedia The Lord of the Rings is an epic high fantasy novel written by English author and scholar J.R.R. Tolkien. The story began as a sequel to Tolkien 's 1937 fantasy novel The Hobbit, but eventually developed into a much larger work. Written in stages between 1937 and 1949, The Lord of the Rings is one of the best - selling novels ever written, with over 150 million copies sold. The title of the novel refers to the story 's main antagonist, the Dark Lord Sauron, who had in an earlier age created the One Ring to rule the other Rings of Power as the ultimate weapon in his campaign to conquer and rule all of Middle - earth. From quiet beginnings in the Shire, a hobbit land not unlike the English countryside, the story ranges across Middle - earth, following the course of the War of the Ring through the eyes of its characters, not only the hobbits Frodo Baggins, Samwise "Sam '' Gamgee, Meriadoc "Merry '' Brandybuck and Peregrin "Pippin '' Took, but also the hobbits ' chief allies and travelling companions: the Men, Aragorn son of Arathorn, a Ranger of the North, and Boromir, a Captain of Gondor; Gimli son of Glóin, a Dwarf warrior; Legolas Greenleaf, an Elven prince; and Gandalf, a wizard. The work was initially intended by Tolkien to be one volume of a two - volume set, the other to be The Silmarillion, but this idea was dismissed by his publisher. For economic reasons, The Lord of the Rings was published in three volumes over the course of a year from 29 July 1954 to 20 October 1955. The three volumes were titled The Fellowship of the Ring, The Two Towers and The Return of the King. Structurally, the novel is divided internally into six books, two per volume, with several appendices of background material included at the end. Some editions combine the entire work into a single volume. The Lord of the Rings has since been reprinted numerous times and translated into 38 languages. 
Tolkien 's work has been the subject of extensive analysis of its themes and origins. Although a major work in itself, the story was only the last movement of a larger epic Tolkien had worked on since 1917, in a process he described as mythopoeia. Influences on this earlier work, and on the story of The Lord of the Rings, include philology, mythology, religion and the author 's distaste for the effects of industrialization, as well as earlier fantasy works and Tolkien 's experiences in World War I. The Lord of the Rings in its turn is considered to have had a great effect on modern fantasy; the impact of Tolkien 's works is such that the use of the words "Tolkienian '' and "Tolkienesque '' have been recorded in the Oxford English Dictionary. The enduring popularity of The Lord of the Rings has led to numerous references in popular culture, the founding of many societies by fans of Tolkien 's works, and the publication of many books about Tolkien and his works. The Lord of the Rings has inspired, and continues to inspire, artwork, music, films and television, video games, board games, and subsequent literature. Award - winning adaptations of The Lord of the Rings have been made for radio, theatre, and film. In 2003, it was named Britain 's best - loved novel of all time in the BBC 's The Big Read. Thousands of years before the events of the novel, the Dark Lord Sauron had forged the One Ring to rule the other Rings of Power and corrupt those who wore them: the leaders of Men, Elves and Dwarves. Sauron was defeated by an alliance of Elves and Men led by Gil - galad and Elendil, respectively. In the final battle, Isildur, son of Elendil, cut the One Ring from Sauron 's finger, causing Sauron to lose his physical form. Isildur claimed the Ring as an heirloom for his line, but when he was later ambushed and killed by the Orcs, the Ring was lost in the River Anduin. Over two thousand years later, the Ring was found by one of the river - folk called Déagol. 
His friend Sméagol fell under the Ring 's influence and strangled Déagol to acquire it. Sméagol was banished and hid under the Misty Mountains. The Ring gave him long life and changed him over hundreds of years into a twisted, corrupted creature called Gollum. Gollum lost the Ring, his "precious '', and as told in The Hobbit, Bilbo Baggins found it. Meanwhile, Sauron assumed a new form and took back his old realm of Mordor. When Gollum set out in search of the Ring, he was captured and tortured by Sauron. Sauron learned from Gollum that "Baggins '' of the Shire had taken the Ring. Gollum was set loose. Sauron, who needed the Ring to regain his full power, sent forth his powerful servants, the Nazgûl, to seize it. The story begins in the Shire, where the hobbit Frodo Baggins inherits the Ring from Bilbo Baggins, his cousin and guardian. Neither hobbit is aware of the Ring 's nature, but Gandalf the Grey, a wizard and an old friend of Bilbo, suspects it to be Sauron 's Ring. Seventeen years later, after Gandalf confirms his guess, he tells Frodo the history of the Ring and counsels him to take it away from the Shire. Frodo sets out, accompanied by his gardener, servant and friend, Samwise "Sam '' Gamgee, and two cousins, Meriadoc "Merry '' Brandybuck and Peregrin "Pippin '' Took. They are nearly caught by the Black Riders, but shake off their pursuers by cutting through the Old Forest. There they are aided by Tom Bombadil, a strange and merry fellow who lives with his wife Goldberry in the forest. The hobbits reach the town of Bree, where they encounter a Ranger named Strider, whom Gandalf had mentioned in a letter. Strider persuades the hobbits to take him on as their guide and protector. Together, they leave Bree after another close escape from the Black Riders. On the hill of Weathertop, they are again attacked by the Black Riders, who wound Frodo with a cursed blade. Strider fights them off and leads the hobbits towards the Elven refuge of Rivendell. 
Frodo falls deathly ill from the wound. The Black Riders nearly capture him at the Ford of Bruinen, but flood waters summoned by Elrond, master of Rivendell, rise up and overwhelm them. Frodo recovers in Rivendell under Elrond 's care. The Council of Elrond discusses the history of Sauron and the Ring. Strider is revealed to be Aragorn, Isildur 's heir. Gandalf reports that Sauron has corrupted Saruman, chief of the wizards. The Council decides that the Ring must be destroyed, but that can only be done by sending it to the Fire of Mount Doom in Mordor, where it was forged. Frodo takes this task upon himself. Elrond, with the advice of Gandalf, chooses companions for him. The Company of the Ring are nine in number: Frodo, Sam, Merry, Pippin, Aragorn, Gandalf, Gimli the Dwarf, Legolas the Elf, and the Man Boromir, son of Denethor, the Ruling Steward of the land of Gondor. After a failed attempt to cross the Misty Mountains through the Redhorn Pass, the Company are forced to take a perilous path through the Mines of Moria. They are attacked by the Watcher in the Water before the doors of Moria. Inside Moria, they learn of the fate of Balin and his colony of Dwarves. After surviving an attack, they are pursued by Orcs and by an ancient demon called a Balrog. Gandalf faces the Balrog, and both of them fall into the abyss. The others escape and find refuge in the Elven forest of Lothlórien, where they are counselled by its rulers, Galadriel and Celeborn. With boats and gifts from Galadriel, the Company travel down the River Anduin to the hill of Amon Hen. There, Boromir tries to take the Ring from Frodo, but Frodo puts it on and disappears. Frodo chooses to go alone to Mordor, but Sam guesses what he intends and goes with him. Orcs sent by Saruman and Sauron kill Boromir and capture Merry and Pippin. Aragorn, Gimli and Legolas debate which pair of hobbits to follow. They decide to pursue the Orcs taking Merry and Pippin to Saruman. 
In the kingdom of Rohan, the Orcs are slain by a company of Rohirrim. Merry and Pippin escape into Fangorn Forest, where they are befriended by Treebeard, the oldest of the tree - like Ents. Aragorn, Gimli and Legolas track the hobbits to Fangorn. There they unexpectedly meet Gandalf. Gandalf explains that he slew the Balrog; darkness took him, but he was sent back to Middle - earth to complete his mission. He is clothed in white and is now Gandalf the White, for he has taken Saruman 's place as the chief of the wizards. Gandalf assures his friends that Merry and Pippin are safe. Together they ride to Edoras, capital of Rohan. Gandalf frees Théoden, King of Rohan, from the influence of Saruman 's spy Gríma Wormtongue. Théoden musters his fighting strength and rides with his men to the ancient fortress of Helm 's Deep, while Gandalf departs to seek help from Treebeard. Meanwhile, the Ents, roused by Merry and Pippin from their peaceful ways, attack Isengard, Saruman 's stronghold, and trap the wizard in the tower of Orthanc. Gandalf convinces Treebeard to send an army of Huorns to Théoden 's aid. Gandalf brings an army of Rohirrim to Helm 's Deep, and they defeat the Orcs, who flee into the "forest '' of Huorns, never to be seen again. Gandalf offers Saruman a chance to turn away from evil. When Saruman refuses to listen, Gandalf strips him of his rank and most of his powers. After Saruman crawls back to his prison, Wormtongue drops a sphere to try and kill Gandalf. Pippin picks it up; it is revealed to be a palantír, a seeing - stone that Saruman used to speak with Sauron and through which Saruman was ensnared. Pippin is seen by Sauron. Gandalf rides for Minas Tirith, chief city of Gondor, taking Pippin with him. Frodo and Sam capture Gollum, who has followed them from Moria. They force him to guide them to Mordor. They find that the Black Gate of Mordor is too well guarded, so instead they travel to a secret way Gollum knows. 
On the way, they encounter Faramir, who, unlike his brother Boromir, resists the temptation to seize the Ring. Gollum -- who is torn between his loyalty to Frodo and his desire for the Ring -- betrays Frodo by leading him to the great spider Shelob in the tunnels of Cirith Ungol. Frodo falls to Shelob 's sting. But with the help of Galadriel 's gifts, Sam fights off the spider. Believing Frodo to be dead, Sam takes the Ring to continue the quest alone. Orcs find Frodo; Sam overhears them and learns that Frodo is still alive. Sauron sends a great army against Gondor. Gandalf arrives at Minas Tirith to warn Denethor of the attack, while Théoden musters the Rohirrim to ride to Gondor 's aid. Minas Tirith is besieged. Denethor is deceived by Sauron and falls into despair. He burns himself alive on a pyre, nearly taking his son Faramir with him. Aragorn, accompanied by Legolas, Gimli and the Rangers of the North, takes the Paths of the Dead to recruit the Dead Men of Dunharrow, who are bound by a curse which denies them rest until they fulfil their ancient forsworn oath to fight for the King of Gondor. Following Aragorn, the Army of the Dead strikes terror into the Corsairs of Umbar invading southern Gondor. Aragorn defeats the Corsairs and uses their ships to transport the men of southern Gondor up the Anduin, reaching Minas Tirith just in time to turn the tide of battle. Éowyn, Théoden 's niece, slays the Lord of the Nazgûl with help from Merry. Together, Gondor and Rohan defeat Sauron 's army in the Battle of the Pelennor Fields, though at great cost. Théoden is slain, and Éowyn and Merry are injured. Meanwhile, Sam rescues Frodo from the tower of Cirith Ungol. They set out across Mordor. Aragorn leads an army of men from Gondor and Rohan to march on the Black Gate to distract Sauron from his true danger. His army is vastly outnumbered by the great might of Sauron. Frodo and Sam reach the edge of the Cracks of Doom, but Frodo can not resist the Ring any longer. 
He claims it for himself and puts it on his finger. Gollum suddenly reappears. He struggles with Frodo and bites off Frodo 's finger with the Ring still on it. Celebrating wildly, Gollum loses his footing and falls into the Fire, taking the Ring with him. When the Ring is destroyed, Sauron loses his power forever. All he created collapses, the Nazgûl perish, and his armies are thrown into such disarray that Aragorn 's forces emerge victorious. Aragorn is crowned Elessar, King of Arnor and Gondor, and weds Arwen, daughter of Elrond. The four hobbits make their way back to the Shire, only to find out that it has been taken over by men led by Sharkey. The hobbits raise a rebellion and liberate the Shire, though 19 hobbits are killed and 30 wounded. Sharkey turns out to be Saruman. Frodo stops the hobbits from killing the wizard after Saruman attempts to stab Frodo, but Gríma turns on Saruman and kills him in front of Bag End, Frodo 's home. He is slain in turn by hobbit archers, and the War of the Ring comes to its true end on Frodo 's very doorstep. Merry and Pippin are celebrated as heroes. Sam marries Rosie Cotton and uses his gifts from Galadriel to help heal the Shire. But Frodo is still wounded in body and spirit, having borne the Ring for so long. A few years later, in the company of Bilbo and Gandalf, Frodo sails from the Grey Havens west over the Sea to the Undying Lands to find peace. In the appendices, Sam gives his daughter Elanor the Red Book of Westmarch, which contains the story of Bilbo 's adventures and the War of the Ring as witnessed by the hobbits. Sam is then said to have crossed west over the Sea himself, the last of the Ring - bearers. Some characters in The Lord of the Rings are unequivocal protagonists, and others are absolute antagonists. 
However, despite criticism that the book 's characters "are all either black or white '', some of the ' good ' characters have darker sides that feature in the story, and likewise some of the villains have "good impulses ''. Therefore, the categorization of characters as either ' protagonists ' or ' antagonists ' below indicates their general role in the story. The Lord of the Rings started as a sequel to J.R.R. Tolkien 's work The Hobbit, published in 1937. The popularity of The Hobbit had led George Allen & Unwin, the publishers, to request a sequel. Tolkien warned them that he wrote quite slowly, and responded with several stories he had already developed. Having rejected his contemporary drafts for The Silmarillion, putting on hold Roverandom, and accepting Farmer Giles of Ham, Allen & Unwin thought more stories about hobbits would be popular. So at the age of 45, Tolkien began writing the story that would become The Lord of the Rings. The story would not be finished until 12 years later, in 1949, and would not be fully published until 1955, when Tolkien was 63 years old. Persuaded by his publishers, he started "a new Hobbit '' in December 1937. After several false starts, the story of the One Ring emerged. The idea for the first chapter ("A Long - Expected Party '') arrived fully formed, although the reasons behind Bilbo 's disappearance, the significance of the Ring, and the title The Lord of the Rings did not arrive until the spring of 1938. Originally, he planned to write a story in which Bilbo had used up all his treasure and was looking for another adventure to gain more; however, he remembered the Ring and its powers and thought that would be a better focus for the new work. As the story progressed, he also brought in elements from The Silmarillion mythology. Writing was slow, because Tolkien had a full - time academic position, and needed to earn further money as a university examiner. 
Tolkien abandoned The Lord of the Rings during most of 1943 and only restarted it in April 1944, as a serial for his son Christopher Tolkien, who was sent chapters as they were written while he was serving in South Africa with the Royal Air Force. Tolkien made another concerted effort in 1946, and showed the manuscript to his publishers in 1947. The story was effectively finished the next year, but Tolkien did not complete the revision of earlier parts of the work until 1949. The original manuscripts, which total 9,250 pages, now reside in the J.R.R. Tolkien Collection at Marquette University. The influence of the Welsh language, which Tolkien had learned, is summarized in his essay English and Welsh: "If I may once more refer to my work, The Lord of the Rings, in evidence: the names of persons and places in this story were mainly composed on patterns deliberately modelled on those of Welsh (closely similar but not identical). This element in the tale has given perhaps more pleasure to more readers than anything else in it. '' The Lord of the Rings developed as a personal exploration by Tolkien of his interests in philology, religion (particularly Roman Catholicism), fairy tales, Norse and general Germanic mythology, and also Celtic, Slavic, Persian, Greek, and Finnish mythology. Tolkien acknowledged, and external critics have verified, the influences of George MacDonald and William Morris and the Anglo - Saxon poem Beowulf. The question of a direct influence of Wagner 's The Nibelung 's Ring on Tolkien 's work is debated by critics. Tolkien included neither explicit religion nor cult in his work. Rather, the themes, moral philosophy, and cosmology of The Lord of the Rings reflect his Catholic worldview. In one of his letters, Tolkien states, "The Lord of the Rings is of course a fundamentally religious and Catholic work; unconsciously so at first, but consciously in the revision. 
That is why I have not put in, or have cut out, practically all references to anything like ' religion ', to cults or practices, in the imaginary world. For the religious element is absorbed into the story and the symbolism. '' Some locations and characters were inspired by Tolkien 's childhood in Birmingham, where he first lived near Sarehole Mill, and later near Edgbaston Reservoir. There are also hints of the Black Country, which is within easy reach of north west Edgbaston. This shows in such names as "Underhill '', and the description of Saruman 's industrialization of Isengard and The Shire. It has also been suggested that The Shire and its surroundings were based on the countryside around Stonyhurst College in Lancashire where Tolkien frequently stayed during the 1940s. The work was influenced by the effects of his military service during World War I, to the point that Frodo has been "diagnosed '' as suffering from Posttraumatic Stress Disorder, or "shell - shock '', which was first diagnosed under that name at the Battle of the Somme, at which Tolkien served. A dispute with his publisher, George Allen & Unwin, led to the book being offered to Collins in 1950. Tolkien intended The Silmarillion (itself largely unrevised at this point) to be published along with The Lord of the Rings, but A&U were unwilling to do this. After Milton Waldman, his contact at Collins, expressed the belief that The Lord of the Rings itself "urgently wanted cutting '', Tolkien eventually demanded that they publish the book in 1952. Collins did not; and so Tolkien wrote to Allen and Unwin, saying, "I would gladly consider the publication of any part of the stuff '', fearing his work would never see the light of day. 
For publication, the book was divided into three volumes to minimize any potential financial loss due to the high cost of type - setting and modest anticipated sales: The Fellowship of the Ring (Books I and II), The Two Towers (Books III and IV), and The Return of the King (Books V and VI plus six appendices). Delays in producing appendices, maps and especially an index led to the volumes being published later than originally hoped -- on 29 July 1954, on 11 November 1954 and on 20 October 1955 respectively in the United Kingdom. In the United States, Houghton Mifflin published The Fellowship of the Ring on 21 October 1954, The Two Towers on 21 April 1955, and The Return of the King on 5 January 1956. The Return of the King was especially delayed due to Tolkien revising the ending and preparing appendices (some of which had to be left out because of space constraints). Tolkien did not like the title The Return of the King, believing it gave away too much of the storyline, but deferred to his publisher 's preference. He suggested the title The Two Towers in a deliberately ambiguous attempt to link the unconnected books III and IV, and as such the eponymous towers could be either Orthanc and Barad - dûr, or Minas Tirith and Barad - dûr, or Orthanc and Cirith Ungol. Tolkien was initially opposed to titles being given to each two - book volume, preferring instead the use of book titles: e.g. The Lord of the Rings: Vol. 1, The Ring Sets Out and The Ring Goes South; Vol. 2, The Treason of Isengard and The Ring Goes East; Vol. 3, The War of the Ring and The End of the Third Age. However these individual book titles were later scrapped, and after pressure from his publishers, Tolkien initially suggested the titles: Vol. 1, The Shadow Grows; Vol. 2, The Ring in the Shadow; Vol. 3, The War of the Ring or The Return of the King. 
The books were published under a profit - sharing arrangement, whereby Tolkien would not receive an advance or royalties until the books had broken even, after which he would take a large share of the profits. It has ultimately become one of the best - selling novels ever written, with over 150 million copies sold. The book was published in the UK by Allen & Unwin until 1990 when the publisher and its assets were acquired by HarperCollins. In the early 1960s Donald A. Wollheim, science fiction editor of the paperback publisher Ace Books, claimed that The Lord of the Rings was not protected in the United States under American copyright law because Houghton Mifflin, the US hardcover publisher, had neglected to copyright the work in the United States. Then, in 1965, Ace Books proceeded to publish an edition, unauthorized by Tolkien and without paying royalties to him. Tolkien took issue with this and quickly notified his fans of this objection. Grass - roots pressure from these fans became so great that Ace Books withdrew their edition and made a nominal payment to Tolkien. Authorized editions followed from Ballantine Books and Houghton Mifflin to tremendous commercial success. Tolkien undertook various textual revisions to produce a version of the book that would be published with his consent and establish an unquestioned US copyright. This text became the Second Edition of The Lord of the Rings, published in 1965. The first Ballantine paperback edition was printed in October that year, and sold a quarter of a million copies within ten months. On September 4, 1966, the novel debuted on New York Times ' Paperback Bestsellers list as number three, and was number one by December 4, a position it held for eight weeks. Houghton Mifflin editions after 1994 consolidate variant revisions by Tolkien, and corrections supervised by Christopher Tolkien, which resulted, after some initial glitches, in a computer - based unified text. 
In 2004, for the 50th Anniversary Edition, Wayne G. Hammond and Christina Scull, under supervision from Christopher Tolkien, studied and revised the text to eliminate as many errors and inconsistencies as possible, some of which had been introduced by well - meaning compositors of the first printing in 1954, and never been corrected. The 2005 edition of the book contained further corrections noticed by the editors and submitted by readers. Further corrections were added to the 60th Anniversary Edition in 2014. Several editions, notably the 50th Anniversary Edition, combine all three books into one volume, with the result that pagination varies widely over the various editions. From 1988 to 1992 Christopher Tolkien published the surviving drafts of The Lord of The Rings, chronicling and illuminating with commentary the stages of the text 's development, in volumes 6 -- 9 of his History of Middle - earth series. The four volumes carry the titles The Return of the Shadow, The Treason of Isengard, The War of the Ring, and Sauron Defeated. The novel has been translated, with various degrees of success, into at least 56 languages. Tolkien, an expert in philology, examined many of these translations, and made comments on each that reflect both the translation process and his work. As he was unhappy with some choices made by early translators, such as the Swedish translation by Åke Ohlmarks, Tolkien wrote a "Guide to the Names in The Lord of the Rings '' (1967). Because The Lord of the Rings purports to be a translation of the fictitious Red Book of Westmarch, with the English language representing the Westron of the "original '', Tolkien suggested that translators attempt to capture the interplay between English and the invented nomenclature of the English work, and gave several examples along with general guidance. 
While early reviews for The Lord of the Rings were mixed, reviews in various media have been, on the whole, highly positive and acknowledge Tolkien 's literary achievement as a significant one. The initial review in the Sunday Telegraph described it as "among the greatest works of imaginative fiction of the twentieth century ''. The Sunday Times echoed this sentiment, stating that "the English - speaking world is divided into those who have read The Lord of the Rings and The Hobbit and those who are going to read them ''. The New York Herald Tribune also seemed to have an idea of how popular the books would become, writing in its review that they were "destined to outlast our time ''. W.H. Auden, an admirer of Tolkien 's writings, regarded The Lord of the Rings as a "masterpiece '', further stating that in some cases it outdid the achievement of John Milton 's Paradise Lost. New York Times reviewer Judith Shulevitz criticized the "pedantry '' of Tolkien 's literary style, saying that he "formulated a high - minded belief in the importance of his mission as a literary preservationist, which turns out to be death to literature itself ''. Critic Richard Jenkyns, writing in The New Republic, criticized the work for a lack of psychological depth. Both the characters and the work itself are, according to Jenkyns, "anemic, and lacking in fibre ''. Even within Tolkien 's literary group, The Inklings, reviews were mixed. Hugo Dyson complained loudly at its readings. However, another Inkling, C.S. Lewis, had very different feelings, writing, "here are beauties which pierce like swords or burn like cold iron. Here is a book which will break your heart. '' Despite these reviews and its lack of paperback printing until the 1960s, The Lord of the Rings initially sold well in hardback. In 1957, The Lord of the Rings was awarded the International Fantasy Award. 
Despite its numerous detractors, the publication of the Ace Books and Ballantine paperbacks helped The Lord of the Rings become immensely popular in the United States in the 1960s. The book has remained so ever since, ranking as one of the most popular works of fiction of the twentieth century, judged by both sales and reader surveys. In the 2003 "Big Read '' survey conducted in Britain by the BBC, The Lord of the Rings was found to be the "Nation 's best - loved book ''. In similar 2004 polls, both Germany and Australia also found The Lord of the Rings to be their favourite book. In a 1999 poll of Amazon.com customers, The Lord of the Rings was judged to be their favourite "book of the millennium ''. The Lord of the Rings was awarded the Prometheus Hall of Fame Award in 2009. Although The Lord of the Rings was published in the 1950s, Tolkien insisted that the One Ring was not an allegory for the atomic bomb, nor were his works a strict allegory of any kind, but were open to interpretation as the reader saw fit. A few critics have found what they consider to be racial elements in the story, generally based upon their views of how Tolkien 's imagery depicts good and evil, characters ' race (e.g. Elf, Dwarf, Hobbit, Southron, Númenórean, Orc), and that a character 's race is seen as determining their behaviour. Counter-arguments note that race - focused critiques often omit relevant textual evidence to the contrary, cite imagery from adaptations rather than the work itself, ignore the absence of evidence of racist attitudes or events in the author 's personal life, and claim that the perception of racism is itself a marginal view. The opinions that pit races against one another are likely to reflect Tolkien 's critique of war rather than a racist perspective. In The Two Towers, the character Samwise sees a fallen foe and considers for a moment the humanity of this fallen Southron who, just moments before, is shown to be a man of color. 
Discussing this scene in the director 's commentary, Peter Jackson argues that Tolkien is not projecting negative sentiments towards the individual soldier because of his race, but towards the evil authority driving him. These sentiments, Jackson argues, were derived from Tolkien 's experience in the Great War and found their way into his writings to show the evils of war itself, not of other races. Critics have also seen social class rather than race as being the determining factor in the portrayal of good and evil. Commentators such as science fiction author David Brin have interpreted the work to hold unquestioning devotion to a traditional elitist social structure. In his essay "Epic Pooh '', science fiction and fantasy author Michael Moorcock critiques the world - view displayed by the book as deeply conservative, in both the "paternalism '' of the narrative voice and the power - structures in the narrative. Tom Shippey cites the origin of this portrayal of evil as a reflection of the prejudices of European middle - classes during the inter-war years towards the industrial working class. Other observers have cited Christian, specifically Catholic, themes in The Lord of the Rings. The book has been read as fitting the model of Joseph Campbell 's "monomyth ''. The Lord of the Rings has been adapted for film, radio and stage. The book has been adapted for radio four times. In 1955 and 1956, the BBC broadcast The Lord of the Rings, a 13 - part radio adaptation of the story. In the 1960s radio station WBAI produced a short radio adaptation. A 1979 dramatization of The Lord of the Rings was broadcast in the United States and subsequently issued on tape and CD. In 1981, the BBC broadcast The Lord of the Rings, a new dramatization in 26 half - hour instalments. This dramatization of The Lord of the Rings has subsequently been made available on both tape and CD by the BBC and other publishers. 
For this purpose it is generally edited into 13 one - hour episodes. Following J.R.R. Tolkien 's sale of the film rights for The Lord of the Rings to United Artists in 1969, rock band The Beatles considered a corresponding film project and approached Stanley Kubrick as a potential director; however, Kubrick turned down the offer, explaining to John Lennon that he thought the novel could not be adapted into a film due to its immensity. The eventual director of the film adaptation, Peter Jackson, further explained that a major hindrance to the project 's progression was Tolkien 's opposition to the involvement of the Beatles. Two film adaptations of the book have been made. The first was J.R.R. Tolkien 's The Lord of the Rings (1978), by animator Ralph Bakshi, the first part of what was originally intended to be a two - part adaptation of the story; it covers The Fellowship of the Ring and part of The Two Towers. A three - issue comic book version of the movie was also published in Europe (but not printed in English), with illustrations by Luis Bermejo. When Bakshi 's investors shied away from financing the second film that would complete the story, the remainder of the story was covered in an animated television special by Rankin - Bass. Stylistically, the two segments are very different. The second and more commercially successful adaptation was Peter Jackson 's live action The Lord of the Rings film trilogy, produced by New Line Cinema and released in three instalments as The Lord of the Rings: The Fellowship of the Ring (2001), The Lord of the Rings: The Two Towers (2002), and The Lord of the Rings: The Return of the King (2003). All three parts won multiple Academy Awards, including consecutive Best Picture nominations. 
The final instalment of this trilogy was the second film to break the one - billion - dollar barrier and won a total of 11 Oscars (something only two other films in history, Ben - Hur and Titanic, have accomplished), including Best Picture, Best Director and Best Adapted Screenplay. The Hunt for Gollum, a fan film based on elements of the appendices to The Lord of the Rings, was released on the internet in May 2009 and has been covered in major media. Born of Hope, written by Paula DiSante, directed by Kate Madison, and released in December 2009, is a fan film based upon the appendices of The Lord of the Rings. On 13 November 2017, it was announced that Amazon had acquired the global television rights to The Lord of the Rings, committing to a multi-season television series. The series will not be a direct adaptation of the books, but will instead introduce new stories that are set before The Fellowship of the Ring. Amazon said the deal included potential for spin - off series as well. The press release referred to "previously unexplored stories based on J.R.R. Tolkien 's original writings ''. Amazon will be the producer in conjunction with the Tolkien Estate and the Tolkien Trust, HarperCollins and New Line Cinema. According to a 2018 report, it will be the most expensive TV show ever produced. In 1990, Recorded Books published an audio version of The Lord of the Rings, with British actor Rob Inglis -- who had previously starred in his own one - man stage productions of The Hobbit and The Lord of the Rings -- reading. A large - scale musical theatre adaptation, The Lord of the Rings was first staged in Toronto, Ontario, Canada in 2006 and opened in London in May 2007. The enormous popularity of Tolkien 's work expanded the demand for fantasy fiction. Largely thanks to The Lord of the Rings, the genre flowered throughout the 1960s, and enjoys popularity to the present day. 
The opus has spawned many imitators, such as The Sword of Shannara, which Lin Carter called "the single most cold - blooded, complete rip - off of another book that I have ever read ''. Dungeons & Dragons, which popularized the role - playing game (RPG) genre in the 1970s, features many races found in The Lord of the Rings, most notably halflings (another term for hobbits), elves (who are distinct from dark elves, following Tolkien 's example), dwarves, half - elves, orcs, and dragons. However, Gary Gygax, lead designer of the game, maintained that he was influenced very little by The Lord of the Rings, stating that he included these elements as a marketing move to draw on the popularity the work enjoyed at the time he was developing the game. Because D&D has gone on to influence many popular role - playing video games, the influence of The Lord of the Rings extends to many of them as well, with titles such as Dragon Quest, the Ultima series, EverQuest, the Warcraft series, and the Elder Scrolls series of games as well as video games set in Middle - earth itself. Research also suggests that some consumers of fantasy games derive their motivation from trying to create an epic fantasy narrative which is influenced by The Lord of the Rings. In 1965, songwriter Donald Swann, who was best known for his collaboration with Michael Flanders as Flanders & Swann, set six poems from The Lord of the Rings and one from The Adventures of Tom Bombadil ("Errantry '') to music. When Swann met with Tolkien to play the songs for his approval, Tolkien suggested for "Namárië '' (Galadriel 's lament) a setting reminiscent of plain chant, which Swann accepted. The songs were published in 1967 as The Road Goes Ever On: A Song Cycle, and a recording of the songs performed by singer William Elvin with Swann on piano was issued that same year by Caedmon Records as Poems and Songs of Middle Earth. 
Rock bands of the 1970s were musically and lyrically inspired by the fantasy embracing counter-culture of the time; British 70s rock band Led Zeppelin recorded several songs that contain explicit references to The Lord of the Rings, such as mentioning Gollum in "Ramble On '', the Misty Mountains in "Misty Mountain Hop '', and Ringwraiths in "The Battle of Evermore ''. In 1970, the Swedish musician Bo Hansson released an instrumental concept album based on the book titled Sagan om ringen (translated as "The Saga of the Ring '', which was the title of the Swedish translation of The Lord of the Rings at the time). The album was subsequently released internationally as Music Inspired by Lord of the Rings in 1972. The songs "Rivendell '' and "The Necromancer '' by the progressive rock band Rush were inspired by Tolkien. Styx also paid homage to Tolkien on their album Pieces of Eight with the song "Lords of the Ring '', while Black Sabbath 's song, "The Wizard '', which appeared on their debut album, was influenced by Tolkien 's hero, Gandalf. The heavy metal bands Cirith Ungol and Amon Amarth took their names from landmarks in Mordor. Progressive rock group Camel paid homage to the text in their lengthy composition "Nimrodel / The Procession / The White Rider '', and progressive rock band Barclay James Harvest was inspired by the character Galadriel to write a song by that name, and used "Bombadil '', the name of another character, as a pseudonym under which their 1972 single "Breathless '' / "When the City Sleeps '' was released; there are other references scattered through the BJH oeuvre. Later, from the 1980s to the present day, many heavy metal acts have been influenced by Tolkien. Blind Guardian has written many songs relating to Middle - earth, including the full concept album Nightfall in Middle Earth. Almost the entire discography of Battlelore are Tolkien - themed. 
Summoning 's music is based upon Tolkien, and the band holds the distinction of being the only artist to have crafted a song entirely in the Black Speech of Mordor. Gorgoroth and Amon Amarth take their names from an area of Mordor, and Burzum take their name from the Black Speech of Mordor. The Finnish metal band Nightwish and the Norwegian metal band Tristania have also incorporated many Tolkien references into their music. American heavy metal band Megadeth released two songs titled "This Day We Fight! '' and "How the Story Ends '', which were both inspired by The Lord of the Rings. German folk metal band Eichenschild is named for Thorin Oakenshield, a character in The Hobbit, and naturally has a number of Tolkien - themed songs. They are not to be confused with the ' 70s folk rock band Thorin Eichenschild. In 1988, Dutch composer and trombonist Johan de Meij completed his Symphony No. 1 "The Lord of the Rings '', which encompassed 5 movements, titled "Gandalf '', "Lothlórien '', "Gollum '', "Journey in the Dark '', and "Hobbits ''. In 1989 the symphony received the Sudler Composition Award, given biennially for best wind band composition. The Danish Tolkien Ensemble have released a number of albums that feature the complete poems and songs of The Lord of the Rings set to music, with some featuring recitation by Christopher Lee. Enya wrote an instrumental piece called "Lothlórien '' in 1991, and composed two songs for the film The Lord of the Rings: The Fellowship of the Ring -- "May It Be '' (sung in English and Quenya) and "Aníron '' (sung in Sindarin). The Lord of the Rings has had a profound and wide - ranging impact on popular culture, beginning with its publication in the 1950s, but especially throughout the 1960s and 1970s, during which time young people embraced it as a countercultural saga. "Frodo Lives! '' and "Gandalf for President '' were two phrases popular amongst United States Tolkien fans during this time. 
Parodies like the Harvard Lampoon 's Bored of the Rings, the VeggieTales episode "Lord of the Beans '', the South Park episode "The Return of the Fellowship of the Ring to the Two Towers '', the Futurama film Bender 's Game, The Adventures of Jimmy Neutron: Boy Genius episode "Lights! Camera! Danger! '', The Big Bang Theory episode "The Precious Fragmentation '', and the American Dad! episode "The Return of the Bling '' are testimony to the work 's continual presence in popular culture. In 1969, Tolkien sold the merchandising rights to The Lord of The Rings (and The Hobbit) to United Artists under an agreement stipulating a lump sum payment of £ 10,000 plus a 7.5 % royalty after costs, payable to Allen & Unwin and the author. In 1976, three years after the author 's death, United Artists sold the rights to Saul Zaentz Company, who now trade as Tolkien Enterprises. Since then all "authorized '' merchandise has been signed - off by Tolkien Enterprises, although the intellectual property rights of the specific likenesses of characters and other imagery from various adaptations are generally held by the adaptors. Outside any commercial exploitation from adaptations, from the late 1960s onwards there has been an increasing variety of original licensed merchandise, from posters and calendars created by illustrators such as Pauline Baynes and the Brothers Hildebrandt, to figurines and miniatures to computer, video, tabletop and role - playing games. Recent examples include the Spiel des Jahres award winning (for best use of literature in a game) board game The Lord of the Rings by Reiner Knizia and the Golden Joystick award - winning massively multiplayer online role - playing game, The Lord of the Rings Online: Shadows of Angmar by Turbine, Inc. 
The Lord of the Rings has been mentioned in numerous songs including The Ballad of Bilbo Baggins by Leonard Nimoy and Led Zeppelin 's Misty Mountain Hop, Over the Hills and Far Away, Ramble On, and The Battle of Evermore. Genesis ' song "Stagnation '' (from Trespass, 1970) was about Gollum, and Argent included the song "Lothlorien '' on the 1971 album Ring of Hands. Steve Peregrin Took (born Stephen Ross Porter) of British rock band T. Rex took his name from the hobbit Peregrin Took (better known as Pippin). Took later recorded under the pseudonym ' Shagrat the Vagrant ', before forming a band called Shagrat in 1970.
whats the difference between a coach and a bus
Coach (bus) - wikipedia A coach (also motor coach) is a type of bus used for conveying passengers. In contrast to transit buses that are typically used within a single metropolitan region, coaches are used for longer - distance bus service. While many are used for intercity -- or even international -- bus service, other coaches are used for private charter for various purposes. Deriving their name from horse - drawn carriages and stagecoaches that carried passengers, luggage, and mail, modern motor coaches are almost always high - floor buses, with a separate luggage hold mounted below the passenger compartment. In contrast to transit buses, motor coaches typically feature forward - facing seating, with no provision for standing. Other accommodations may include on - board restrooms, televisions, and overhead luggage space. Horse - drawn chariots and carriages ("coaches '') were used by the wealthy and powerful where the roads were of a high enough standard from possibly 3000 BC. In Hungary, during the reign of King Matthias Corvinus in the 15th century, the wheelwrights of Kocs began to build a horse - drawn vehicle with steel - spring suspension. This "cart of Kocs '' as the Hungarians called it (Hungarian: kocsi szekér) soon became popular all over Europe. The imperial post service employed the first horse - drawn mail coaches in Europe since Roman times in 1650, and as they started in the town of Kocs, the use of these mail coaches gave rise to the term "coach ''. Stagecoaches (drawn by horses) were used for transport between cities from about 1500 in Great Britain until displaced by the arrival of the railways. One of the earliest motorised vehicles was the charabanc, which was used for short journeys and excursions until the early years of the 20th century. 
The first "motor coaches '' were purchased in the early 20th century by operators of those horse - drawn vehicles, such as Royal Blue Coach Services, who purchased their first charabanc in 1913 and were running 72 coaches by 1926. Coaches, as they hold passengers for significant periods of time on long journeys, are designed for comfort. They vary considerably in quality from country to country and within countries. Higher specification vehicles include luxury seats and air conditioning. Coaches typically have only a single, narrow door, but sometimes they have two doors, as an increased loading time is acceptable due to infrequent stops. Coaches, like buses, may be fully built by integrated manufacturers, or a separate chassis consisting of only an engine, wheels and basic frame may be delivered to a coachwork factory for a body to be added. A minority of coaches are built with monocoque bodies without a chassis frame. Integrated manufacturers (most of whom also supply chassis) include Autosan, Scania, Fuso, and Alexander Dennis. Major coachwork providers (some of whom can build their own chassis) include Van Hool, Neoplan, Marcopolo, Irizar, MCI, Prevost, and Designline. A representative selection of vehicles currently (or recently) in use in different parts of the world:
- An Austral Pacific bodied Scania K113TRBL 14.5 m (47 ft 7 in) quad - axle coach in Canberra, Australia
- Greyhound Lines MCI 102DL3
- A 56 - passenger Prevost coach
- Marcopolo S.A. luxury coaches at Valladolid, Mexico
- Divo coachwork from Hispano Carrocera (Tata Motors) on a Mercedes - Benz chassis
- Victory Liner Inc. Provincial Coach (made by Del Monte Motor Works, Inc. on a MAN 16.290 HOCL chassis) (Philippines)
- Van Hool sleeper coach with 14 bunks, lounge area and galley
- A double - decker Neoplan Jumbocruiser
- GOLAZ - 5291 Cruise in Russia
- Volzhanin - 5285 from Russia
- Intercity coach Autosan Lider 9 eco, also used as a school coach
- Millennium Luxury Coaches built on a Prevost chassis, used for leisure
- Double - decker Neoplans operating the Oxford to London coach route
- A Van Hool TD925 Megabus in New York City
- LAZ - 699 in Lviv, Ukraine
- MAZ - 251 in Minsk, Belarus
- Hino S'elega in Tokyo, Japan
A selection of vehicles in use in different parts of the world in the past:
- A Leyland Tiger used by Southdown Motor Services in Britain
- Bedford 1961 coach owned by MacBraynes Bus
- Bedford VAL
- An early 1980s Mitsubishi Fuso coach with FHI body
- Short bodied Dennis 1931 King Alfred
- Greyhound MCI MC 6 coach built by Motor Coach Industries
- Zis - 127 in Tallinn
how many of charles dickens books were made into films
Category: films based on works by Charles Dickens - wikipedia This category has the following 7 subcategories, out of 7 total. The following 34 pages are in this category, out of 34 total. This list may not reflect recent changes.
when does jane tell grayson that she is deb
Drop Dead diva (season 3) - wikipedia Drop Dead Diva season three premiered on June 19, 2011 and concluded on September 25, 2011, on Lifetime. Season three aired on Sundays at 9: 00 pm ET and consisted of 13 episodes. Sally Field had a guest appearance as the judge for one episode. Grayson nearly marries Vanessa, but she backs out on the day of the wedding. Jane has two relationships during the season that indicate she is trying to move on from Grayson. The first, with Dr. Bill Kendall (Ben Shenkman), ends when he reveals that Jane is one of the several women he is dating and that he is not ready to get serious with any one of them. The second, toward the end of the season, is with a fun - loving new judge named Owen French (Lex Medlin). Kim and Parker 's relationship was short - lived after Kim realized that Parker just ignored all of his ex-partners. At the end of Season 2, Jane shares with Kim that she saw Harrison and Parker kissing in Parker 's office. Kim is outraged and goes to Parker 's house to tell him that they 're over. Harrison fires Kim from the firm at the end of Season 2, but she returns in mid-Season 3. In early Season 3, Parker still tries to charm Kim and begs her to come back to him. She refuses, and keeps moving on with her job and maintaining a platonic relationship with her boss. They end up together in a later episode, seemingly giving their relationship another try. But when Parker hires his former flame Elisa (Brandy) as a temp, Kim becomes jealous, putting Jane in a tough situation, knowing that Elisa has a son with Parker and Parker does n't know anything about it. Stacy shares with Jane that she 's afraid Fred thinks that the kissing IS the relationship, and they have not progressed to intercourse. Stacy keeps waking up early to run, claiming she runs a lot when she 's frustrated. Jane then gives Fred advice, because Jane knows everything Stacy wants in a man. 
Afterward, it appears the relationship has been consummated and is going well, because Stacy claims she "may never run again. '' Stacy finally gets a part on a TV show as a main star. She claims that she has to keep her relationship with Fred a "secret '' for publicity purposes, but eventually she ends up cheating on Fred with her co-star, Brian (Robert Hoffman), and Fred witnesses the two sharing a passionate kiss. Stacy is unable to choose between Brian and Fred; despite Jane 's advice for her to talk to Fred, the choice is made for Stacy when Fred eventually confronts her and angrily leaves. Stacy becomes distraught and upset with Jane, believing Jane told Fred of her infidelity. Fred is later shown having drinks with Grayson and Teri, who comfort him. Stacy 's behavior ends up annoying not only Fred, but also Jane / Deb and Teri when she becomes a diva and starts treating her friends like dirt, leading to a failed intervention. Brian steals her commercial deal and sleeps with her assistant, resulting in Stacy punching out Brian and getting arrested for assault. She is later cleared of the charges, but kisses Grayson after he represented her. Although the two quickly realize it was a mistake, Stacy is unaware that Jane saw it. Jane also tells Fred at the airport about what happened, thus devastating Fred who was ready to forgive Stacy and propose to her.
is season 7 of ahs the last season
List of American Horror Story episodes - wikipedia American Horror Story (often abbreviated AHS) is an American anthology horror television series created and produced by Ryan Murphy and Brad Falchuk, which premiered on October 5, 2011 on FX. Each season of the anthology series is conceived as a mostly self - contained miniseries, following a disparate set of characters and settings, and a storyline with its own "beginning, middle, and end. '' In October 2016, the series was renewed for a seventh season titled Cult, which premiered on September 5, 2017 on FX. In January 2017, the series was renewed for an eighth and ninth season. As of October 3, 2017, 79 episodes of American Horror Story have aired.
what is the most played sport in america
Sports in the United States - Wikipedia Sports in the United States are an important part of American culture. Based on revenue, the four major professional sports leagues in the United States are Major League Baseball (MLB), the National Basketball Association (NBA), the National Football League (NFL), and the National Hockey League (NHL). The market for professional sports in the United States is roughly $69 billion, about 50 % larger than that of all of Europe, the Middle East, and Africa combined. Major League Soccer (MLS) is sometimes included in a "top five '' of the country 's leagues. All four enjoy wide - ranging domestic media coverage and are considered the preeminent leagues in their respective sports in the world, although American football does not have a substantial following in other nations. Three of those leagues have teams that represent Canadian cities, and all four are the most financially lucrative sports leagues of their sport. American football is the most popular sport in the United States, followed by basketball, baseball, and soccer. Tennis, golf, wrestling, auto racing, arena football, field lacrosse, box lacrosse and volleyball are also popular sports in the country. Professional teams in all major sports in the United States operate as franchises within a league, meaning that a team may move to a different city if the team 's owners believe there would be a financial benefit, but franchise moves are usually subject to some form of league - level approval. All major sports leagues use a similar type of regular - season schedule with a playoff tournament after the regular season ends. In addition to the major league -- level organizations, several sports also have professional minor leagues, active in smaller cities across the country. As in Canada and Australia, sports leagues in the United States do not practice promotion and relegation, unlike many sports leagues in Europe. 
Sports are particularly associated with education in the United States, with most high schools and universities having organized sports, and this is a unique sporting footprint for the U.S. College sports competitions play an important role in the American sporting culture, and college basketball and college football are as popular as professional sports in some parts of the country. The major sanctioning body for college sports is the National Collegiate Athletic Association (NCAA). Unlike most other nations, the United States government does not provide funding for sports nor for the United States Olympic Committee. The history of sports in the United States shows that most sports evolved out of European practices. However, basketball, volleyball, skateboarding, and snowboarding are American inventions, some of which have become popular in other countries and worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate Western contact. In Chesapeake society (that is, colonial Virginia and Maryland), sports occupied a great deal of attention at every social level, starting at the top. In England, hunting was severely restricted to landowners. In America, game was more than plentiful. Everyone -- including servants and slaves -- could and did hunt, so there was no social distinction to be had. In 1691, Sir Francis Nicholson, the governor of Virginia, organized competitions for the "better sort of Virginians onely who are Batchelors, '' and he offered prizes "to be shot for, wrastled, played at backswords, & Run for by Horse and foott. '' The United States Olympic Committee (USOC) is the National Olympic Committee for the United States. U.S. athletes have won a total of 2,522 medals (1,022 of them being gold) at the Summer Olympic Games and another 282 at the Winter Olympic Games. Most medals have been won in athletics (track and field) (801, 32 %) and swimming (553, 22 %). 
American swimmer Michael Phelps is the most decorated Olympic athlete of all time, with 28 Olympic medals, 23 of them gold. The United States has sent athletes to every celebration of the modern Olympic Games except the 1980 Summer Olympics hosted by the Soviet Union in Moscow, which it boycotted because of the Soviet invasion of Afghanistan. The United States has won gold at every Games at which it has competed, winning more gold and overall medals than any other country in the Summer Games, and also has the second-most gold and overall medals at the Winter Games, trailing only Norway. From the mid-20th century to the late 1980s, the United States mainly competed with the Soviet Union at Summer Games and with the Soviet Union, Norway, and East Germany at the Winter Games. However, after the dissolution of the Soviet Union, it now primarily contends with China and Great Britain at the Summer Games for both the overall medal count and the gold medal count and with Norway and Canada at the Winter Games for the overall medal count. The United States has topped the gold medal count at 17 Summer Olympics and one Winter Olympics: 1932 in Lake Placid. The United States has set multiple records for the number of medals won: the most medals (239) of any country at a single Summer Olympics, the most gold medals (83) of any country at a single Summer Olympics and the most medals (37) of any country at a single Winter Olympics. The United States hosted both Summer and Winter Games in 1932, and has hosted more Games than any other country -- eight times, four times each for the Summer and Winter Games. Los Angeles will host the Olympic Games for a third time in 2028, which will mark the ninth time the US has hosted the Games. Motor sports are widely popular in the United States but Americans generally show little interest in the major international competitions, such as the Formula One Grand Prix series and MotoGP, preferring home - grown racing series. 
However, some Americans have achieved great success in these international series, such as Mario Andretti and Kenny Roberts. Americans, like the rest of the world, initially began using public streets to host automobile races, but these venues were often unsafe to the public as they offered relatively little crowd control. Promoters and drivers in the United States discovered that horse racing tracks could provide better conditions for drivers and spectators than public streets. The result has been a long - standing popularity of oval track racing, which is not used in the rest of the world, while road racing has generally waned. However, an extensive though illegal street racing culture still persists. Historically, open wheel racing was the most popular form of U.S. motorsport nationwide. However, an infamously acrimonious split (often referred to as "The Split '') in 1994 between the primary series, CART (later known as Champ Car), and the owner of the Indianapolis Motor Speedway (the site of the Indy 500), Tony George, led to the formation of the Indy Racing League, now known as INDYCAR, which launched the rival IndyCar Series in 1996. From that point on, the popularity of open wheel racing in the U.S. declined dramatically. The feud was settled in 2008 with an agreement to merge the two series under the IndyCar banner, but enormous damage had already been done to the sport. Post-merger, IndyCar has continued, with slight viewership gains per year. However, the only post-Split IndyCar race that still enjoys widespread popularity among the general public is the Indianapolis 500. The CART - IRL Split coincided with an enormous expansion of stock car racing, governed by NASCAR, from its past as a mostly regional circuit mainly followed in the Southern United States to a truly national sport. 
NASCAR 's audience peaked in the mid-2000s, and has declined considerably since the implementation of the Chase for the Cup in 2004, though it continues to draw around 2 -- 4 million viewers per race. Among NASCAR 's popular former drivers are Jeff Gordon, Dale Earnhardt, Matt Kenseth, Tony Stewart, Dale Earnhardt Jr., and Richard Petty. Among NASCAR 's popular active drivers are Jimmie Johnson, Kurt Busch, Kyle Busch, Chase Elliott, Ryan Blaney, and Kyle Larson. NASCAR 's most popular race is the Daytona 500, the opening race of the season, held each year at Daytona Beach, Florida in February. Among the better known sports car races in the United States are the 24 Hours of Daytona, 12 Hours of Sebring, and Petit Le Mans, which have featured in the World Sportscar Championship, IMSA GT Championship, Intercontinental Le Mans Cup, FIA World Endurance Championship, American Le Mans Series, Rolex Sports Car Series and currently the United SportsCar Championship. Another popular form of motorsport in the United States is the indigenous sport of drag racing. The largest drag racing organization is the National Hot Rod Association. Several other motorsports enjoy varying degrees of popularity in the United States: short track motor racing, motocross, monster truck competitions (including the popular Monster Jam circuit), demolition derby, figure 8 racing, mud bogging and tractor pulling. Golf is played in the United States by about 25 million people. The sport 's national governing body, the United States Golf Association (USGA), is jointly responsible with The R&A for setting and administering the rules of golf. The USGA conducts four national championships open to professionals: the U.S. Open, U.S. Women 's Open, U.S. Senior Open, and the U.S. Senior Women 's Open, with the last of these holding its first edition in 2018. The PGA of America organizes the PGA Championship, Senior PGA Championship and Women 's PGA Championship. 
Three legs of the Grand Slam of Golf are based in the United States: the PGA Championship, U.S. Open and The Masters. (The Open Championship, known in the U.S. as the British Open, is played in the United Kingdom.) The PGA Tour is the main professional golf tour in the United States, and the LPGA Tour is the main women 's professional tour. Also of note is PGA Tour Champions, where players 50 and older compete. Golf is aired on several television networks, such as Golf Channel, NBC, ESPN, CBS and Fox. Notable American male golfers include Walter Hagen (11 majors), Ben Hogan, Jack Nicklaus (record 18 major wins), Arnold Palmer, and Tiger Woods (14 major wins). Notable female golfers include Patty Berg (record 15 major wins), Mickey Wright (13 majors), Louise Suggs and Babe Zaharias. Tennis is played in the United States in all five categories (Men 's and Ladies ' Singles; Men 's, Ladies ' and Mixed Doubles); however, the most popular are the singles. The pinnacle of the sport in the country is the US Open played in late August at the USTA Billie Jean King National Tennis Center in New York. The Indian Wells Masters, Miami Masters and Cincinnati Masters are part of the ATP World Tour Masters 1000 and the former WTA Tier I (currently Premier Mandatory and Premier 5). The United States has had considerable success in tennis for many years, with players such as Don Budge, Billie Jean King, Chris Evert, Jimmy Connors (8 major singles titles), John McEnroe (7 major singles titles), Andre Agassi (8 major singles titles) and Pete Sampras (14 major singles titles) dominating their sport in the past. More recently, the Williams sisters, Venus Williams (7 major singles titles) and Serena Williams (23 major singles titles), have been a dominant force in the women 's game, and the twin brothers Bob and Mike Bryan have claimed almost all significant career records for men 's doubles teams. USA Track & Field is the governing body for track and field in the United States. 
It organizes the annual USA Outdoor Track and Field Championships and USA Indoor Track and Field Championships. The IAAF Diamond League currently features one round in the United States, the Prefontaine Classic; the series formerly included the Adidas Grand Prix as well. Three of the World Marathon Majors are held in the United States: the Boston Marathon, Chicago Marathon and New York City Marathon. The Freihofer 's Run for Women is also an IAAF Road Race Label Event. Americans have frequently set world standards in various disciplines of track and field for both male and female athletes. Tyson Gay and Michael Johnson hold various sprint records for male athletes, while Florence Griffith Joyner set various world sprint records for female athletes. Mary Slaney set many world records for middle - distance disciplines. A turning point occurred in US track in the running boom of the 1970s. After a series of American successes in various distances from marathoners Frank Shorter and Bill Rodgers as well as middle - distance runners Dave Wottle and Steve Prefontaine, running as an American pastime began to take shape. High school track in the United States became a unique foundation for creating the United States middle - distance running talent pool, and from 1972 to 1981 an average of 13 high school boys in the United States would run under 4: 10 in the mile per year. During this time, several national high school records in the United States were set and remained largely unbroken until the 2000s. The number of high school boys running the mile under 4: 10 per year dropped abruptly from 1982, and female participation in many distance events was forbidden by athletic authorities until the 1980s. 
However, a renaissance in high school track developed when Jack Daniels, a former Olympian, published a training manual called "Daniels ' Running Formula '', which became the most widely used distance training protocol among American coaches along with Arthur Lydiard 's high - mileage regimen. Carl Lewis is credited with "normalizing '' the practice of having a lengthy track career, as opposed to retiring once an athlete reaches the age at which gaining a personal best result becomes less realistic. The United States is home to school - sponsored track and field, a tradition in which most schools from middle school through college feature a track and field team. Due to the number of American athletes who satisfy Olympic norm standards, the US holds national trials to select the best of its top - tier athletes for Olympic competition. The United States became the center of professional boxing in the early 20th century. The National Boxing Association was founded in 1921 and began to sanction title fights. In the 1960s and 1970s, Muhammad Ali became an iconic figure, transformed the role and image of the African American athlete in America by his embrace of racial pride, and transcended the sport by refusing to serve in the Vietnam War. In the 1980s and 1990s, the careers of major boxers such as Mike Tyson and Riddick Bowe were marked by crime and self - destruction. Mixed martial arts developed in the 1990s, and has achieved popularity in the early 21st century. Many companies promote MMA cards, with the U.S. - based UFC the most dominant. Traditional wrestling is performed at the scholastic level; high school wrestling is one of the most popular participatory sports for young men in the United States, and college wrestling has a small following. Professional wrestling, which evolved into a mostly scripted (kayfabe) form of sports entertainment over the course of the 20th century, enjoys widespread popularity as a spectator sport. 
Interest in pro wrestling peaked during the Monday Night Wars of the 1990s. This was due to the competition between the World Wrestling Federation (WWF) and WCW, which were the two biggest professional wrestling organizations in the country during the last two decades of the 20th century. Between the two companies, an estimated 16 million viewers tuned in every week. Following the conclusion of the Wars and WCW 's subsumption into WWF to become the modern WWE, professional wrestling 's audience has diminished; however, it still pulls in some of cable television 's highest weekly ratings. WWE remains the dominant professional wrestling company in the U.S.; it does not hold a monopoly, as numerous smaller federations have existed, two current examples being Impact Wrestling (formerly known as TNA) and Ring of Honor (ROH). Judo in the United States is not very popular and is eclipsed by more popular martial arts like karate and taekwondo. Swimming is a major competitive sport at high school and college level, but receives little mainstream media attention outside of the Olympics. Surfing and other watersports are popular in coastal areas of the U.S. California and Hawaii are the most popular locations for surfing. The Association of Surfing Professionals was founded in 1983. The most popular team sports in the United States are American football, baseball / softball, basketball, ice hockey, and soccer (association football). All five of these team sports are popular with fans, are widely watched on television, have a fully professional league, are played by millions of Americans, enjoy varsity status at many Division I colleges, and are played in high schools throughout the country. Football, known as American football outside the United States, has the most participants of any sport at both high school and college levels, the vast majority of its participants being male. 
The NFL is the preeminent professional football league in the United States. The NFL has 32 franchises divided into two conferences. After a 16 - game regular season, each conference sends six teams to the NFL Playoffs, which eventually culminate in the league 's championship game, the Super Bowl. Nationwide, the NFL obtains the highest television ratings among major sports. Watching NFL games on television on Sunday afternoons has become a common routine for many Americans during the football season. Super Bowl Sunday is the biggest annual sporting event held in the United States. The Super Bowl itself is always among the highest - rated programs of all - time in the Nielsen ratings. Millions watch college football throughout the fall months, and some communities, particularly in rural areas, place great emphasis on their local high school football teams. The popularity of college and high school football in areas such as the Southern United States (Southeastern Conference) and the Great Plains (Big 12 Conference and Big Ten Conference) stems largely from the fact that these areas historically lacked markets large enough to support a professional team. Nonetheless, college football has a rich history in the United States, predating the NFL by decades, and fans and alumni are generally very passionate about their teams. During football season in the fall, fans have the opportunity to watch high school games on Fridays and Saturdays, college football on Saturdays, and NFL games on Sundays, the usual playing day of the professional teams. However, some colleges play games on Tuesday and Wednesday nights, while the NFL offers weekly games on Monday (since 1970) and Thursday (since 2006). As recently as 2013, one could find a nationally televised professional or college game any night between Labor Day and Thanksgiving weekend. 
Notable former NFL players include Roger Staubach, Dick Butkus, Joe Greene, Bart Starr, Johnny Unitas, Walter Payton, Joe Montana, Steve Young, Jerry Rice, Brett Favre, Emmitt Smith, Ray Lewis, and Peyton Manning. Notable current NFL players include Drew Brees, Tom Brady, Cam Newton, J.J. Watt, Russell Wilson, Marshawn Lynch, and Aaron Rodgers. Indoor American football or arena football, a form of football played in indoor arenas, has several professional and semi-professional leagues. The Arena Football League was active from 1987 to 2008 and folded in 2009, but several teams from the AFL and its former minor league, af2, relaunched the league in 2010. Most other extant indoor leagues date to the mid-2000s and are regional in nature. Women 's American football is seldom seen. A few amateur and semi-professional leagues exist, of varying degrees of stability and competition. Football is unique among scholastic sports in the U.S. in that no women 's division exists for the sport; women who wish to play football in high school or college must compete directly with men. Baseball and a variant, softball, are popular participatory sports in the U.S. The highest level of baseball in the U.S. is Major League Baseball. The World Series of Major League Baseball is the culmination of the sport 's postseason each October. It is played between the winner of each of the two leagues, the American League and the National League, and the winner is determined through a best - of - seven playoff. The New York Yankees are noted for having won more titles than any other US major professional sports franchise. The Yankees ' chief rivals, the Boston Red Sox, also enjoy a huge following in Boston and throughout New England. 
The fierce National League rivalry between the former Brooklyn Dodgers and New York Giants was transferred to the West Coast when the teams became the Los Angeles Dodgers and the San Francisco Giants, and California has always been among the US states which have supplied the most players in the major leagues. Chicago sports fans also avidly follow the Chicago Cubs and the Chicago White Sox despite the comparative lack of success for the teams, with Chicago Cub fans being known throughout the country for their passionate loyalty to the team despite their not having won a championship from 1908 to 2016. Historically, the leagues were much more competitive, and cities such as Boston, Philadelphia and St. Louis had rival teams in both leagues up until the 1950s. Notable American baseball players in history include Babe Ruth (714 career home runs), Ty Cobb (career leader in batting average and batting titles), Cy Young, Honus Wagner, Ted Williams (.344 career batting average), Lou Gehrig, Joe DiMaggio, Mickey Mantle (16 - time All - Star), Stan Musial, Willie Mays, Yogi Berra (18 - time All - Star), Hank Aaron (career home run leader from 1974 to 2007), Nolan Ryan (career strikeouts leader), Roger Clemens (7 Cy Young awards), Derek Jeter and Jackie Robinson, who was instrumental in dissolving the color line and allowing African - Americans into the major leagues. An extensive minor league baseball system covers most mid-sized cities in the United States. Minor league baseball teams are organized in a six - tier hierarchy, in which the highest teams (AAA) are in major cities that do not have a major league team but often have a major team in another sport, and each level occupies progressively smaller cities. The lowest levels of professional baseball serve primarily as development systems for the sport 's most inexperienced prospects, with the absolute bottom, the rookie leagues, occupying the major league squads ' spring training complexes. 
Some limited independent professional baseball exists, the most prominent being the Atlantic League, which occupies mostly suburban locales that are not eligible for high level minor league teams of their own because they are too close to other major or minor league teams. Outside the minor leagues are collegiate summer baseball leagues, which occupy towns even smaller than those at the lower end of minor league baseball and typically cannot support professional sports. Summer baseball is an amateur exercise and uses players who forgo payment in order to remain eligible to play college baseball for their respective universities in the spring. At the absolute lowest end of the organized baseball system is senior amateur baseball (also known as Town Team Baseball), which typically plays its games only on weekends and uses rosters composed of local residents. Of those Americans citing their favorite sport, basketball is ranked second (counting amateur levels) behind American football. However, in terms of revenue, the NBA is ranked third. More Americans play basketball than any other team sport, according to the National Sporting Goods Association, with over 26 million Americans playing basketball. Basketball was invented in 1891 by Canadian physical education teacher James Naismith in Springfield, Massachusetts. The National Basketball Association (NBA) is the world 's premier men 's professional basketball league and one of the major professional sports leagues of North America. It contains 30 teams (29 teams in the U.S. and 1 in Canada) that play an 82 - game season from October to June. After the regular season, eight teams from each conference compete in the playoffs for the Larry O'Brien Championship Trophy. Since the 1992 Summer Olympics, NBA players have represented the United States in international competition and won numerous important tournaments. 
The Dream Team was the unofficial nickname of the United States men 's basketball team that won the gold medal at the 1992 Olympics. Basketball at both the college and high school levels is popular throughout the country. Every March, a 68 - team, six - round, single - elimination tournament (commonly called March Madness) determines the national champions of NCAA Division I men 's college basketball. Most U.S. states also crown state champions among their high schools. Many high school basketball teams have intense local followings, especially in the Midwest and Upper South. Indiana has 10 of the 12 largest high school gyms in the United States, and is famous for its basketball passion, known as Hoosier Hysteria. Notable NBA players in history include Wilt Chamberlain (4 time MVP), Bill Russell (5 time MVP), Bob Pettit (11 time all NBA team), Bob Cousy (12 time all NBA team), Jerry West (12 time all NBA team), Julius Erving (won MVP awards in both the ABA and NBA), Kareem Abdul - Jabbar (6 time MVP), Magic Johnson (3 time MVP), Larry Bird (3 time MVP), Michael Jordan (6 time finals MVP), John Stockton (# 1 in career assists and steals), Karl Malone (14 time all NBA team), Kobe Bryant (NBA 's third all - time leading scorer), Tim Duncan (15 - time NBA all - star), Shaquille O'Neal (3 time finals MVP) and Jason Kidd (# 2 in career assists and steals). Notable players in the NBA today include LeBron James (4 MVP awards), Stephen Curry (2 time MVP), Dwyane Wade (10 time all - star), and Kevin Durant (MVP, 4 NBA scoring titles). Ever since the 1990s, an increasing number of players born outside the United States have signed with NBA teams, sparking league interest in different parts of the world. Professional basketball is most followed in cities where there are no other sports teams in the four major professional leagues, such as in the case of the Oklahoma City Thunder, the Sacramento Kings, the San Antonio Spurs, the Memphis Grizzlies, or the Portland Trail Blazers. 
New York City has also had a long historical connection with college and professional basketball, and many basketball legends initially developed their reputations playing in the many playgrounds throughout the city. Madison Square Garden, the home arena of the New York Knicks, is often referred to as the "Mecca of basketball. '' Minor league basketball, both official and unofficial, has an extensive presence, given the relatively low cost of operating a professional basketball team. The NBA has an official minor league, known since 2017 as the NBA G League under a naming rights agreement with Gatorade. The most prominent independent league is BIG3, a three - on - three league featuring former NBA stars that launched in 2017. Several other pro basketball leagues exist but are notorious for their instability and low - budget operations. The WNBA is the premier women 's basketball league in the United States as well as the most stable and sustained women 's professional sports league in the nation. Several of the 12 teams are owned by NBA teams. The women 's national team has won seven Olympic gold medals and nine FIBA World Cups. Ice hockey, usually referred to in the U.S. simply as "hockey '', is another popular sport in the United States. In the U.S. the game is most popular in regions of the country with a cold winter climate, namely the northeast and the upper Midwest. However, since the 1990s, hockey has become increasingly popular in the Sun Belt due in large part to the expansion of the National Hockey League to the southern U.S., coupled with the mass relocation of many residents from northern cities with strong hockey support to these Sun Belt locations. The NHL is the major professional hockey league in North America, with 24 U.S. - based teams and 7 Canadian - based teams competing for the Stanley Cup. 
While NHL stars are still not as readily familiar to the general American public as are stars of the NFL, MLB, and the NBA, average attendance for NHL games in the U.S. has surpassed average NBA attendance in recent seasons, buoyed in part by the NHL Winter Classic being played in large outdoor stadiums. Minor league professional hockey leagues in the U.S. include the American Hockey League and the ECHL. Additionally, nine U.S. - based teams compete in the three member leagues of the Canadian Hockey League, a "junior '' league for players aged sixteen to twenty. College hockey has a regional following in the northeastern and upper midwestern United States. It is increasingly being used to develop players for the NHL and other professional leagues (the U.S. has junior leagues, the United States Hockey League and North American Hockey League, but they are more restricted to protect junior players ' college eligibility). The Frozen Four is college hockey 's national championship. The U.S. now has more youth hockey players than all other countries, excluding Canada, combined. USA Hockey is the official governing body for amateur hockey in the United States. The United States Hockey Hall of Fame is located in Eveleth, Minnesota. Internationally, the United States is counted among the Big Six, the group of nations that have historically dominated international ice hockey competition. (The others are Canada, Finland, Sweden, the Czech Republic, and Russia.) One of the nation 's greatest ever sporting moments was the "Miracle on Ice '', which came during the 1980 Winter Olympics when the U.S. hockey team beat the Soviet Union 4 -- 3 in the first game of the medal round before going on to beat Finland to claim the gold medal. Historically, the vast majority of NHL players had come from Canada, with a small number of Americans. As late as 1969 -- 70, Canadian players made up 95 percent of the league. 
During the 1970s and 1980s, European players entered the league, and many players from the former Soviet bloc flocked to the NHL beginning in the 1990s. Today, the majority of NHL players are Canadian, more than 20 % are Americans, and virtually all of the remainder are European - trained. (For a more complete discussion, see Origin of NHL players.) Notable NHL players in history include Wayne Gretzky (leading all - time point scorer and 9 time MVP), Mario Lemieux (3 time MVP), Guy Lafleur (2 time MVP), Gordie Howe (6 time MVP), Nicklas Lidström (7 times NHL 's top defenseman), Bobby Hull (3 time MVP and 7 time leading goal scorer), Eddie Shore (4 time MVP), Howie Morenz (3 time MVP), Maurice "Rocket '' Richard (5 time leading goal scorer), Jean Beliveau (2 time MVP) and Bobby Orr (8 times NHL 's best defenseman). Famous NHL players today include Sidney Crosby and Alexander Ovechkin. The National Women 's Hockey League, founded in 2015, is the first women 's ice hockey league in the country to pay its players and features five teams in the northeast and upper midwest. Two of the five teams (the Buffalo Beauts and Metropolitan Riveters) are either owned or operated by their metro area 's NHL franchise (the Buffalo Sabres and New Jersey Devils, respectively). At the international level, the United States women 's national ice hockey team is one of the two predominant international women 's teams in the world, alongside its longtime rival Team Canada. Soccer has been increasing in popularity in the United States in recent years. Soccer is played by over 13 million people in the U.S., making it the third-most played sport in the country, more widely played than ice hockey and American football. Most NCAA Division I colleges field both a men 's and women 's varsity soccer team, and those that field only one team almost invariably field a women 's team. 
The United States men 's national team and women 's national team, as well as a number of national youth teams, represent the United States in international soccer competitions and are governed by the United States Soccer Federation (U.S. Soccer). The U.S. women 's team holds the record for most Women 's World Cup championships, and is the only team that has never finished worse than third place in a World Cup. The U.S. women beat Japan 5 -- 2 in the 2015 FIFA Women 's World Cup final to claim their third Women 's World Cup title, and first since 1999. Major League Soccer is the premier soccer league in the United States. MLS has 23 clubs (20 from the U.S. and 3 from Canada). The 34 - game schedule runs from mid-March to late October, with the playoffs and championship in November. Soccer - specific stadiums continue to be built for MLS teams around the country, both because American football stadiums are considered to have excessive capacity, and because teams profit from operating their stadiums. Other professional men 's soccer leagues in the U.S. include the current second division, the United Soccer League, and the North American Soccer League, which had been the second - level league until being demoted in 2018 due to instability. The USL now has a formal relationship with MLS, and a number of its teams are either owned by or affiliated with MLS sides. Younger generations of Americans have strong fan appreciation for the sport, due to factors such as the U.S. hosting of the 1994 FIFA World Cup and the formation of Major League Soccer, as well as increased U.S. television coverage of soccer competitions. Many immigrants living in the United States continue to follow soccer as their favorite team sport. The United States will co-host the 2026 FIFA World Cup with Canada and Mexico. Women 's professional soccer in the United States has not seen sustained success. 
Following the demise of two professional leagues in the early 21st century, the Women 's United Soccer Association (1999 -- 2001) and Women 's Professional Soccer (2009 -- 2011), U.S. Soccer established a new National Women 's Soccer League in 2013. The NWSL has now survived longer than either of its two professional predecessors, and five of its current nine teams are owned by professional men 's clubs (four in MLS and one in the USL). However, at the lower levels of the salary scale, the NWSL is effectively semi-professional. Many notable international soccer players played in the U.S. in the original North American Soccer League, usually at the end of their playing careers -- including Pelé, Eusébio, George Best, Franz Beckenbauer, and Johan Cruyff -- or in MLS -- including Roberto Donadoni, Lothar Matthäus, David Beckham, Thierry Henry, Kaká, and David Villa. The best American soccer players enter the U.S. Soccer Hall of Fame. The following table shows additional sports that are played by over 500,000 people in the United States. Lacrosse is a team sport that is believed to have originated with the Iroquois and the Lenape. Lacrosse is most popular in the East Coast area. The National Lacrosse League and Major League Lacrosse are the national box and outdoor lacrosse leagues, respectively, with both leagues operating on a semi-professional level. Volleyball is also a notable sport in the United States, especially at the college and university levels. Unlike most Olympic sports which are sponsored widely at the collegiate level for both sexes, the support for college volleyball is dramatically skewed in favor of the women 's game. In the 2011 -- 12 school year, over 300 schools in NCAA Division I alone (the highest of three NCAA tiers) sponsored women 's volleyball at the varsity level, while fewer than 100 schools in all three NCAA divisions combined sponsored varsity men 's volleyball, with only 23 of them in Division I. 
This is partially due to Title IX; female - oriented sports such as volleyball help balance a college 's athletic opportunities for women with those for men. The men 's national team has won three gold medals at the Olympic Games, one FIVB World Championship, two FIVB Volleyball World Cups, and one FIVB World League title. Meanwhile, the women 's national team has won one FIVB World Championship and six editions of the FIVB World Grand Prix. Beach volleyball has increasingly become popular in the United States, in part due to media exposure during the Olympic Games. Rugby union is played professionally, recreationally and in colleges, though it is not governed by the NCAA (see college rugby). An estimated 1.2 million people in the United States play rugby. The U.S. national team has competed at the Rugby World Cup. In rugby sevens, the men 's national team is one of 15 "core teams '' that participate in every event of the annual World Rugby Sevens Series, and the women 's national team is one of 11 core teams in the Women 's Sevens Series. The professional domestic club competition PRO Rugby began play in April 2016, but lasted only one season; a second attempt at a professional league, Major League Rugby, launched in 2018. Rugby union participation in the U.S. has grown significantly in recent years, growing by 350 % between 2004 and 2011. A 2010 survey by the National Sporting Goods Manufacturers Association ranked rugby union as the fastest - growing sport in the U.S. The sport 's profile in the U.S. received a tremendous boost from the IOC 's announcement in 2009 that rugby union would return to the Olympics in 2016. Since the Olympic announcement, rugby union events such as the Collegiate Rugby Championship, the USA Sevens, and the Rugby World Cup have been broadcast on network TV. 
The USA Sevens, held every year in February or March as part of the World Rugby Sevens Series and adding a parallel women 's event in the World Rugby Women 's Sevens Series in 2017, regularly draws more than 60,000 fans to Sam Boyd Stadium in Las Vegas. Rugby football formed the basis of modern American football; the two sports were nearly identical in the late 19th century but diverged into distinct, incompatible codes by the start of the 20th century. In 2006 it was estimated that 30,000 people in the United States play or watch cricket annually. By 2017, this figure had risen to 200,000 people playing cricket in 6,000 teams. Cricket in the United States is not as popular as baseball and is not as popular among as large a fraction of the population as it is within either the Commonwealth nations or the other ICC full member (or Test cricket) nations. There are at least two historical reasons for the relative obscurity of cricket within the United States. One reason was the 19th - century rise of the summertime bat - and - ball sport now called baseball, which seems to have displaced cricket as a popular pastime. Another reason was that in 1909, when the ICC was originally organized as the Imperial Cricket Conference, it was open only to Commonwealth nations and thereby excluded the US from participating in the sport at the highest level. Nevertheless, in 1965 the US was admitted to the renamed ICC as an associate member and the sport grew in popularity in the second half of the 20th century. An oft - mentioned reason for the growing popularity of cricket is the growing population of immigrants to the US who come from cricket - playing nations. With the launching of the United States Youth Cricket Association in 2010, a more focused effort to bring the game to American schools was begun, with the intention of broadening cricket 's fan base beyond expatriates and their children. 
ESPN has been stepping up its coverage of cricket in recent years, buying the cricket website Cricinfo in 2007, and broadcasting the final of the 2014 ICC World Twenty20 competition, the 2014 Indian Premier League, English County Championship games, and international Test cricket. Ultimate is a team sport played with a flying disc. The object of the game is to score points by passing the disc between teammates until a pass is completed to a teammate in the opposing team 's end zone. Over 5.1 million people play some form of organized ultimate in the US. Alternative sports, using the flying disc, began in the mid-sixties, when many young people looked for alternative recreational activities, including throwing a Frisbee. What started with a few players experimenting with a Frisbee later became known as playing disc freestyle. Organized disc sports in the 1970s began with a few tournaments, and professionals using Frisbee show tours to perform at universities, fairs and sporting events. Disc sports such as disc freestyle, disc dog (with a human handler throwing discs for a dog to catch), double disc court, disc guts, disc ultimate, and disc golf became this sport 's first events. Disc guts was invented in the 1950s and developed at the International Frisbee Tournament. Ultimate, the most widely played disc sport, began in the late 1960s. In the 1970s it developed as an organized sport with the creation of the Ultimate Players Association. Double disc court was invented and introduced in the early 1970s. In 1974, disc freestyle competition was created. In 1976, the game of disc golf was standardized by the Professional Disc Golf Association. Beginning in 1974, the International Frisbee Association became the regulatory organization for all of these sports. Disc sports include both ultimate and disc golf. Ultimate has added the American Ultimate Disc League, which began play in 2012. 
However, the league is still competing at a lower level than club teams established across the U.S. In 2015, the International Olympic Committee granted full recognition to the World Flying Disc Federation for flying disc sports including Ultimate. The development of snowboarding was inspired by skateboarding, sledding, surfing and skiing. It was developed in the United States in the 1960s, became a Winter Olympic sport at Nagano in 1998 and first featured in the Winter Paralympics at Sochi in 2014. Australian rules football was first played in the United States in 1996. The United States Australian Football League is the governing body for the sport in the U.S., with various clubs and leagues around the country. The National Championships are held annually. The United States men 's national Australian rules football team and the women 's national team both regularly play international matches, and play in the Australian Football International Cup, an international tournament. The sport also benefits from an active fan - based organization, the Australian Football Association of North America. Bandy is only played in Minnesota. The national team regularly plays in Division A of the Bandy World Championships. In terms of licensed athletes, it is the second biggest winter sport in the world. Cricket in the United States is not a popular sport, but has a niche market with limited inroads, mainly in immigrant communities. The United States of America Cricket Association governs cricket in the United States. Cricket was historically the most popular sport in America during the 18th and early 19th centuries, but declined as baseball overtook it. The first international cricket match in America was the annual Canada vs. U.S. match, first played in 1844, when it was attended by 10,000 spectators in New York; the annual match is the oldest international sporting event in the modern world. 
The United States national cricket team plays in World Cricket League Division IV and the ICC Americas Championship, and has qualified for the ICC Intercontinental Cup. Curling is popular in northern states, possibly because of climate, proximity to Canada, or Scandinavian heritage. The national popularity of curling is growing after significant media coverage of the sport in the 2006 and 2010 Winter Olympics. Gaelic football and hurling are governed by North American GAA and New York GAA. They do not have a high profile, but are developing sports, with New York fielding a representative team in the All - Ireland Senior Football Championship. Field hockey is played in the United States predominantly by women. It is played widely at numerous NCAA colleges, where it is used as a sport to offset Title IX regulations assuring equal opportunities for men and women in sports (it thus offsets male - dominated sports such as college football). Handball, a common sport in European countries, is seldom seen in the United States. The sport is mostly played in the country at the amateur level. Handball is played in the Summer Olympics, but is not sanctioned by the NCAA; all college and university teams play as club teams. The sport 's governing body is USA Team Handball. Inline hockey was invented by Americans as a way to play the sport in all climates. The PIHA is the league with the largest number of professional teams in the nation. Street hockey is a non-standard version of inline hockey played by amateurs in informal games. Rugby league in the United States is organized by the USA Rugby League (USARL), a 14 - team semi-professional rugby league football competition based on the East Coast of the United States. The league was founded in 2011 by clubs that had broken with the established American National Rugby League (AMNRL), plus expansion franchises. The USARL began its inaugural season in 2011. 
In November 2014, the USARL was granted affiliate membership of the RLIF and RLEF and is now the official governing body for the sport in the USA. The United States national rugby league team played in its first World Cup in 2013, losing to Australia 62 -- 0 in the quarterfinals. The United States, along with Canada, will host the 2025 Rugby League World Cup. Water polo does not have a professional competition in the U.S., so the highest level of competitive play is at the college level and in the Olympics. The NCAA sanctions water polo as a varsity sport for both men and women, but the sport is not popular in the U.S. beyond the West Coast, and no team outside of California has ever reached the finals of the NCAA Division I men 's water polo championship. Angleball is a sport developed as a way to maintain physical fitness. Angleball is used by colleges, schools, and camps. Angleball gameplay is simple. Two large balls are placed atop standards at opposite sides of a field. Teams pass a smaller ball back and forth, attempting to knock the other team 's ball off its perch with the smaller ball. Badminton has also become a popular backyard sport in the United States. Capture the flag is played recreationally by adults and children. Dodgeball is played traditionally by children in school, though adult leagues in urban areas have formed within the past 10 years. A caricatured version was portrayed in the 2004 film comedy Dodgeball: A True Underdog Story. Kickball is also played recreationally by adults and children, especially at the elementary school level. Its rules are largely identical to those of baseball, except that no bat is used and instead a large rubber ball is rolled along the ground for the "batter '' to kick. Roller derby is a contact sport played on roller skates that has had brief surges of popularity throughout the 20th and 21st centuries. Roller derby was portrayed in the 2009 film Whip It. 
As of September 2009, there were 350 women 's, men 's, and junior roller derby leagues in the U.S.A. For the most part, unlike sports in Europe and other parts of the world, there is no system of promotion and relegation in American professional sports. Major sports leagues operate as associations of franchises. The same 30 -- 32 teams play in the league each year unless they move to another city or the league chooses to expand with new franchises. All American sports leagues use the same type of schedule: after the regular season, the 10 -- 16 teams with the best records enter a playoff tournament leading to a championship series or game. American sports, except for soccer, have no equivalent to the cup competitions that run concurrently with leagues in European sports. Even in the case of soccer, the cup competition, the Lamar Hunt U.S. Open Cup, draws considerably less attention than the regular season. Also, the only top - level U.S. professional teams that play teams from other organizations in meaningful games are those in MLS. Since the 2012 season, all U.S. - based MLS teams have automatically qualified for the U.S. Open Cup, in which they compete against teams from lower - level U.S. leagues. In addition, three or four U.S. - based MLS teams (depending on the results of the U.S. Open Cup) qualify to play clubs from countries outside the U.S. and Canada in the CONCACAF Champions League. NBA teams have played European teams in preseason exhibitions on a semi-regular basis, and recent MLS All - Star Games have pitted top players from the league against major European soccer teams, such as members of the Premier League. International competition is not as important in American sports as it is in the sporting culture of most other countries, although Olympic ice hockey and basketball tournaments do generate attention.
The first international baseball tournament with top - level players, the World Baseball Classic, also generated some positive reviews after its inaugural tournament in 2006. The major professional sports leagues operate drafts once a year, in which each league 's teams select eligible prospects; eligibility differs from league to league. Baseball and ice hockey operate minor league systems for players who have finished education but are not ready or good enough for the major leagues. The NBA also has a development league for players who are not ready to play at the top level. The degree to which sports in the United States are associated with secondary and tertiary education is rare among nations. Millions of students participate in athletics programs operated by high schools and colleges. Student - athletes often receive scholarships to colleges in recognition of their athletic potential. Currently, the largest governing body of collegiate sports is the National Collegiate Athletic Association (NCAA). Especially in football and basketball, college sports are followed in numbers equaling those of professional sports. College football games can draw over 100,000 spectators. For upper - tier institutions, sports are a significant source of revenue; for less prominent teams, maintaining a high - level team is a major expense. To ensure some semblance of competitive balance, the NCAA divides its institutions into three divisions (four in football), sorted by the number of athletic scholarships each school is willing to offer. The most practiced college sports, measured by NCAA reporting on varsity team participation, are: (1) football (64,000), (2) baseball / softball (47,000), (3) track and field (46,000), (4) soccer (43,000), (5) basketball (32,000), (6) cross-country running (25,000), and (7) swimming / diving (20,000). The most popular sport among female athletes is soccer, followed closely by track and field.
Community college athletics are governed separately by the National Junior College Athletic Association (NJCAA). Most public high schools are members of their respective state athletic association, and those associations are members of the National Federation of State High School Associations. Some states have separate associations for public and non-public high schools. Popular high school sports in various regions of the U.S. include the Texas high school football championships, the Indiana basketball championships, and ice hockey in Minnesota. The Minnesota State High School Hockey Tournament is the largest high school sporting event in the country, with average attendance at the top tier, or "AA '', games over 18,000. The Amateur Athletic Union claims to have over 670,000 participants and over 100,000 volunteers. The AAU has been around since 1888, and has been influential in amateur sports for more than 125 years. In the 1970s, the AAU received growing criticism. Many claimed that its regulatory framework was outdated. Women were banned from participating in certain competitions and some runners were locked out. There were also problems with sporting goods that did not meet the standards of the AAU. During this time, the Amateur Sports Act of 1978 organized the United States Olympic Committee and saw the re-establishment of state - supported independent associations for the Olympic sports, referred to as national governing bodies. As a result, the AAU lost its influence and importance in international sports, and focused on the support and promotion of predominantly youthful athletes, as well as on the organization of national sports events. No American government agency is charged with overseeing sports.
However, the President 's Council on Physical Fitness and Sports advises the President through the Secretary of Health and Human Services about physical activity, fitness, and sports, and recommends programs to promote regular physical activity for the health of all Americans. The U.S. Congress has chartered the United States Olympic Committee to govern American participation in the Olympic Movement and promote Olympic sports. Congress has also involved itself in several aspects of sports, notably gender equity in college athletics, illegal drugs in pro sports, sports broadcasting and the application of antitrust law to sports leagues. Individual states may also have athletic commissions, which primarily govern individual sports such as boxing, kickboxing and mixed martial arts. Notable state athletic commissions are the Nevada Athletic Commission, California State Athletic Commission, New York State Athletic Commission and New Jersey State Athletic Control Board. Although these commissions only have jurisdiction over their own states, the Full Faith and Credit Clause of the U.S. Constitution is often interpreted as forcing all other states to recognize any state athletic commission 's rulings regarding an athlete 's fitness for participating in a sport. Sports have been a major part of American broadcasting since the early days of radio. Today, television networks and radio networks pay millions (sometimes billions) of dollars for the rights to broadcast sporting events. Contracts between leagues and broadcasters stipulate how often games must be interrupted for commercials. Because of all of the advertisements, broadcasting contracts are very lucrative and account for the biggest chunk of major professional teams ' revenues. Broadcasters also covet the television contracts for the major sports leagues (especially in the case of the NFL) in order to amplify their ability to promote their programming to the audience, especially young and middle - aged adult males. 
The advent of cable and satellite television has greatly expanded sports offerings on American TV. ESPN, the first all - sports cable network in the U.S., went on the air in 1979, and has been followed by several sister networks and competitors. Some sports television networks are national, such as CBS Sports Network, Fox Sports 1 and NBC Sports Network, whereas others are regional, such as Comcast SportsNet, Fox Sports Networks and Time Warner Cable SportsChannel. General entertainment channels like FX, TBS, TNT, and USA Network also air sports events. Some sports leagues have their own sports networks, such as NFL Network, MLB Network, NBA TV, NHL Network, Big Ten Network, Pac - 12 Network and SEC Network. Some sports teams run their own television networks as well. Sports are also widely broadcast at the local level, ranging from college and professional sports down to (on some smaller stations) recreational and youth leagues. Internet radio has allowed these broadcasts to reach a worldwide audience. In the broadest definition of sports -- physical recreation of all sorts -- the four most popular sports among the general population of the United States are exercise walking (90 million), exercising with equipment (53 million), swimming (52 million) and camping (47 million). The most popular competitive sport (and fifth most popular recreational sport) is bowling (43 million). Other popular sports are bicycling (37 million), fishing (35 million), weightlifting (33 million), aerobics (30 million), and hiking (28 million). According to a January 2018 poll by Gallup, 37 % of Americans consider football their favorite spectator sport, while 11 % prefer basketball, 9 % baseball, and 7 % soccer. There is some variation by viewer demographics: men show a stronger preference for football than women, conservatives a stronger preference than liberals, and those over 35 a stronger preference than those under 35.
In all groups, however, football is still the most popular. Basketball and soccer are more popular among liberals than conservatives. Though baseball has historically been called the "national pastime '', American football has considerably grown in popularity with the advent of television over the last several decades. Most debates about "America 's most popular sport '' tend to center on the degree of Americans ' identification with either of these two games; the question is a difficult one to resolve. Advocates of baseball point to the overwhelming number of baseball tickets sold annually in the United States and Canada, compared to NFL football. It is likely that the average American sports fan will attend many more major league baseball games than NFL football games in his or her lifetime, due in part to baseball 's longer schedule and football 's (generally) higher ticket prices. Advocates of football, in turn, point to football 's large television audience, including the Super Bowl, though the sport is also facing some negative publicity in the world of youth sports due to media coverage of documented health and injury risks posed to players, including the potential long - term health concerns that concussions pose for children or teenagers. Certain teams of both sports, such as the Green Bay Packers, Boston Red Sox, New York Yankees, St. Louis Cardinals, New England Patriots, Washington Redskins, Oakland Raiders and Pittsburgh Steelers, have cultivated famously loyal fan bases across the country. In many cases, identification with a certain football or baseball team is a matter of local identity and family inheritance going back many generations. Furthermore, the popularity of each, as well as of other major team sports, may vary depending on region, ethnicity and age. The following table shows the major professional sports leagues, which average over 15,000 fans per game and that have a national TV contract that pays rights fees.
Lions -- Packers rivalry
The Lions -- Packers rivalry is an NFL rivalry between the Detroit Lions and the Green Bay Packers (Green Bay leads the playoff series 2 -- 0). They first met in 1930, when the Lions were known as the Portsmouth Spartans and were based in Portsmouth, Ohio; the team moved to Detroit for the 1934 season. The Lions and Packers have been division rivals since 1933, having both played in the NFL 's Western Conference from 1933 to 1970 and in the NFC North since 1970 (known as the NFC Central from 1970 to 2001). They have met at least twice a season since 1932, without any cancelled games, making this the longest continuously - running rivalry in the NFL. Updated January 1, 2017; figures do not include the 1966 or 1967 NFL championships.
London dispersion force
London dispersion forces (LDF, also known as dispersion forces, London forces, instantaneous dipole -- induced dipole forces, or loosely van der Waals forces) are a type of force acting between atoms and molecules. They are part of the van der Waals forces. The LDF is named after the German - American physicist Fritz London. The LDF is a weak intermolecular force arising from quantum - induced instantaneous polarization multipoles in molecules. They can therefore act between molecules without permanent multipole moments. London forces are exhibited by non-polar molecules because of the correlated movements of the electrons in interacting molecules. Because the electrons in adjacent molecules "flee '' as they repel each other, electron density in a molecule becomes redistributed in proximity to another molecule (see quantum mechanical theory of dispersion forces). This is frequently described as the formation of instantaneous dipoles that attract each other. London forces are present between all chemical groups, and usually represent the main part of the total interaction force in condensed matter, even though they are generally weaker than ionic bonds and hydrogen bonds. London forces become stronger as the atom in question becomes larger, and to a smaller degree for large molecules. This is due to the increased polarizability of molecules with larger, more dispersed electron clouds. This trend is exemplified by the halogens (from smallest to largest: F, Cl, Br, I). Fluorine and chlorine are gases at room temperature, bromine is a liquid, and iodine is a solid. The London forces also become stronger with larger amounts of surface contact. Greater surface area means closer interaction between different molecules. The first explanation of the attraction between noble gas atoms was given by Fritz London in 1930. He used a quantum - mechanical theory based on second - order perturbation theory.
The perturbation is due to the Coulomb interaction between the electrons and nuclei of the two moieties (atoms or molecules). The second - order perturbation expression of the interaction energy contains a sum over states. The states appearing in this sum are simple products of the excited electronic states of the monomers. Thus, no intermolecular antisymmetrization of the electronic states is included and the Pauli exclusion principle is only partially satisfied. London wrote a Taylor series expansion of the perturbation in 1 / R, where R is the distance between the nuclear centers of mass of the moieties. This expansion is known as the multipole expansion because the terms in this series can be regarded as energies of two interacting multipoles, one on each monomer. Substitution of the multipole - expanded form of the interaction V into the second - order energy yields an expression that somewhat resembles an expression describing the interaction between instantaneous multipoles (see the qualitative description above). Additionally, an approximation, named after Albrecht Unsöld, must be introduced in order to obtain a description of London dispersion in terms of dipole polarizabilities and ionization potentials. In this manner, the following approximation is obtained for the dispersion interaction E_AB^disp between two atoms A and B. Here α_A and α_B are the dipole polarizabilities of the respective atoms. The quantities I_A and I_B are the first ionization potentials of the atoms, and R is the intermolecular distance.
E_AB^disp ≈ − (3 / 2) · (I_A I_B / (I_A + I_B)) · (α_A α_B / R^6)

Note that this final London equation does not contain instantaneous dipoles (see molecular dipoles). The "explanation '' of the dispersion force as the interaction between two such dipoles was invented after London arrived at the proper quantum mechanical theory. The authoritative work contains a criticism of the instantaneous dipole model and a modern and thorough exposition of the theory of intermolecular forces. The London theory has much similarity to the quantum mechanical theory of light dispersion, which is why London coined the phrase "dispersion effect ''. In physics, the term "dispersion '' describes the variation of a quantity with frequency, which in the case of the London dispersion is the fluctuation of the electrons. Dispersion forces are usually the dominant of the three van der Waals forces (orientation, induction, dispersion) between atoms and molecules, with the exception of molecules that are small and highly polar, such as water. The following contribution of the dispersion to the total intermolecular interaction energy has been given:
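The London formula above can be evaluated numerically. The sketch below (a minimal illustration, not from the source) computes the dispersion energy for a pair of argon atoms in atomic units; the argon polarizability (~11.08 bohr^3), first ionization potential (~0.579 hartree), and separation (~7.1 bohr, near the Ar2 equilibrium distance) are approximate literature values assumed for the example.

```python
# Minimal sketch of the London dispersion formula in atomic units.
# The argon values below are approximate literature numbers, used
# only for illustration.

def london_dispersion(alpha_a, alpha_b, i_a, i_b, r):
    """E_disp ≈ -(3/2) * I_A I_B / (I_A + I_B) * alpha_A alpha_B / R^6.

    alpha_*: dipole polarizabilities (bohr^3)
    i_*:     first ionization potentials (hartree)
    r:       internuclear distance (bohr)
    Returns the dispersion energy in hartree (negative = attractive).
    """
    return -1.5 * (i_a * i_b / (i_a + i_b)) * alpha_a * alpha_b / r**6

# Ar-Ar example: alpha ≈ 11.08 bohr^3, I ≈ 0.579 hartree, R ≈ 7.10 bohr
e_disp = london_dispersion(11.08, 11.08, 0.579, 0.579, 7.10)
print(f"E_disp ≈ {e_disp:.2e} hartree")          # on the order of -4e-4 hartree
print(f"       ≈ {e_disp * 2625.5:.2f} kJ/mol")  # hartree -> kJ/mol
```

The result, roughly −1 kJ/mol, is of the same order as the experimental Ar2 well depth, which is the kind of agreement the second - order London formula typically delivers; the R^−6 dependence also makes explicit why the force falls off so quickly with distance.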
Aztec empire
The Aztec Empire, or the Triple Alliance (Classical Nahuatl: Ēxcān Tlahtōlōyān, (ˈjéːʃkaːn̥ t͡ɬaʔtoːˈlóːjaːn̥)), began as an alliance of three Nahua altepetl city - states: Mexico - Tenochtitlan, Texcoco, and Tlacopan, with Nahuatl as the lingua franca. These three city - states ruled the area in and around the Valley of Mexico from 1428 until the combined forces of the Spanish conquistadores and their native allies under Hernán Cortés defeated them in 1521. The Triple Alliance was formed from the victorious factions in a civil war fought between the city of Azcapotzalco and its former tributary provinces. Despite the initial conception of the empire as an alliance of three self - governed city - states, Tenochtitlan quickly became dominant militarily. By the time the Spanish arrived in 1519, the lands of the Alliance were effectively ruled from Tenochtitlan, while the other partners in the alliance had taken subsidiary roles. The alliance waged wars of conquest and expanded rapidly after its formation. At its height, the alliance controlled most of central Mexico as well as some more distant territories within Mesoamerica, such as the Xoconochco province, an Aztec exclave near the present - day Guatemalan border. Aztec rule has been described by scholars as "hegemonic '' or "indirect ''. The Aztecs left rulers of conquered cities in power so long as they agreed to pay semi-annual tribute to the Alliance, as well as supply military forces when needed for the Aztec war efforts. In return, the imperial authority offered protection and political stability, and facilitated an integrated economic network of diverse lands and peoples who had significant local autonomy. The state religion of the empire was polytheistic, worshiping a diverse pantheon that included dozens of deities. Many had officially recognized cults large enough so that the deity was represented in the central temple precinct of the capital Tenochtitlan.
The imperial cult, specifically, was that of Huitzilopochtli, the distinctive warlike patron god of the Mexica. Peoples in conquered provinces were allowed to retain and freely continue their own religious traditions, so long as they added the imperial god Huitzilopochtli to their local pantheons. The word "Aztec '' in modern usage would not have been used by the people themselves. It has variously been used to refer to the Triple Alliance empire, the Nahuatl - speaking people of central Mexico prior to the Spanish conquest, or specifically the Mexica ethnicity of the Nahuatl - speaking peoples. The name comes from a Nahuatl word meaning "people from Aztlan, '' reflecting the mythical place of origin for Nahua peoples. For the purpose of this article, "Aztec '' refers only to those cities that constituted or were subject to the Triple Alliance. For the broader use of the term, see the article on Aztec civilization. Nahua peoples descended from Chichimec peoples who migrated to central Mexico from the north in the early 13th century. The migration story of the Mexica is similar to those of other polities in central Mexico, with supernatural sites, individuals, and events, joining earthly and divine history as they sought political legitimacy. According to the pictographic codices in which the Aztecs recorded their history, the place of origin was called Aztlán. Early migrants settled the Basin of Mexico and surrounding lands by establishing a series of independent city - states. These early Nahua city - states or altepetl, were ruled by dynastic heads called tlahtohqueh (singular, tlatoāni). Most of the existing settlements had been established by other indigenous peoples before the Mexica migration. These early city - states fought various small - scale wars with each other, but due to shifting alliances, no individual city gained dominance. The Mexica were the last of the Nahua migrants to arrive in Central Mexico. 
They entered the Basin of Mexico around the year 1250 AD, and by then most of the good agricultural land had already been claimed. The Mexica first settled at Chapultepec (Chapoltepēc, "in the hill of grasshoppers ''), and later persuaded the king of Culhuacan, a small city - state but important historically as a refuge of the Toltecs, to allow them to settle in a relatively infertile patch of land called Tizaapan. There the Mexica served as mercenaries for Culhuacan. After the Mexica served Culhuacan in battle, the ruler appointed one of his daughters to rule over the Mexica. According to mythological native accounts, the Mexica instead sacrificed her by flaying her skin, on the command of their god Xipe Totec. When the ruler of Culhuacan learned of this, he attacked and used his army to drive the Mexica from Tizaapan by force. The Mexica moved to an island in the middle of Lake Texcoco, where, as legend had foretold, an eagle was perched on a nopal cactus. The Mexica interpreted this as a sign from their god and founded their new city, Tenochtitlan, on this island in the year ōme calli, or "Two House '' (1325 AD). The Mexica rose to prominence as fierce warriors and were able to establish themselves as a military power. The importance of warriors and the integral nature of warfare in Mexica political and religious life helped propel them to emerge as the dominant military power prior to the arrival of the Spanish in 1519. The new Mexica city - state allied with the city of Azcapotzalco and paid tribute to its ruler, Tezozomoc. With Mexica assistance, Azcapotzalco began to expand into a small tributary empire. Until this point, the Mexica ruler was not recognized as a legitimate king. Mexica leaders successfully petitioned one of the kings of Culhuacan to provide a daughter to marry into the Mexica line. Their son, Acamapichtli, was enthroned as the first tlatoani of Tenochtitlan in the year 1372.
While the Tepanecs of Azcapotzalco expanded their rule with help from the Mexica, the Acolhua city of Texcoco grew in power in the eastern portion of the lake basin. Eventually, war erupted between the two states, and the Mexica played a vital role in the conquest of Texcoco. By then, Tenochtitlan had grown into a major city and was rewarded for its loyalty to the Tepanecs by receiving Texcoco as a tributary province. In 1426, the Tepanec king Tezozomoc died, and the resulting succession crisis precipitated a civil war between potential successors. The Mexica supported Tezozomoc 's preferred heir, Tayahauh, who was initially enthroned as king. But his son, Maxtla, soon usurped the throne and turned against factions that opposed him, including the Mexica ruler Chimalpopoca. The latter died shortly thereafter, possibly assassinated by Maxtla. The new Mexica ruler Itzcoatl continued to defy Maxtla; he blockaded Tenochtitlan and demanded increased tribute payments. Maxtla similarly turned against the Acolhua, and the king of Texcoco, Nezahualcoyotl, fled into exile. Nezahualcoyotl recruited military help from the king of Huexotzinco, and the Mexica gained the support of a dissident Tepanec city, Tlacopan. In 1427, Tenochtitlan, Texcoco, Tlacopan, and Huexotzinco went to war against Azcapotzalco, emerging victorious in 1428. After the war, Huexotzinco withdrew, and in 1430, the three remaining cities formed a treaty known today as the Triple Alliance. The Tepanec lands were carved up among the three cities, whose leaders agreed to cooperate in future wars of conquest. Land acquired from these conquests was to be held by the three cities together. Tribute was to be divided so that two - fifths each went to Tenochtitlan and Texcoco, and one - fifth went to Tlacopan. Each of the three kings of the alliance in turn assumed the title "huetlatoani '' ("Elder Speaker '', often translated as "Emperor ''). 
In this role, each temporarily held a de jure position above the rulers of other city - states ("tlatoani ''). In the next 100 years, the Triple Alliance of Tenochtitlan, Texcoco, and Tlacopan came to dominate the Valley of Mexico and extend its power to the shores of the Gulf of Mexico and the Pacific. Tenochtitlan gradually became the dominant power in the alliance. Two of the primary architects of this alliance were the half - brothers Tlacaelel and Moctezuma, nephews of Itzcoatl. Moctezuma eventually succeeded Itzcoatl as the Mexica huetlatoani in 1440. Tlacaelel occupied the newly created title of "Cihuacoatl '', equivalent to something between "Prime Minister '' and "Viceroy ''. Shortly after the formation of the Triple Alliance, Itzcoatl and Tlacaelel instigated sweeping reforms of the Aztec state and religion. It has been alleged that Tlacaelel ordered the burning of some or most of the extant Aztec books, claiming that they contained lies and that it was "not wise that all the people should know the paintings ''. Even if he did order such book - burnings, it was probably limited primarily to documents containing political propaganda from previous regimes; he thereafter rewrote the history of the Aztecs, naturally placing the Mexica in a more central role. After Moctezuma I succeeded Itzcoatl as the Mexica emperor, more reforms were instigated to maintain control over conquered cities. Uncooperative kings were replaced with puppet rulers loyal to the Mexica. A new imperial tribute system established Mexica tribute collectors that taxed the population directly, bypassing the authority of local dynasties. Nezahualcoyotl also instituted a policy in the Acolhua lands of granting subject kings tributary holdings in lands far from their capitals. This was done to create an incentive for cooperation with the empire; if a city 's king rebelled, he lost the tribute he received from foreign land.
Some rebellious kings were replaced by calpixqueh, or appointed governors rather than dynastic rulers. Moctezuma issued new laws that further separated nobles from commoners and instituted the death penalty for adultery and other offenses. By royal decree, a religiously supervised school was built in every neighborhood. Commoner neighborhoods had a school called a "telpochcalli '' where they received basic religious instruction and military training. A second, more prestigious type of school called a "calmecac '' served to teach the nobility, as well as commoners of high standing seeking to become priests or artisans. Moctezuma also created a new title called "quauhpilli '' that could be conferred on commoners. This title was a form of non-hereditary lesser nobility awarded for outstanding military or civil service (similar to the English knight). In some rare cases, commoners that received this title married into royal families and became kings. One component of this reform was the creation of an institution of regulated warfare called the Flower Wars. Mesoamerican warfare overall is characterized by a strong preference for capturing live prisoners as opposed to slaughtering the enemy on the battlefield, which was considered sloppy and gratuitous. The Flower Wars are a potent manifestation of this approach to warfare. These highly ritualized wars ensured a steady, healthy supply of experienced Aztec warriors as well as a steady, healthy supply of captured enemy warriors for sacrifice to the gods. Flower wars were pre-arranged by officials on both sides and conducted specifically for the purpose of each polity collecting prisoners for sacrifice. According to native historical accounts, these wars were instigated by Tlacaelel as a means of appeasing the gods in response to a massive drought that gripped the Basin of Mexico from 1450 to 1454. The flower wars were mostly waged between the Aztec Empire and the neighboring cities of their arch - enemy Tlaxcala. 
After the defeat of the Tepanecs, Itzcoatl and Nezahualcoyotl rapidly consolidated power in the Basin of Mexico and began to expand beyond its borders. The first targets for imperial expansion were Coyoacan in the Basin of Mexico and Cuauhnahuac and Huaxtepec in the modern Mexican state of Morelos. These conquests provided the new empire with a large influx of tribute, especially agricultural goods. On the death of Itzcoatl, Moctezuma I was enthroned as the new Mexica emperor. The expansion of the empire was briefly halted by a major four - year drought that hit the Basin of Mexico in 1450, and several cities in Morelos had to be re-conquered after the drought subsided. Moctezuma I and Nezahualcoyotl continued to expand the empire east towards the Gulf of Mexico and south into Oaxaca. In 1468, Moctezuma I died and was succeeded by his son, Axayacatl. Most of Axayacatl 's thirteen - year reign was spent consolidating the territory acquired under his predecessor. Moctezuma I and Nezahualcoyotl had expanded rapidly, and many provinces rebelled. At the same time as the Aztec Empire was expanding and consolidating power, the Purépecha Empire in West Mexico was similarly expanding. In 1455, the Purépecha under their king Tzitzipandaquare invaded the Toluca Valley, claiming lands previously conquered by Moctezuma I and Itzcoatl. In 1472, Axayacatl re-conquered the region and successfully defended it from Purépecha attempts to take it back. In 1479, Axayacatl launched a major invasion of the Purépecha Empire with 32,000 Aztec soldiers. The Purépecha met them just across the border with 50,000 soldiers and scored a resounding victory, killing or capturing over 90 % of the Aztec army. Axayacatl himself was wounded in the battle, retreated to Tenochtitlan, and never engaged the Purépecha in battle again. In 1472, Nezahualcoyotl died and his son Nezahualpilli was enthroned as the new huetlatoani of Texcoco. This was followed by the death of Axayacatl in 1481.
Axayacatl was replaced by his brother Tizoc. Tizoc 's reign was notoriously brief. He proved to be ineffectual and did not significantly expand the empire. Apparently due to his incompetence, Tizoc was likely assassinated by his own nobles five years into his rule. Tizoc was succeeded by his brother Ahuitzotl in 1486. Like his predecessors, the first part of Ahuitzotl 's reign was spent suppressing rebellions that were commonplace due to the indirect nature of Aztec rule. Ahuitzotl then began a new wave of conquests including the Oaxaca Valley and the Soconusco Coast. Due to increased border skirmishes with the Purépechas, Ahuitzotl conquered the border city of Otzoma and turned the city into a military outpost. The population of Otzoma was either killed or dispersed in the process. The Purépecha subsequently established fortresses nearby to protect against Aztec expansion. Ahuitzotl responded by expanding further west to the Pacific Coast of Guerrero. By the reign of Ahuitzotl, the Mexica were the largest and most powerful faction in the Aztec Triple Alliance. Building on the prestige the Mexica had acquired over the course of the conquests, Ahuitzotl began to use the title "huehuetlatoani '' ("Eldest Speaker '') to distinguish himself from the rulers of Texcoco and Tlacopan. Even though the alliance still technically ran the empire, the Mexica Emperor now assumed nominal if not actual seniority. Ahuitzotl was succeeded by his nephew Motecuzoma II in 1502. Motecuzoma II spent most of his reign consolidating power in lands conquered by his predecessors. In 1515, Aztec armies commanded by the Tlaxcalan general Tlahuicole invaded the Purépecha Empire once again. The Aztec army failed to take any territory and was mostly restricted to raiding. The Purépechas defeated them and the army withdrew. Motecuzoma II instituted more imperial reforms. After the death of Nezahualcoyotl, the Mexica Emperors had become the de facto rulers of the alliance. 
Motecuzoma II used his reign to attempt to consolidate power more closely with the Mexica Emperor. He removed many of Ahuitzotl 's advisors and had several of them executed. He also abolished the "quauhpilli '' class, destroying the chance for commoners to advance to the nobility. His reform efforts were cut short by the Spanish Conquest in 1519. Spanish expedition leader Hernán Cortés landed in Yucatán in 1519 with approximately 630 men (most armed with only a sword and shield). Cortés had actually been removed as the expedition 's commander by the governor of Cuba, Diego Velásquez, but had stolen the boats and left without permission. At the island of Cozumel, Cortés encountered a shipwrecked Spaniard named Gerónimo de Aguilar who joined the expedition and translated between Spanish and Mayan. The expedition then sailed west to Campeche, where after a brief battle with the local army, Cortés was able to negotiate peace through his interpreter, Aguilar. The King of Campeche gave Cortés a second translator, a bilingual Nahua - Maya slave woman named La Malinche (she was known also as Malinalli (maliˈnalːi), Malintzin (maˈlintsin) or Doña Marina (ˈdoɲa maˈɾina)). Aguilar translated from Spanish to Mayan and La Malinche translated from Mayan to Nahuatl. Once Malinche learned Spanish, she became Cortés 's translator for both language and culture, and was a key figure in interactions with Nahua rulers. An important article, "Rethinking Malinche '' by Frances Karttunen examines her role in the conquest and beyond. Cortés then sailed from Campeche to Cempoala, a tributary province of the Aztec Triple Alliance. Nearby, he founded the town of Veracruz where he met with ambassadors from the reigning Mexica emperor, Motecuzoma II. When the ambassadors returned to Tenochtitlan, Cortés went to Cempoala to meet with the local Totonac leaders. 
After the Totonac ruler told Cortés of his various grievances against the Mexica, Cortés convinced the Totonacs to imprison an imperial tribute collector. Cortés subsequently released the tribute collector after persuading him that the move was entirely the Totonacs ' idea and that he had no knowledge of it. Having effectively declared war on the Aztecs, the Totonacs provided Cortés with 20 companies of soldiers for his march to Tlaxcala. At this time several of Cortés 's soldiers attempted to mutiny. When Cortés discovered the plot, he had his ships scuttled and sunk in the harbor to remove any possibility of escaping to Cuba. The Spanish - led Totonac army crossed into Tlaxcala to seek the latter 's alliance against the Aztecs. However, the Tlaxcalan general Xicotencatl the Younger believed them to be hostile, and attacked. After fighting several close battles, Cortés eventually convinced the leaders of Tlaxcala to order their general to stand down. Cortés then secured an alliance with the people of Tlaxcala, and traveled from there to the Basin of Mexico with a smaller company of 5,000 - 6,000 Tlaxcalans and 400 Totonacs, in addition to the Spanish soldiers. During his stay in the city of Cholula, Cortés claims he received word of a planned ambush against the Spanish. In a pre-emptive response, Cortés directed his troops to attack and kill a large number of unarmed Cholulans gathered in the main square of the city. Following the massacre at Cholula, Hernan Cortés and the other Spaniards entered Tenochtitlan, where they were greeted as guests and given quarters in the palace of former emperor Axayacatl. After staying in the city for six weeks, two Spaniards from the group left behind in Veracruz were killed in an altercation with an Aztec lord named Quetzalpopoca. Cortés claims that he used this incident as an excuse to take Motecuzoma prisoner under threat of force. For several months, Motecuzoma continued to run the kingdom as a prisoner of Hernan Cortés. 
Then, in 1520, a second, larger Spanish expedition arrived under the command of Pánfilo de Narváez sent by Diego Velásquez with the goal of arresting Cortés for treason. Before confronting Narváez, Cortés secretly persuaded Narváez 's lieutenants to betray him and join Cortés. While Cortés was away from Tenochtitlan dealing with Narváez, his second in command Pedro de Alvarado massacred a group of Aztec nobility in response to a ritual of human sacrifice honoring Huitzilopochtli. The Aztecs retaliated by attacking the palace where the Spanish were quartered. Cortés returned to Tenochtitlan and fought his way to the palace. He then took Motecuzoma up to the roof of the palace to ask his subjects to stand down. However, by this point the ruling council of Tenochtitlan had voted to depose Motecuzoma and had elected his brother Cuitlahuac as the new emperor. One of the Aztec soldiers struck Motecuzoma in the head with a sling stone, and he died several days later -- although the exact details of his death, particularly who was responsible, are unclear. The Spaniards and their allies, realizing they were vulnerable to the hostile Mexica in Tenochtitlan following Moctezuma 's death, attempted to retreat without detection in what is known as the "Sad Night '' or La Noche Triste. Spaniards and their Indian allies were discovered clandestinely retreating, and then were forced to fight their way out of the city, with heavy loss of life. Some Spaniards lost their lives by drowning, loaded down with gold. They retreated to Tlacopan (now Tacuba) and made their way to Tlaxcala, where they recovered and prepared for the second, successful assault on Tenochtitlan. After this incident, a smallpox outbreak hit Tenochtitlan. As the indigenous of the New World had no previous exposure to smallpox, this outbreak alone killed more than 50 % of the region 's population, including the emperor, Cuitlahuac. 
While the new emperor Cuauhtémoc dealt with the smallpox outbreak, Cortés raised an army of Tlaxcalans, Texcocans, Totonacs, and others discontented with Aztec rule. With a combined army of up to 100,000 warriors, the overwhelming majority of which were indigenous rather than Spanish, Cortés marched back into the Basin of Mexico. Through numerous subsequent battles and skirmishes, he captured the various indigenous city - states or altepetl around the lake shore and surrounding mountains, including the other capitals of the Triple Alliance, Tlacopan and Texcoco. Texcoco in fact had already become a firm ally of the Spaniards, and the city - state subsequently petitioned the Spanish crown for recognition of its services in the conquest, just as Tlaxcala had done. Using boats constructed in Texcoco from parts salvaged from the scuttled ships, Cortés blockaded and laid siege to Tenochtitlan for a period of several months. Eventually, the Spanish - led army assaulted the city both by boat and using the elevated causeways connecting it to the mainland. Although the attackers took heavy casualties, the Aztecs were ultimately defeated. The city of Tenochtitlan was thoroughly destroyed in the process. Cuauhtémoc was captured as he attempted to flee the city. Cortés kept him prisoner and tortured him for a period of several years before finally executing him in 1525. The Aztec Empire was an example of an empire that ruled by indirect means. Like most European empires, it was ethnically very diverse, but unlike most European empires, it was more a system of tributes than a single unitary form of government. In the theoretical framework of imperial systems posited by American historian Alexander J. Motyl, the Aztec empire was an informal type of empire in that the Alliance did not claim supreme authority over its tributary provinces; it merely expected tributes to be paid. The empire was also territorially discontinuous, i.e. 
not all of its dominated territories were connected by land. For example, the southern peripheral zones of Xoconochco were not in immediate contact with the central part of the empire. The hegemonic nature of the Aztec empire can be seen in the fact that generally local rulers were restored to their positions once their city - state was conquered and the Aztecs did not interfere in local affairs as long as the tribute payments were made. Although the form of government is often referred to as an empire, in fact most areas within the empire were organized as city - states (individually known as altepetl in Nahuatl, the language of the Aztecs). These were small polities ruled by a king or tlatoani (literally "speaker '', plural tlatoque) from an aristocratic dynasty. The Early Aztec period was a time of growth and competition among altepeme. Even after the empire was formed in 1428 and began its program of expansion through conquest, the altepetl remained the dominant form of organization at the local level. The efficient role of the altepetl as a regional political unit was largely responsible for the success of the empire 's hegemonic form of control. It should be remembered that the term "Aztec empire '' is a modern one, not one used by the Aztec themselves. The Aztec realm was at its core composed of three Nahuatl - speaking city states in the densely populated Valley of Mexico. Over time, asymmetries of power elevated one of those city states, Tenochtitlan, above the other two. The "Triple Alliance '' came to establish hegemony over much of central Mesoamerica, including areas of great linguistic and cultural diversity. Administration of the empire was performed through largely traditional, indirect means. However, over time something of a nascent bureaucracy may have been beginning to form insofar as the state organization became increasingly centralized. 
Before the reign of Nezahualcoyotl (1429 -- 1472), the Aztec empire operated as a confederation along traditional Mesoamerican lines. Independent altepetl were led by tlatoani (lit., "speakers ''), who supervised village headmen, who in turn supervised groups of households. A typical Mesoamerican confederation placed a Huey Tlatoani (lit., "great speaker '') at the head of several tlatoani. Following Nezahualcoyotl, the Aztec empire followed a somewhat divergent path, with the tlatoani of some recently conquered or otherwise subordinated altepetl being replaced with calpixque stewards charged with collecting tribute on behalf of the Huetlatoani, rather than an old tlatoani simply being replaced with a new one from the same set of local nobility. Yet the Huey tlatoani was not the sole executive. It was the responsibility of the Huey tlatoani to deal with the external issues of empire; the management of tribute, war, diplomacy, and expansion were all under the purview of the Huey tlatoani. It was the role of the Cihuacoatl to govern the city of Tenochtitlan itself. The Cihuacoatl was always a close relative of the Huey tlatoani; Tlacaelel, for example, was the brother of Moctezuma I. Both the title "Cihuacoatl '', which means "female snake '' (it is the name of a Nahua deity), and the role of the position, somewhat analogous to a European Viceroy or Prime Minister, reflect the dualistic nature of Nahua cosmology. Neither the position of Cihuacoatl nor the position of Huetlatoani was priestly, yet both did have important ritual tasks. Those of the former were associated with the "female '' wet season, those of the latter with the "male '' dry season. While the position of Cihuacoatl is best attested in Tenochtitlan, it is known that the position also existed in the nearby altepetl of Atzcapotzalco, Culhuacan, and Tenochtitlan 's ally Texcoco. Despite the apparent lesser status of the position, a Cihuacoatl could prove both influential and powerful, as in the case of Tlacaelel. 
Early in the history of the empire, Tenochtitlan developed a four - member military and advisory Council which assisted the Huey tlatoani in his decision - making: the tlacochcalcatl; the tlaccatecatl; the ezhuahuacatl; and the tlillancalqui. This design not only provided advice for the ruler, it also served to contain ambition on the part of the nobility, as henceforth the Huey Tlatoani could only be selected from the Council. Moreover, the actions of any one member of the Council could easily be blocked by the other three, providing a simple system of checks on the ambitions of higher officials. These four Council members were also generals, members of various military societies. The ranks of the members were not equal, with the tlacochcalcatl and tlaccatecatl having a higher status than the others. These two Councillors were members of the two most prestigious military societies, the cuauhchique ("shorn ones '') and the otontin ("Otomies ''). Traditionally, provinces and altepetl were governed by hereditary tlatoani. As the empire grew, the system evolved further and some tlatoani were replaced by other officials with similar authority. As has already been mentioned, directly appointed stewards (singular calpixqui, plural calpixque) were sometimes imposed on altepetl instead of the selection of provincial nobility to the same position of tlatoani. At the height of empire, the organization of the state into tributary and strategic provinces saw an elaboration of this system. The 38 tributary provinces fell under the supervision of high stewards, or huecalpixque, whose authority extended over the lower - ranking calpixque. These calpixque and huecalpixque were essentially managers of the provincial tribute system, which was overseen and coordinated in the paramount capital of Tenochtitlan not by the huetlatoani, but rather by a separate position altogether: the petlacalcatl. 
On the occasion that a recently conquered altepetl was seen as particularly restive, a military governor, or cuauhtlatoani, was placed at the head of provincial supervision. During the reign of Moctezuma I, the calpixque system was elaborated, with two calpixque assigned per tributary province. One was stationed in the province itself, perhaps for supervising the collection of tribute, and the other in Tenochtitlan, perhaps for supervising storage of tribute. Tribute was drawn from commoners, the macehualtin, and distributed to the nobility, be they ' kings ' (tlatoque), lesser rulers (teteuctin), or provincial nobility (pipiltin). Tribute collection was supervised by the above officials and relied upon the coercive power of the Aztec military, but also upon the cooperation of the pipiltin (the local nobility, who were themselves exempt from tribute and recipients of it) and the hereditary class of merchants known as pochteca. These pochteca had various gradations of rank which granted them certain trading rights, and so were not necessarily pipiltin themselves, yet they played an important role in both the growth and administration of the Aztec tributary system nonetheless. The power, political and economic, of the pochteca was strongly tied to the political and military power of the Aztec nobility and state. In addition to serving as diplomats (teucnenenque, or "travelers of the lord '') and spies in the prelude to conquest, higher - ranking pochteca also served as judges in market plazas and were to a certain degree autonomous corporate groups, having administrative duties within their own estate. Originally, the Aztec empire was a loose alliance between three cities: Tenochtitlan, Texcoco, and the most junior partner, Tlacopan. As such, they were known as the ' Triple Alliance. ' This political form was very common in Mesoamerica, where alliances of city - states were ever fluctuating. 
However, over time it was Tenochtitlan which assumed paramount authority in the alliance, and although each partner city shared spoils of war and rights to regular tribute from the provinces and was governed by its own Huetlatoani, it was Tenochtitlan which became the largest, most powerful, and most influential of the three cities. It was the de facto and acknowledged center of empire. Though they were not described by the Aztec this way, there were essentially two types of provinces: Tributary and Strategic. Strategic provinces were essentially subordinate client states which provided tribute or aid to the Aztec state under "mutual consent ''. Tributary provinces, on the other hand, provided regular tribute to the empire; obligations on the part of Tributary provinces were mandatory rather than consensual. Rulers, be they local teteuctin or tlatoani, or the central Huetlatoani, were seen as representatives of the gods and therefore ruled by divine right. Tlatocayotl, or the principle of rulership, established that this divine right was inherited by descent. Political order was therefore also a cosmic order, and to kill a tlatoani was to transgress that order. For that reason, whenever a tlatoani was killed or otherwise removed from their station, a relative and member of the same bloodline was typically placed in their stead. The establishment of the office of Huetlatoani is understood through the creation of another level of rulership, hueitlatocayotl, standing in superior contrast to the lesser tlatocayotl principle. Expansion of the empire was guided by a militaristic interpretation of Nahua religion, specifically a devout veneration of the sun god, Huitzilopochtli. Militaristic state rituals were performed throughout the year according to a ceremonial calendar of events, rites, and mock battles. The time period they lived in was understood as the Ollintonatiuh, or Sun of Movement, which was believed to be the final age, after which humanity would be destroyed. 
It was under Tlacaelel that Huitzilopochtli assumed his elevated role in the state pantheon; Tlacaelel argued that it was through blood sacrifice that the Sun would be maintained and the end of the world thereby staved off. It was under this new, militaristic interpretation of Huitzilopochtli that Aztec soldiers were encouraged to fight wars and capture enemy soldiers for sacrifice. Though blood sacrifice was common in Mesoamerica, the scale of human sacrifice under the Aztecs was likely unprecedented in the region. The most developed code of law was that of the city - state of Texcoco under its ruler Nezahualcoyotl. It was a formal written code, not merely a collection of customary practices. The sources for knowing about the legal code are colonial - era writings by the Franciscan Toribio de Benavente Motolinia, the Franciscan Fray Juan de Torquemada, and the Texcocan historians Juan Bautista Pomar and Fernando de Alva Cortés Ixtlilxochitl. The law code in Texcoco under Nezahualcoyotl was legalistic, that is, cases were tried by particular types of evidence and the social status of the litigants was disregarded, and it consisted of 80 written laws. These laws called for severe, publicly administered punishments, creating a legal framework of social control. Much less is known about the legal system in Tenochtitlan, which might have been less legalistic or sophisticated than that of Texcoco in this period. It seems to have been established under the reign of Moctezuma I. These laws served to establish and govern relations between the state, classes, and individuals. Punishment was to be meted out solely by state authorities. Nahua mores were enshrined in these laws, criminalizing public acts of homosexuality, drunkenness, and nudity, not to mention more universal proscriptions against theft, murder, and property damage. As stated before, pochteca could serve as judges, often exercising judicial oversight of their own members. 
Likewise, military courts dealt with cases both within the military and, during wartime, outside it. There was an appeal process, with appellate courts standing between local, typically market - place courts, on the provincial level and a supreme court and two special higher appellate courts at Tenochtitlan. One of those two special courts dealt with cases arising within Tenochtitlan, the other with cases originating from outside the capital. The ultimate judicial authority lay in the hands of the Huey tlatoani, who had the right to appoint lesser judges.
viktor wynd museum of curiosities fine art & natural history
The Viktor Wynd Museum of Curiosities, Fine Art & Natural History - Wikipedia The Viktor Wynd Museum of Curiosities, Fine Art & Natural History is a museum and bar in Cambridge Heath, situated in a former call centre on Mare Street in the London Borough of Hackney. It is operated by Viktor Wynd as part of The Last Tuesday Society and was funded on Kickstarter in 2015. The museum collection includes classic curiosities such as hairballs, two - headed lambs and Fiji mermaids. Its art collection spans several centuries, including the largest collection of work by Austin Osman Spare on public display and what is reputed to be the country 's largest collection of work by the Anglo - Mexican surrealist Leonora Carrington. The museum 's natural history collection includes dodo bones and extinct bird feathers, as well as much taxidermy and the skeleton of a giant anteater. It has a section dedicated to the Dandy, including Sebastian Horsley 's nails from his crucifixion and drawings and archive material to do with Stephen Tennant; a collection of human remains including shrunken heads, tribal skulls, dead babies in bottles and parts of pickled prostitutes; tribal art collected in the Congo and New Guinea by the proprietor; fossils; and scientific and medical instruments. It also displays celebrity faecal matter, erotica and condoms used by the Rolling Stones. The contents of the museum are insured for £ 1 million. The museum is generally open to the public but is occasionally hired out for private events. The Museum holds regular exhibitions of artists including Alasdair Gray, Mervyn Peake, Günter Grass, Robin Ironside and the English Surrealists. Coordinates: 51 ° 32 ′ 05 '' N, 0 ° 03 ′ 27 '' W
who was fighting against who in the civil war
American Civil War - Wikipedia Result: Union victory. Union leaders: Abraham Lincoln, Ulysses S. Grant, William T. Sherman, David Farragut, George B. McClellan, Henry Halleck, George Meade. Confederate leaders: Jefferson Davis, Robert E. Lee, J.E. Johnston, G.T. Beauregard, A.S. Johnston †, Braxton Bragg. Strength: 2,200,000 (Union) versus 750,000 -- 1,000,000 (Confederate). Union losses: 110,000 + killed in action / died of wounds, 230,000 + accident / disease deaths, 25,000 -- 30,000 died in Confederate prisons, 365,000 + total dead, 282,000 + wounded, 181,193 captured. Confederate losses: 94,000 + killed in action / died of wounds, 26,000 -- 31,000 died in Union prisons, 290,000 + total dead, 137,000 + wounded, 436,658 captured. The American Civil War (also known by other names) was a war (though a Declaration of War was never issued by Congress) fought in the United States from 1861 to 1865. As a result of the long - standing controversy over slavery, war broke out in April 1861, when Confederate forces attacked Fort Sumter in South Carolina, shortly after U.S. President Abraham Lincoln was inaugurated. The nationalists of the Union proclaimed loyalty to the U.S. Constitution. They faced secessionists of the Confederate States, who advocated for states ' rights to expand slavery. Among the 34 U.S. states in February 1861, seven Southern slave states individually declared their secession from the U.S. to form the Confederate States of America, or the South. The Confederacy grew to include eleven slave states. The Confederacy was never diplomatically recognized by the United States government, nor was it recognized by any foreign country (although the United Kingdom and France granted it belligerent status). The states that remained loyal to the U.S. (including the border states where slavery was legal) were known as the Union or the North. The Union and Confederacy quickly raised volunteer and conscription armies that fought mostly in the South over the course of four years. The Union finally won the war when General Robert E. Lee surrendered to General Ulysses S. 
Grant at the Battle of Appomattox Court House, followed by a series of surrenders by Confederate generals throughout the southern states. Four years of intense combat left 620,000 to 750,000 people dead, more than the number of U.S. military deaths in all other wars combined (at least until approximately the Vietnam War). Much of the South 's infrastructure was destroyed, especially the transportation systems. The Confederacy collapsed, slavery was abolished, and 4 million slaves were freed. The Reconstruction Era (1863 -- 1877) overlapped and followed the war, with the process of restoring national unity, strengthening the national government, and granting civil rights to freed slaves throughout the country. The Civil War is the most studied and written about episode in U.S. history. In the 1860 presidential election, Republicans, led by Abraham Lincoln, supported banning slavery in all the U.S. territories. The Southern states viewed this as a violation of their constitutional rights and as the first step in a grander Republican plan to eventually abolish slavery. The three pro-Union candidates together received an overwhelming 82 % majority of the votes cast nationally: Republican Lincoln 's votes centered in the north, Democrat Stephen A. Douglas ' votes were distributed nationally and Constitutional Unionist John Bell 's votes centered in Tennessee, Kentucky, and Virginia. The Republican Party, dominant in the North, secured a plurality of the popular votes and a majority of the electoral votes nationally, thus Lincoln was constitutionally elected president. He was the first Republican Party candidate to win the presidency. However, before his inauguration, seven slave states with cotton - based economies declared secession and formed the Confederacy. The first six to declare secession had the highest proportions of slaves in their populations, a total of 49 percent. 
Of those states whose legislatures resolved for secession, the first seven voted with split majorities for unionist candidates Douglas and Bell (Georgia with 51 % and Louisiana with 55 %), or with sizable minorities for those unionists (Alabama with 46 %, Mississippi with 40 %, Florida with 38 %, Texas with 25 %, and South Carolina, which cast Electoral College votes without a popular vote for president). Of these, only Texas held a referendum on secession. Eight remaining slave states continued to reject calls for secession. Outgoing Democratic President James Buchanan and the incoming Republicans rejected secession as illegal. Lincoln 's March 4, 1861, inaugural address declared that his administration would not initiate a civil war. Speaking directly to the "Southern States '', he attempted to calm their fears of any threats to slavery, reaffirming, "I have no purpose, directly or indirectly to interfere with the institution of slavery in the United States where it exists. I believe I have no lawful right to do so, and I have no inclination to do so. '' After Confederate forces seized numerous federal forts within territory claimed by the Confederacy, efforts at compromise failed and both sides prepared for war. The Confederates assumed that European countries were so dependent on "King Cotton '' that they would intervene, but none did, and none recognized the new Confederate States of America. Hostilities began on April 12, 1861, when Confederate forces fired upon Fort Sumter. While in the Western Theater the Union made significant permanent gains, in the Eastern Theater the battle was inconclusive from 1861 -- 1862. Later, in 1863, Lincoln issued the Emancipation Proclamation, which made ending slavery a war goal. To the west, by summer 1862 the Union destroyed the Confederate river navy, then much of their western armies, and seized New Orleans. The 1863 Union Siege of Vicksburg split the Confederacy in two at the Mississippi River. In 1863, Robert E. 
Lee 's Confederate incursion north ended at the Battle of Gettysburg. Western successes led to Ulysses S. Grant 's command of all Union armies in 1864. Inflicting an ever - tightening naval blockade of Confederate ports, the Union marshaled the resources and manpower to attack the Confederacy from all directions, leading to the fall of Atlanta to William T. Sherman and his march to the sea. The last significant battles raged around the Siege of Petersburg. Lee 's escape attempt ended with his surrender at Appomattox Court House, on April 9, 1865. While the military war was coming to an end, the political reintegration of the nation was to take another 12 years, known as the Reconstruction Era. The American Civil War was one of the earliest true industrial wars. Railroads, the telegraph, steamships and iron - clad ships, and mass - produced weapons were employed extensively. The mobilization of civilian factories, mines, shipyards, banks, transportation and food supplies all foreshadowed the impact of industrialization in World War I, World War II and subsequent conflicts. It remains the deadliest war in American history. From 1861 to 1865, it is estimated that 620,000 to 750,000 soldiers died, along with an undetermined number of civilians. By one estimate, the war claimed the lives of 10 percent of all Northern males 20 -- 45 years old, and 30 percent of all Southern white males aged 18 -- 40. The causes of secession were complex and have been controversial since the war began, but most academic scholars identify slavery as a central cause of the war. James C. Bradford wrote that the issue has been further complicated by historical revisionists, who have tried to offer a variety of reasons for the war. Slavery was the central source of escalating political tension in the 1850s. The Republican Party was determined to prevent any spread of slavery, and many Southern leaders had threatened secession if the Republican candidate, Lincoln, won the 1860 election. 
After Lincoln won, many Southern leaders felt that disunion was their only option, fearing that the loss of representation would hamper their ability to promote pro-slavery acts and policies. Slavery was a major cause of disunion. Although there were opposing views even in the Union States, most northern soldiers were largely indifferent on the subject of slavery, while Confederates fought the war largely to protect a southern society of which slavery was an integral part. From the anti-slavery perspective, the issue was primarily about whether the system of slavery was an anachronistic evil that was incompatible with republicanism. The strategy of the anti-slavery forces was containment -- to stop the expansion and thus put slavery on a path to gradual extinction. The slave - holding interests in the South denounced this strategy as infringing upon their Constitutional rights. Southern whites believed that the emancipation of slaves would destroy the South 's economy, due to the large amount of capital invested in slaves and fears of integrating the ex-slave black population. In particular, southerners feared a repeat of "the horrors of Santo Domingo '', in which nearly all white people -- including men, women, children, and even many sympathetic to abolition -- were killed after the successful slave revolt in Haiti. Historian Thomas Fleming points to the historical phrase "a disease in the public mind '' used by critics of this idea, and proposes it contributed to the segregation in the Jim Crow era following emancipation. These fears were exacerbated by the recent attempts of John Brown to instigate an armed slave rebellion in the South. Slavery was illegal in much of the North, having been outlawed in the late 18th and early 19th centuries. It was also fading in the border states and in Southern cities, but it was expanding in the highly profitable cotton districts of the rural South and Southwest. 
Subsequent writers on the American Civil War looked to several factors explaining the geographic divide. Sectionalism resulted from the different economies, social structures, customs and political values of the North and South. Regional tensions came to a head during the War of 1812, resulting in the Hartford Convention, which manifested Northern dissatisfaction with a foreign trade embargo that affected the industrial North disproportionately, the Three - Fifths Compromise, dilution of Northern power by new states, and a succession of Southern Presidents. Sectionalism increased steadily between 1800 and 1860 as the North, which phased slavery out of existence, industrialized, urbanized, and built prosperous farms, while the deep South concentrated on plantation agriculture based on slave labor, together with subsistence farming for poor freedmen. In the 1840s and 50s, the issue of accepting slavery (in the guise of rejecting slave - owning bishops and missionaries) split the nation 's largest religious denominations (the Methodist, Baptist and Presbyterian churches) into separate Northern and Southern denominations. Historians have debated whether economic differences between the industrial Northeast and the agricultural South helped cause the war. Most historians now disagree with the economic determinism of historian Charles A. Beard in the 1920s and emphasize that Northern and Southern economies were largely complementary. While socially different, the sections economically benefited each other. Slave owners preferred low - cost manual labor with no mechanization. Northern manufacturing interests supported tariffs and protectionism while southern planters demanded free trade. The Democrats in Congress, controlled by Southerners, wrote the tariff laws in the 1830s, 1840s, and 1850s, and kept reducing rates so that the 1857 rates were the lowest since 1816. The Republicans called for an increase in tariffs in the 1860 election. 
The increases were only enacted in 1861 after Southerners resigned their seats in Congress. The tariff issue was a Northern grievance. However, neo-Confederate writers have claimed it as a Southern grievance. In 1860 -- 61 none of the groups that proposed compromises to head off secession raised the tariff issue. Pamphleteers North and South rarely mentioned the tariff. The South argued that each state had the right to secede -- leave the Union -- at any time, that the Constitution was a "compact '' or agreement among the states. Northerners (including President Buchanan) rejected that notion as opposed to the will of the Founding Fathers who said they were setting up a perpetual union. Historian James McPherson writes concerning states ' rights and other non-slavery explanations: While one or more of these interpretations remain popular among the Sons of Confederate Veterans and other Southern heritage groups, few professional historians now subscribe to them. Of all these interpretations, the states ' - rights argument is perhaps the weakest. It fails to ask the question, states ' rights for what purpose? States ' rights, or sovereignty, was always more a means than an end, an instrument to achieve a certain goal more than a principle. Between 1803 and 1854, the United States achieved a vast expansion of territory through purchase, negotiation, and conquest. At first, the new states carved out of these territories entering the union were apportioned equally between slave and free states. It was over territories west of the Mississippi that the proslavery and antislavery forces collided. With the conquest of northern Mexico west to California in 1848, slaveholding interests looked forward to expanding into these lands and perhaps Cuba and Central America as well. Northern "free soil '' interests vigorously sought to curtail any further expansion of slave territory. 
The Compromise of 1850 over California balanced a free - soil state with stronger fugitive slave laws for a political settlement after four years of strife in the 1840s. But the states admitted following California were all free: Minnesota (1858), Oregon (1859) and Kansas (1861). In the southern states the question of the territorial expansion of slavery westward again became explosive. Both the South and the North drew the same conclusion: "The power to decide the question of slavery for the territories was the power to determine the future of slavery itself. '' By 1860, four doctrines had emerged to answer the question of federal control in the territories, and they all claimed they were sanctioned by the Constitution, implicitly or explicitly. The first of these "conservative '' theories, represented by the Constitutional Union Party, argued that the Missouri Compromise apportionment of territory north for free soil and south for slavery should become a Constitutional mandate. The Crittenden Compromise of 1860 was an expression of this view. The second doctrine of Congressional preeminence, championed by Abraham Lincoln and the Republican Party, insisted that the Constitution did not bind legislators to a policy of balance -- that slavery could be excluded in a territory as it was done in the Northwest Ordinance of 1787 at the discretion of Congress, thus Congress could restrict human bondage, but never establish it. The Wilmot Proviso announced this position in 1846. Senator Stephen A. Douglas proclaimed the doctrine of territorial or "popular '' sovereignty -- which asserted that the settlers in a territory had the same rights as states in the Union to establish or disestablish slavery as a purely local matter. The Kansas -- Nebraska Act of 1854 legislated this doctrine. 
In Kansas Territory, years of pro- and anti-slavery violence and political conflict erupted; the congressional House of Representatives voted to admit Kansas as a free state in early 1860, but its admission in the Senate was delayed until January 1861, after the 1860 elections when southern senators began to leave. The fourth theory, advocated by Mississippi Senator Jefferson Davis, was one of state sovereignty ("states ' rights ''), also known as the "Calhoun doctrine '', named after the South Carolinian political theorist and statesman John C. Calhoun. Rejecting the arguments for federal authority or self - government, state sovereignty would empower states to promote the expansion of slavery as part of the federal union under the U.S. Constitution. "States ' rights '' was an ideology formulated and applied as a means of advancing slave state interests through federal authority. As historian Thomas L. Krannawitter points out, the "Southern demand for federal slave protection represented a demand for an unprecedented expansion of federal power. '' These four doctrines comprised the major ideologies presented to the American public on the matters of slavery, the territories and the U.S. Constitution prior to the 1860 presidential election. Nationalism was a powerful force in the early 19th century, with famous spokesmen such as Andrew Jackson and Daniel Webster. While practically all Northerners supported the Union, Southerners were split between those loyal to the entire United States (called "unionists '') and those loyal primarily to the southern region and then the Confederacy. C. Vann Woodward said of the latter group, A great slave society... had grown up and miraculously flourished in the heart of a thoroughly bourgeois and partly puritanical republic. It had renounced its bourgeois origins and elaborated and painfully rationalized its institutional, legal, metaphysical, and religious defenses... When the crisis came it chose to fight. 
It proved to be the death struggle of a society, which went down in ruins. Perceived insults to Southern collective honor included the enormous popularity of Uncle Tom 's Cabin (1852) and the actions of abolitionist John Brown in trying to incite a slave rebellion in 1859. While the South moved towards a Southern nationalism, leaders in the North were also becoming more nationally minded, and they rejected any notion of splitting the Union. The Republican national electoral platform of 1860 warned that Republicans regarded disunion as treason and would not tolerate it: "We denounce those threats of disunion... as denying the vital principles of a free government, and as an avowal of contemplated treason, which it is the imperative duty of an indignant people sternly to rebuke and forever silence. '' The South ignored the warnings: Southerners did not realize how ardently the North would fight to hold the Union together. The election of Abraham Lincoln in November 1860 was the final trigger for secession. Efforts at compromise, including the "Corwin Amendment '' and the "Crittenden Compromise '', failed. Southern leaders feared that Lincoln would stop the expansion of slavery and put it on a course toward extinction. The slave states, which had already become a minority in the House of Representatives, were now facing a future as a perpetual minority in the Senate and Electoral College against an increasingly powerful North. Before Lincoln took office in March 1861, seven slave states had declared their secession and joined to form the Confederacy. According to Lincoln, the people had shown that they could be successful in establishing and administering a republic, but a third challenge faced the nation: maintaining a republic based on the people 's vote against an attempt to overthrow it. The election of Lincoln caused the legislature of South Carolina to call a state convention to consider secession. 
Prior to the war, South Carolina did more than any other Southern state to advance the notion that a state had the right to nullify federal laws, and even to secede from the United States. The convention thus summoned voted unanimously to secede on December 20, 1860, and adopted the "Declaration of the Immediate Causes Which Induce and Justify the Secession of South Carolina from the Federal Union ''. It argued for states ' rights for slave owners in the South, but contained a complaint about states ' rights in the North in the form of opposition to the Fugitive Slave Act, claiming that Northern states were not fulfilling their federal obligations under the Constitution. The "cotton states '' of Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas followed suit, seceding in January and February 1861. Among the ordinances of secession passed by the individual states, those of three -- Texas, Alabama, and Virginia -- specifically mentioned the plight of the "slaveholding states '' at the hands of northern abolitionists. The rest made no mention of the slavery issue, and were often brief announcements of the dissolution of ties by the legislatures. However, at least four states -- South Carolina, Mississippi, Georgia, and Texas -- also passed lengthy and detailed explanations of their causes for secession, all of which laid the blame squarely on the movement to abolish slavery and that movement 's influence over the politics of the northern states. The southern states believed slaveholding was a constitutional right because of the Fugitive Slave Clause of the Constitution. These states agreed to form a new federal government, the Confederate States of America, on February 4, 1861. They took control of federal forts and other properties within their boundaries with little resistance from outgoing President James Buchanan, whose term ended on March 4, 1861. 
Buchanan said that the Dred Scott decision was proof that the South had no reason for secession, and that the Union "was intended to be perpetual '', but that "The power by force of arms to compel a State to remain in the Union '' was not among the "enumerated powers granted to Congress ''. One quarter of the U.S. Army -- the entire garrison in Texas -- was surrendered in February 1861 to state forces by its commanding general, David E. Twiggs, who then joined the Confederacy. As Southerners resigned their seats in the Senate and the House, Republicans were able to pass bills for projects that had been blocked by Southern Senators before the war, including the Morrill Tariff, land grant colleges (the Morrill Act), a Homestead Act, a transcontinental railroad (the Pacific Railway Acts), the National Banking Act and the authorization of United States Notes by the Legal Tender Act of 1862. The Revenue Act of 1861 introduced the income tax to help finance the war. On December 18, 1860, the Crittenden Compromise was proposed to re-establish the Missouri Compromise line by constitutionally banning slavery in territories to the north of the line while guaranteeing it to the south. The adoption of this compromise likely would have prevented the secession of every southern state apart from South Carolina, but Lincoln and the Republicans rejected it. It was then proposed to hold a national referendum on the compromise. The Republicans again rejected the idea, although a majority of both Northerners and Southerners would have voted in favor of it. A pre-war Peace Conference of 1861 met in Washington in February and proposed a solution similar to that of the Crittenden Compromise; it was rejected by Congress. The Republicans proposed an alternative compromise to not interfere with slavery where it existed, but the South regarded it as insufficient. 
Nonetheless, the remaining eight slave states rejected pleas to join the Confederacy following a two - to - one no - vote in Virginia 's First Secessionist Convention on April 4, 1861. On March 4, 1861, Abraham Lincoln was sworn in as President. In his inaugural address, he argued that the Constitution was a more perfect union than the earlier Articles of Confederation and Perpetual Union, that it was a binding contract, and called any secession "legally void ''. He had no intent to invade Southern states, nor did he intend to end slavery where it existed, but said that he would use force to maintain possession of Federal property. The government would make no move to recover post offices, and if resisted, mail delivery would end at state lines. Where popular conditions did not allow peaceful enforcement of Federal law, U.S. marshals and judges would be withdrawn. No mention was made of bullion lost from U.S. mints in Louisiana, Georgia, and North Carolina. He stated that it would be U.S. policy to only collect import duties at its ports; there could be no serious injury to the South to justify armed revolution during his administration. His speech closed with a plea for restoration of the bonds of union, famously calling on "the mystic chords of memory '' binding the two regions. The South sent delegations to Washington and offered to pay for the federal properties and enter into a peace treaty with the United States. Lincoln rejected any negotiations with Confederate agents because he claimed the Confederacy was not a legitimate government, and that making any treaty with it would be tantamount to recognition of it as a sovereign government. Secretary of State William Seward, who at the time saw himself as the real governor or "prime minister '' behind the throne of the inexperienced Lincoln, engaged in unauthorized and indirect negotiations that failed. 
President Lincoln was determined to hold all remaining Union - occupied forts in the Confederacy: Fort Monroe in Virginia; Fort Pickens, Fort Jefferson, and Fort Taylor in Florida; and Fort Sumter, located in the middle of the harbor of Charleston, South Carolina, at the cockpit of secession. Its garrison had recently moved there to avoid incidents with local militias in the streets of the city. Lincoln told Maj. Anderson to hold on until fired upon. Jefferson Davis ordered the surrender of the fort. Anderson gave a conditional reply that the Confederate government rejected, and Davis ordered General P.G.T. Beauregard to attack the fort before a relief expedition could arrive. Beauregard bombarded Fort Sumter on April 12 -- 13, forcing its capitulation. The attack on Fort Sumter rallied the North to the defense of American nationalism. Historian Allan Nevins said: Union leaders incorrectly assumed that only a minority of Southerners were in favor of secession and that there were large numbers of southern Unionists that could be counted on. Had Northerners realized that most Southerners really did favor secession, they might have hesitated at attempting the enormous task of conquering a united South. Lincoln called on all the states to send forces to recapture the fort and other federal properties. With the scale of the rebellion apparently small so far, Lincoln called for only 75,000 volunteers for 90 days. The governor of Massachusetts had state regiments on trains headed south the next day. In western Missouri, local secessionists seized Liberty Arsenal. On May 3, 1861, Lincoln called for an additional 42,000 volunteers for a period of three years. Four states in the middle and upper South had repeatedly rejected Confederate overtures, but now Virginia, Tennessee, Arkansas, and North Carolina refused to send forces against their neighbors, declared their secession, and joined the Confederacy. 
To reward Virginia, the Confederate capital was moved to Richmond. Maryland, Delaware, Missouri, and Kentucky were slave states that were opposed to both secession and coercing the South. West Virginia then joined them as an additional border state after it separated from Virginia and became a state of the Union in 1863. Maryland 's territory surrounded the United States ' capital of Washington, DC and could cut it off from the North. It had numerous anti-Lincoln officials who tolerated anti-army rioting in Baltimore and the burning of bridges, both aimed at hindering the passage of troops to the South. Maryland 's legislature voted overwhelmingly (53 -- 13) to stay in the Union, but also rejected hostilities with its southern neighbors, voting to close Maryland 's rail lines to prevent them from being used for war. Lincoln responded by establishing martial law and unilaterally suspending habeas corpus in Maryland, along with sending in militia units from the North. Lincoln rapidly took control of Maryland and the District of Columbia by arresting many prominent figures, including one - third of the members of the Maryland General Assembly on the day it reconvened. All were held without trial, ignoring a ruling by the Chief Justice of the U.S. Supreme Court Roger Taney, a Maryland native, that only Congress (and not the president) could suspend habeas corpus (Ex parte Merryman). Indeed, federal troops imprisoned a prominent Baltimore newspaper editor, Frank Key Howard, Francis Scott Key 's grandson, after he criticized Lincoln in an editorial for ignoring the Supreme Court Chief Justice 's ruling. In Missouri, an elected convention on secession voted decisively to remain within the Union. When pro-Confederate Governor Claiborne F. Jackson called out the state militia, it was attacked by federal forces under General Nathaniel Lyon, who chased the governor and the rest of the State Guard to the southwestern corner of the state (see also: Missouri secession). 
In the resulting vacuum, the convention on secession reconvened and took power as the Unionist provisional government of Missouri. Kentucky did not secede; for a time, it declared itself neutral. When Confederate forces entered the state in September 1861, neutrality ended and the state reaffirmed its Union status, while trying to maintain slavery. During a brief invasion by Confederate forces, Confederate sympathizers organized a secession convention, inaugurated a governor, and gained recognition from the Confederacy. The rebel government soon went into exile and never controlled Kentucky. After Virginia 's secession, a Unionist government in Wheeling asked 48 counties to vote on an ordinance to create a new state on October 24, 1861. On a voter turnout of 34 percent, the statehood bill was approved with 96 percent in favor. The inclusion of 24 secessionist counties in the state and the ensuing guerrilla war engaged about 40,000 Federal troops for much of the war. Congress admitted West Virginia to the Union on June 20, 1863. West Virginia provided about 20,000 -- 22,000 soldiers to both the Confederacy and the Union. A Unionist secession attempt occurred in East Tennessee, but was suppressed by the Confederacy, which arrested over 3,000 men suspected of being loyal to the Union. They were held without trial. The Civil War was a contest marked by the ferocity and frequency of battle. Over four years, 237 named battles were fought, as were many more minor actions and skirmishes, which were often characterized by their bitter intensity and high casualties. In his book The American Civil War, John Keegan writes that "The American Civil War was to prove one of the most ferocious wars ever fought ''. Without geographic objectives, the only target for each side was the enemy 's soldier. As the first seven states began organizing a Confederacy in Montgomery, the entire U.S. army numbered 16,000. However, Northern governors had begun to mobilize their militias. 
The Confederate Congress authorized the new nation up to 100,000 troops sent by governors as early as February. By May, Jefferson Davis was pushing for 100,000 men under arms for one year or the duration, and that was answered in kind by the U.S. Congress. In the first year of the war, both sides had far more volunteers than they could effectively train and equip. After the initial enthusiasm faded, reliance on the cohort of young men who came of age every year and wanted to join was not enough. Both sides used a draft law -- conscription -- as a device to encourage or force volunteering; relatively few were actually drafted and served. The Confederacy passed a draft law in April 1862 for young men aged 18 to 35; overseers of slaves, government officials, and clergymen were exempt. The U.S. Congress followed in July, authorizing a militia draft within a state when it could not meet its quota with volunteers. European immigrants joined the Union Army in large numbers, including 177,000 born in Germany and 144,000 born in Ireland. When the Emancipation Proclamation went into effect in January 1863, ex-slaves were energetically recruited by the states, and used to meet the state quotas. States and local communities offered higher and higher cash bonuses for white volunteers. Congress tightened the law in March 1863. Men selected in the draft could provide substitutes or, until mid-1864, pay commutation money. Many eligibles pooled their money to cover the cost of anyone drafted. Families used the substitute provision to select which man should go into the army and which should stay home. There was much evasion and overt resistance to the draft, especially in Catholic areas. The great draft riot in New York City in July 1863 involved Irish immigrants who had been signed up as citizens to swell the vote of the city 's Democratic political machine, not realizing it made them liable for the draft. 
Of the 168,649 men procured for the Union through the draft, 117,986 were substitutes, leaving only 50,663 who had their personal services conscripted. In both the North and South, the draft laws were highly unpopular. In the North, some 120,000 men evaded conscription, many of them fleeing to Canada, and another 280,000 soldiers deserted during the war. At least 100,000 Southerners deserted, or about 10 percent. In the South, many men deserted temporarily to take care of their distressed families, then returned to their units. In the North, "bounty jumpers '' enlisted to get the generous bonus, deserted, then went back to a second recruiting station under a different name to sign up again for a second bonus; 141 were caught and executed. From a tiny frontier force in 1860, the Union and Confederate armies had grown into the "largest and most efficient armies in the world '' within a few years. European observers at the time dismissed them as amateur and unprofessional, but British historian John Keegan 's assessment is that each outmatched the French, Prussian and Russian armies of the time, and but for the Atlantic, would have threatened any of them with defeat. Perman and Taylor (2010) say that historians are of two minds on why millions of men seemed so eager to fight, suffer and die over four years: Some historians emphasize that Civil War soldiers were driven by political ideology, holding firm beliefs about the importance of liberty, Union, or state rights, or about the need to protect or to destroy slavery. Others point to less overtly political reasons to fight, such as the defense of one 's home and family, or the honor and brotherhood to be preserved when fighting alongside other men. Most historians agree that no matter what a soldier thought about when he went into the war, the experience of combat affected him profoundly and sometimes altered his reasons for continuing the fight. At the start of the Civil War, a system of paroles operated. 
Captives agreed not to fight until they were officially exchanged. Meanwhile, they were held in camps run by their own army where they were paid but not allowed to perform any military duties. The system of exchanges collapsed in 1863 when the Confederacy refused to exchange black prisoners. After that, about 56,000 of the 409,000 POWs died in prisons during the war, accounting for nearly 10 percent of the conflict 's fatalities. The small U.S. Navy of 1861 was rapidly enlarged to 6,000 officers and 45,000 men in 1865, with 671 vessels, having a tonnage of 510,396. Its mission was to blockade Confederate ports, take control of the river system, defend against Confederate raiders on the high seas, and be ready for a possible war with the British Royal Navy. Meanwhile, the main riverine war was fought in the West, where a series of major rivers gave access to the Confederate heartland, if the U.S. Navy could take control. In the East, the Navy supplied and moved army forces about, and occasionally shelled Confederate installations. By early 1861, General Winfield Scott had devised the Anaconda Plan to win the war with as little bloodshed as possible. Scott argued that a Union blockade of the main ports would weaken the Confederate economy. Lincoln adopted parts of the plan, but he overruled Scott 's caution about 90 - day volunteers. Public opinion, however, demanded an immediate attack by the army to capture Richmond. In April 1861, Lincoln announced the Union blockade of all Southern ports; commercial ships could not get insurance and regular traffic ended. The South blundered in embargoing cotton exports in 1861 before the blockade was effective; by the time they realized the mistake, it was too late. "King Cotton '' was dead, as the South could export less than 10 percent of its cotton. The blockade shut down the ten Confederate seaports with railheads that moved almost all the cotton, especially New Orleans, Mobile, and Charleston. 
By June 1861, warships were stationed off the principal Southern ports, and a year later nearly 300 ships were in service. The Civil War occurred during the early stages of the industrial revolution, and many naval innovations consequently emerged during this time, most notably the advent of the ironclad warship. It began when the Confederacy, knowing it had to meet or match the Union 's naval superiority, responded to the Union blockade by building or converting more than 130 vessels, including twenty - six ironclads and floating batteries. Only half of these saw active service. Many were equipped with ram bows, creating "ram fever '' among Union squadrons wherever they threatened. But in the face of overwhelming Union superiority and the Union 's own ironclad warships, they were unsuccessful. The Confederacy experimented with a submarine, which did not work well, and with building an ironclad ship, the CSS Virginia, which was based on rebuilding a sunken Union ship, the Merrimack. On its first foray on March 8, 1862, the Virginia inflicted significant damage to the Union 's wooden fleet, but the next day the first Union ironclad, the USS Monitor, arrived to challenge it in the Chesapeake Bay. The resulting three - hour battle between the ironclads was a draw, but it marked the worldwide transition to ironclad warships. Not long after the battle, the Confederacy was forced to scuttle the Virginia to prevent its capture, while the Union built many copies of the Monitor. Lacking the technology and infrastructure to build effective warships, the Confederacy attempted to obtain warships from Britain. British investors built small, fast, steam - driven blockade runners that traded arms and luxuries brought in from Britain through Bermuda, Cuba, and the Bahamas in return for high - priced cotton. Many of the ships were designed for speed and were so small that only a small amount of cotton went out. 
When the Union Navy seized a blockade runner, the ship and cargo were condemned as a prize of war and sold, with the proceeds given to the Navy sailors; the captured crewmen were mostly British and they were simply released. The Southern economy nearly collapsed during the war. There were multiple reasons for this: the severe deterioration of food supplies, especially in cities, the failure of Southern railroads, the loss of control of the main rivers, foraging by Northern armies, and the seizure of animals and crops by Confederate armies. Most historians agree that the blockade was a major factor in ruining the Confederate economy; however, Wise argues that the blockade runners provided just enough of a lifeline to allow Lee to continue fighting for additional months, thanks to fresh supplies of 400,000 rifles, lead, blankets, and boots that the homefront economy could no longer supply. Surdam argues that the blockade was a powerful weapon that eventually ruined the Southern economy, at the cost of few lives in combat. Practically the entire Confederate cotton crop was useless (although it was sold to Union traders), costing the Confederacy its main source of income. Critical imports were scarce and the coastal trade was largely ended as well. The measure of the blockade 's success was not the few ships that slipped through, but the thousands that never tried it. Merchant ships owned in Europe could not get insurance and were too slow to evade the blockade; they simply stopped calling at Confederate ports. To fight an offensive war, the Confederacy purchased ships from Britain, converted them to warships, and raided American merchant ships in the Atlantic and Pacific oceans. Insurance rates skyrocketed and the American flag virtually disappeared from international waters. However, the same ships were reflagged with European flags and continued unmolested. After the war, the U.S. demanded that Britain pay for the damage done, and Britain paid the U.S. 
$15 million in 1871. The 1862 Union strategy called for simultaneous advances along four axes: Ulysses Grant used river transport and Andrew Foote 's gunboats of the Western Flotilla to threaten the Confederacy 's "Gibraltar of the West '' at Columbus, Kentucky. Though rebuffed at Belmont, Grant cut off Columbus. The Confederates, lacking their own gunboats, were forced to retreat, and the Union took control of western Kentucky and opened Tennessee in March 1862, taking control of the Tennessee and Cumberland Rivers. In addition to ocean - going warships coming up the Mississippi, the Union Navy used timberclads, tinclads, and armored gunboats. Shipyards at Cairo, Illinois, and St. Louis built new boats or modified steamboats for action. They took control of the Red, Tennessee, Cumberland, Mississippi, and Ohio rivers after victories at Fort Henry (February 6, 1862) and Fort Donelson (February 11 to 16, 1862), and supplied Grant 's forces as he moved into Tennessee. At Shiloh (Pittsburg Landing), in Tennessee in April 1862, the Confederates made a surprise attack that pushed Union forces against the river as night fell. Overnight, the Navy landed additional reinforcements, and Grant counter-attacked. Grant and the Union won a decisive victory -- the first battle with the high casualty rates that would repeat over and over. Memphis fell to Union forces on June 6, 1862, and became a key base for further advances south along the Mississippi River. On April 24, 1862, U.S. Naval forces under Farragut ran past Confederate defenses south of New Orleans. Confederate forces abandoned the city, giving the Union a critical anchor in the deep South. Naval forces assisted Grant in the long, complex Vicksburg Campaign that resulted in the Confederates surrendering at Vicksburg, Mississippi in July 1863, and in the Union fully controlling the Mississippi River soon after. In one of the first highly visible battles, a march by Union troops under the command of Maj. Gen. 
Irvin McDowell on the Confederate forces near Washington was repulsed. Maj. Gen. George B. McClellan took command of the Union Army of the Potomac on July 26 (he was briefly general - in - chief of all the Union armies, but was subsequently relieved of that post in favor of Maj. Gen. Henry W. Halleck), and the war began in earnest in 1862. Upon the strong urging of President Lincoln to begin offensive operations, McClellan attacked Virginia in the spring of 1862 by way of the peninsula between the York River and James River, southeast of Richmond. Although McClellan 's army reached the gates of Richmond in the Peninsula Campaign, Confederate General Joseph E. Johnston halted its advance at the Battle of Seven Pines, then General Robert E. Lee and top subordinates James Longstreet and Stonewall Jackson defeated McClellan in the Seven Days Battles and forced his retreat. The Northern Virginia Campaign, which included the Second Battle of Bull Run, ended in yet another victory for the South. McClellan resisted General - in - Chief Halleck 's orders to send reinforcements to John Pope 's Union Army of Virginia, which made it easier for Lee 's Confederates to defeat twice the number of combined enemy troops. Emboldened by Second Bull Run, the Confederacy made its first invasion of the North. General Lee led 45,000 men of the Army of Northern Virginia across the Potomac River into Maryland on September 5. Lincoln then restored Pope 's troops to McClellan. McClellan and Lee fought at the Battle of Antietam near Sharpsburg, Maryland, on September 17, 1862, the bloodiest single day in United States military history. Lee 's army, checked at last, returned to Virginia before McClellan could destroy it. Antietam is considered a Union victory because it halted Lee 's invasion of the North and provided an opportunity for Lincoln to announce his Emancipation Proclamation. When the cautious McClellan failed to follow up on Antietam, he was replaced by Maj. Gen. Ambrose Burnside. 
Burnside was soon defeated at the Battle of Fredericksburg on December 13, 1862, when more than 12,000 Union soldiers were killed or wounded during repeated futile frontal assaults against Marye 's Heights. After the battle, Burnside was replaced by Maj. Gen. Joseph Hooker. Hooker, too, proved unable to defeat Lee 's army; despite outnumbering the Confederates by more than two to one, he was humiliated in the Battle of Chancellorsville in May 1863. Gen. Stonewall Jackson was shot in the arm by accidental friendly fire during the battle and subsequently died of complications. Gen. Hooker was replaced by Maj. Gen. George Meade during Lee 's second invasion of the North, in June. Meade defeated Lee at the Battle of Gettysburg (July 1 to 3, 1863). This was the bloodiest battle of the war, and has been called the war 's turning point. Pickett 's Charge on July 3 is often considered the high - water mark of the Confederacy because it signaled the collapse of serious Confederate threats of victory. Lee 's army suffered 28,000 casualties (versus Meade 's 23,000). However, Lincoln was angry that Meade failed to intercept Lee 's retreat, and after Meade 's inconclusive fall campaign, Lincoln turned to the Western Theater for new leadership. At the same time, the Confederate stronghold of Vicksburg surrendered, giving the Union control of the Mississippi River, permanently isolating the western Confederacy, and producing the new leader Lincoln needed, Ulysses S. Grant. While the Confederate forces had numerous successes in the Eastern Theater, they were defeated many times in the West. They were driven from Missouri early in the war as a result of the Battle of Pea Ridge. Leonidas Polk 's invasion of Columbus ended Kentucky 's policy of neutrality and turned it against the Confederacy. Nashville and central Tennessee fell to the Union early in 1862, leading to attrition of local food supplies and livestock and a breakdown in social organization. 
The Mississippi was opened to Union traffic to the southern border of Tennessee with the taking of Island No. 10 and New Madrid, Missouri, and then Memphis, Tennessee. In April 1862, the Union Navy captured New Orleans, which allowed Union forces to begin moving up the Mississippi. Only the fortress city of Vicksburg, Mississippi, prevented Union control of the entire river. General Braxton Bragg 's second Confederate invasion of Kentucky ended with a meaningless victory over Maj. Gen. Don Carlos Buell at the Battle of Perryville, although Bragg was forced to end his attempt at invading Kentucky and retreat due to lack of support for the Confederacy in that state. Bragg was narrowly defeated by Maj. Gen. William Rosecrans at the Battle of Stones River in Tennessee. The one clear Confederate victory in the West was the Battle of Chickamauga. Bragg, reinforced by Lt. Gen. James Longstreet 's corps (from Lee 's army in the east), defeated Rosecrans, despite the heroic defensive stand of Maj. Gen. George Henry Thomas. Rosecrans retreated to Chattanooga, which Bragg then besieged. The Union 's key strategist and tactician in the West was Ulysses S. Grant, who won victories at Forts Henry and Donelson (by which the Union seized control of the Tennessee and Cumberland Rivers); the Battle of Shiloh; and the Battle of Vicksburg, which cemented Union control of the Mississippi River and is considered one of the turning points of the war. Grant marched to the relief of Rosecrans and defeated Bragg at the Third Battle of Chattanooga, driving Confederate forces out of Tennessee and opening a route to Atlanta and the heart of the Confederacy. Extensive guerrilla warfare characterized the trans - Mississippi region, as the Confederacy lacked the troops and the logistics to support regular armies that could challenge Union control. Roving Confederate bands such as Quantrill 's Raiders terrorized the countryside, striking both military installations and civilian settlements. 
The "Sons of Liberty '' and "Order of the American Knights '' attacked pro-Union people, elected officeholders, and unarmed uniformed soldiers. These partisans could not be entirely driven out of the state of Missouri until an entire regular Union infantry division was engaged. By 1864, these violent activities harmed the nationwide anti-war movement organizing against the re-election of Lincoln. Missouri not only stayed in the Union, but Lincoln took 70 percent of the vote for re-election. Numerous small - scale military actions south and west of Missouri sought to control Indian Territory and New Mexico Territory for the Union. The Union repulsed Confederate incursions into New Mexico in 1862, and the exiled Arizona government withdrew into Texas. In the Indian Territory, civil war broke out within tribes. About 12,000 Indian warriors fought for the Confederacy, and smaller numbers for the Union. The most prominent Cherokee was Brigadier General Stand Watie, the last Confederate general to surrender. After the fall of Vicksburg in July 1863, General Kirby Smith in Texas was informed by Jefferson Davis that he could expect no further help from east of the Mississippi River. Although he lacked resources to beat Union armies, he built up a formidable arsenal at Tyler, along with his own Kirby Smithdom economy, a virtual "independent fiefdom '' in Texas, including railroad construction and international smuggling. The Union in turn did not directly engage him. Its 1864 Red River Campaign to take Shreveport, Louisiana was a failure and Texas remained in Confederate hands throughout the war. At the beginning of 1864, Lincoln made Grant commander of all Union armies. Grant made his headquarters with the Army of the Potomac, and put Maj. Gen. William Tecumseh Sherman in command of most of the western armies. 
Grant understood the concept of total war and believed, along with Lincoln and Sherman, that only the utter defeat of Confederate forces and their economic base would end the war. This was total war not in killing civilians but rather in taking provisions and forage and destroying homes, farms, and railroads, that Grant said "would otherwise have gone to the support of secession and rebellion. This policy I believe exercised a material influence in hastening the end. '' Grant devised a coordinated strategy that would strike at the entire Confederacy from multiple directions. Generals George Meade and Benjamin Butler were ordered to move against Lee near Richmond, General Franz Sigel (and later Philip Sheridan) were to attack the Shenandoah Valley, General Sherman was to capture Atlanta and march to the sea (the Atlantic Ocean), Generals George Crook and William W. Averell were to operate against railroad supply lines in West Virginia, and Maj. Gen. Nathaniel P. Banks was to capture Mobile, Alabama. Grant 's army set out on the Overland Campaign with the goal of drawing Lee into a defense of Richmond, where they would attempt to pin down and destroy the Confederate army. The Union army first attempted to maneuver past Lee and fought several battles, notably at the Wilderness, Spotsylvania, and Cold Harbor. These battles resulted in heavy losses on both sides, and forced Lee 's Confederates to fall back repeatedly. An attempt to outflank Lee from the south failed under Butler, who was trapped inside the Bermuda Hundred river bend. Each battle resulted in setbacks for the Union that mirrored what they had suffered under prior generals, though unlike those prior generals, Grant fought on rather than retreat. Grant was tenacious and kept pressing Lee 's Army of Northern Virginia back to Richmond. 
While Lee was preparing for an attack on Richmond, Grant unexpectedly turned south to cross the James River and began the protracted Siege of Petersburg, where the two armies engaged in trench warfare for over nine months. Grant finally found a commander, General Philip Sheridan, aggressive enough to prevail in the Valley Campaigns of 1864. Union forces under Sigel had earlier been repelled at the Battle of New Market by former U.S. Vice President and Confederate Gen. John C. Breckinridge. The Battle of New Market was the Confederacy 's last major victory of the war. Sheridan defeated Maj. Gen. Jubal A. Early in a series of battles, including a final decisive defeat at the Battle of Cedar Creek. Sheridan then proceeded to destroy the agricultural base of the Shenandoah Valley, a strategy similar to the tactics Sherman later employed in Georgia. Meanwhile, Sherman maneuvered from Chattanooga to Atlanta, defeating Confederate Generals Joseph E. Johnston and John Bell Hood along the way. The fall of Atlanta on September 2, 1864, guaranteed the reelection of Lincoln as president. Hood left the Atlanta area to swing around and menace Sherman 's supply lines and invade Tennessee in the Franklin - Nashville Campaign. Union Maj. Gen. John Schofield defeated Hood at the Battle of Franklin, and George H. Thomas dealt Hood a massive defeat at the Battle of Nashville, effectively destroying Hood 's army. Leaving Atlanta, and his base of supplies, Sherman 's army marched with an unknown destination, laying waste to about 20 percent of the farms in Georgia in his "March to the Sea ''. He reached the Atlantic Ocean at Savannah, Georgia in December 1864. Sherman 's army was followed by thousands of freed slaves; there were no major battles along the March. Sherman turned north through South Carolina and North Carolina to approach the Confederate Virginia lines from the south, increasing the pressure on Lee 's army.
Lee 's army, thinned by desertion and casualties, was now much smaller than Grant 's. One last Confederate attempt to break the Union hold on Petersburg failed at the decisive Battle of Five Forks (sometimes called "the Waterloo of the Confederacy '') on April 1. This meant that the Union now controlled the entire perimeter surrounding Richmond - Petersburg, completely cutting it off from the Confederacy. Realizing that the capital was now lost, Lee decided to evacuate his army. The Confederate capital fell to the Union XXV Corps, composed of black troops. The remaining Confederate units fled west after a defeat at Sayler 's Creek. Initially, Lee did not intend to surrender, but planned to regroup at the village of Appomattox Court House, where supplies were to be waiting, and then continue the war. Grant chased Lee and got in front of him, so that when Lee 's army reached Appomattox Court House, they were surrounded. After an initial battle, Lee decided that the fight was now hopeless, and surrendered his Army of Northern Virginia on April 9, 1865, at the McLean House. In an untraditional gesture and as a sign of Grant 's respect and anticipation of peacefully restoring Confederate states to the Union, Lee was permitted to keep his sword and his horse, Traveller. On April 14, 1865, President Lincoln was shot by John Wilkes Booth, a Southern sympathizer. Lincoln died early the next morning, and Andrew Johnson became the president. Meanwhile, Confederate forces across the South surrendered as news of Lee 's surrender reached them. On April 26, 1865, General Joseph E. Johnston surrendered nearly 90,000 men of the Army of Tennessee to Major General William T. Sherman at the Bennett Place near present - day Durham, North Carolina. It proved to be the largest surrender of Confederate forces, effectively bringing the war to an end. 
President Johnson officially declared a virtual end to the insurrection on May 9, 1865; President Jefferson Davis was captured the following day. On June 2, Kirby Smith officially surrendered his troops in the Trans - Mississippi Department. On June 23, Cherokee leader Stand Watie became the last Confederate general to surrender his forces. Though the Confederacy hoped that Britain and France would join them against the Union, this was never likely, and so they instead tried to bring Britain and France in as mediators. The Union, under Lincoln and Secretary of State William H. Seward, worked to block this, and threatened war if any country officially recognized the existence of the Confederate States of America. In 1861, Southerners voluntarily embargoed cotton shipments, hoping to start an economic depression in Europe that would force Britain to enter the war to get cotton, but this did not work. Worse, Europe developed other cotton suppliers, which they found superior, hindering the South 's recovery after the war. Cotton diplomacy proved a failure as Europe had a surplus of cotton, while the 1860 -- 62 crop failures in Europe made the North 's grain exports of critical importance. It also helped to turn European opinion further away from the Confederacy. It was said that "King Corn was more powerful than King Cotton '', as U.S. grain went from a quarter of the British import trade to almost half. When Britain did face a cotton shortage, it was temporary, being replaced by increased cultivation in Egypt and India. Meanwhile, the war created employment for arms makers, ironworkers, and British ships to transport weapons. Lincoln 's foreign policy was deficient in 1861 in terms of appealing to European public opinion. Diplomats had to explain that the United States was not committed to the ending of slavery, but instead they repeated legalistic arguments about the unconstitutionality of secession.
Confederate spokesmen, on the other hand, were much more successful by ignoring slavery and instead focusing on their struggle for liberty, their commitment to free trade, and the essential role of cotton in the European economy. In addition, the European aristocracy (the dominant factor in every major country) was "absolutely gleeful in pronouncing the American debacle as proof that the entire experiment in popular government had failed. European government leaders welcomed the fragmentation of the ascendant American Republic. '' U.S. minister to Britain Charles Francis Adams proved particularly adept and convinced Britain not to boldly challenge the blockade. The Confederacy purchased several warships from commercial shipbuilders in Britain (CSS Alabama, CSS Shenandoah, CSS Tennessee, CSS Tallahassee, CSS Florida, and some others). The most famous, the CSS Alabama, did considerable damage and led to serious postwar disputes. However, public opinion against slavery created a political liability for politicians in Britain, where the antislavery movement was powerful. War loomed in late 1861 between the U.S. and Britain over the Trent affair, involving the U.S. Navy 's boarding of the British ship Trent and seizure of two Confederate diplomats. However, London and Washington were able to smooth over the problem after Lincoln released the two. In 1862, the British considered mediation between North and South -- though even such an offer would have risked war with the U.S. British Prime Minister Lord Palmerston reportedly read Uncle Tom 's Cabin three times when deciding on this. The Union victory in the Battle of Antietam caused them to delay this decision. The Emancipation Proclamation over time would reinforce the political liability of supporting the Confederacy. Despite sympathy for the Confederacy, France 's own seizure of Mexico ultimately deterred them from war with the Union. 
Confederate offers late in the war to end slavery in return for diplomatic recognition were not seriously considered by London or Paris. After 1863, the Polish revolt against Russia further distracted the European powers, and ensured that they would remain neutral. The causes of the war, the reasons for its outcome, and even the name of the war itself are subjects of lingering contention today. The North and West grew rich while the once - rich South became poor for a century. The national political power of the slaveowners and rich southerners ended. Historians are less sure about the results of the postwar Reconstruction, especially regarding the second class citizenship of the Freedmen and their poverty. Historians have debated whether the Confederacy could have won the war. Most scholars, including James McPherson, argue that Confederate victory was at least possible. McPherson argues that the North 's advantage in population and resources made Northern victory likely but not guaranteed. He also argues that if the Confederacy had fought using unconventional tactics, they would have more easily been able to hold out long enough to exhaust the Union. Confederates did not need to invade and hold enemy territory to win, but only needed to fight a defensive war to convince the North that the cost of winning was too high. The North needed to conquer and hold vast stretches of enemy territory and defeat Confederate armies to win. Lincoln was not a military dictator, and could continue to fight the war only as long as the American public supported a continuation of the war. The Confederacy sought to win independence by out - lasting Lincoln; however, after Atlanta fell and Lincoln defeated McClellan in the election of 1864, all hope for a political victory for the South ended. At that point, Lincoln had secured the support of the Republicans, War Democrats, the border states, emancipated slaves, and the neutrality of Britain and France. 
By defeating the Democrats and McClellan, he also defeated the Copperheads and their peace platform. Many scholars argue that the Union held an insurmountable long - term advantage over the Confederacy in industrial strength and population. Confederate actions, they argue, only delayed defeat. Civil War historian Shelby Foote expressed this view succinctly: "I think that the North fought that war with one hand behind its back... If there had been more Southern victories, and a lot more, the North simply would have brought that other hand out from behind its back. I don't think the South ever had a chance to win that War. '' A minority view among historians is that the Confederacy lost because, as E. Merton Coulter put it, "people did not will hard enough and long enough to win. '' Marxist historian Armstead Robinson agrees, pointing to a class conflict in the Confederate army between the slave owners and the larger number of non-owners. He argues that the non-owner soldiers grew embittered about fighting to preserve slavery, and fought less enthusiastically. He attributes the major Confederate defeats in 1863 at Vicksburg and Missionary Ridge to this class conflict. However, most historians reject the argument. James M. McPherson, after reading thousands of letters written by Confederate soldiers, found strong patriotism that continued to the end; they truly believed they were fighting for freedom and liberty. Even as the Confederacy was visibly collapsing in 1864 -- 65, he says most Confederate soldiers were fighting hard. Historian Gary Gallagher cites General Sherman who in early 1864 commented, "The devils seem to have a determination that can not but be admired. '' Despite their loss of slaves and wealth, with starvation looming, Sherman continued, "yet I see no sign of let up -- some few deserters -- plenty tired of war, but the masses determined to fight it out.
'' Also important were Lincoln 's eloquence in rationalizing the national purpose and his skill in keeping the border states committed to the Union cause. The Emancipation Proclamation was an effective use of the President 's war powers. The Confederate government failed in its attempt to get Europe involved in the war militarily, particularly Britain and France. Southern leaders needed to get European powers to help break up the blockade the Union had created around the Southern ports and cities. Lincoln 's naval blockade was 95 percent effective at stopping trade goods; as a result, imports and exports to the South declined significantly. The abundance of European cotton and Britain 's hostility to the institution of slavery, along with Lincoln 's Atlantic and Gulf of Mexico naval blockades, severely decreased any chance that either Britain or France would enter the war. Historian Don Doyle has argued that the Union victory had a major impact on the course of world history. The Union victory energized popular democratic forces. A Confederate victory, on the other hand, would have meant a new birth of slavery, not freedom. Historian Fergus Bordewich, following Doyle, argues that: "The North 's victory decisively proved the durability of democratic government. Confederate independence, on the other hand, would have established an American model for reactionary politics and race - based repression that would likely have cast an international shadow into the twentieth century and perhaps beyond. '' Scholars have debated what the effects of the war were on political and economic power in the South. The prevailing view is that the southern planter elite retained its powerful position in the South. However, a 2017 study challenges this, noting that while some Southern elites retained their economic status, the turmoil of the 1860s created greater opportunities for economic mobility in the South than in the North.
The war resulted in at least 1,030,000 casualties (3 percent of the population), including about 620,000 soldier deaths -- two - thirds by disease, and 50,000 civilians. Binghamton University historian J. David Hacker believes the number of soldier deaths was approximately 750,000, 20 percent higher than traditionally estimated, and possibly as high as 850,000. The war accounted for more American deaths than in all other U.S. wars combined. Based on 1860 census figures, 8 percent of all white males aged 13 to 43 died in the war, including 6 percent in the North and 18 percent in the South. About 56,000 soldiers died in prison camps during the War. An estimated 60,000 men lost limbs in the war. Union army dead amounted to 15 percent of the over two million who served. In addition there were 4,523 deaths in the Navy (2,112 in battle) and 460 in the Marines (148 in battle). Black troops made up 10 percent of the Union death toll; they amounted to 15 percent of disease deaths but less than 3 percent of those killed in battle. Losses among African Americans were high: in the last year and a half of the war, counting all reported casualties, approximately 20 percent of all African Americans enrolled in the military lost their lives. Notably, their mortality rate was significantly higher than that of white soldiers: (We) find, according to the revised official data, that of the slightly over two millions troops in the United States Volunteers, over 316,000 died (from all causes), or 15.2 percent. Of the 67,000 Regular Army (white) troops, 8.6 percent, or not quite 6,000, died. Of the approximately 180,000 United States Colored Troops, however, over 36,000 died, or 20.5 percent.
In other words, the mortality "rate '' amongst the United States Colored Troops in the Civil War was thirty - five percent greater than that among other troops, notwithstanding the fact that the former were not enrolled until some eighteen months after the fighting began. Confederate records compiled by historian William F. Fox list 74,524 killed and died of wounds and 59,292 died of disease. Including Confederate estimates of battle losses where no records exist would bring the Confederate death toll to 94,000 killed and died of wounds. Fox complained, however, that records were incomplete, especially during the last year of the war, and that battlefield reports likely under - counted deaths (many men counted as wounded in battlefield reports subsequently died of their wounds). Thomas L. Livermore, using Fox 's data, put the number of Confederate non-combat deaths at 166,000, using the official estimate of Union deaths from disease and accidents and a comparison of Union and Confederate enlistment records, for a total of 260,000 deaths. However, this excludes the 30,000 deaths of Confederate troops in prisons, which would raise the minimum number of deaths to 290,000. The United States National Park Service uses the following figures in its official tally of war losses: Union, 853,838; Confederate, 914,660. While the figures of 360,000 army deaths for the Union and 260,000 for the Confederacy remained commonly cited, they are incomplete. In addition to many Confederate records being missing, partly as a result of Confederate widows not reporting deaths due to being ineligible for benefits, both armies only counted troops who died during their service, and not the tens of thousands who died of wounds or diseases after being discharged. This often happened only a few days or weeks later.
Francis Amasa Walker, Superintendent of the 1870 Census, used census and Surgeon General data to estimate a minimum of 500,000 Union military deaths and 350,000 Confederate military deaths, for a total death toll of 850,000 soldiers. While Walker 's estimates were originally dismissed because of the 1870 Census 's undercounting, it was later found that the census was only off by 6.5 %, and that the data Walker used would be roughly accurate. Analyzing the number of dead by using census data to calculate the deviation of the death rate of men of fighting age from the norm suggests that at least 627,000 and at most 888,000, but most likely 761,000 soldiers, died in the war. This would break down to approximately 350,000 Confederate and 411,000 Union military deaths, going by the proportion of Union to Confederate battle losses. Deaths among former slaves have proven much harder to estimate, due to the lack of reliable census data at the time, though they were known to be considerable, as former slaves were set free or escaped in massive numbers in an area where the Union army did not have sufficient shelter, doctors, or food for them. University of Connecticut Professor James Downs states that tens to hundreds of thousands of slaves died during the war from disease, starvation, exposure, or execution at the hands of the Confederates, and that if these deaths are counted in the war 's total, the death toll would exceed 1 million. Losses were far higher than during the recent defeat of Mexico, which saw roughly thirteen thousand American deaths, including fewer than two thousand killed in battle, between 1846 and 1848. One reason for the high number of battle deaths during the war was the continued use of tactics similar to those of the Napoleonic Wars at the turn of the century, such as charging.
With the advent of more accurate rifled barrels, Minié balls and (near the end of the war for the Union army) repeating firearms such as the Spencer Repeating Rifle and the Henry Repeating Rifle, soldiers were mowed down when standing in lines in the open. This led to the adoption of trench warfare, a style of fighting that defined much of World War I. The wealth amassed in slaves and slavery for the Confederacy 's 3.5 million blacks effectively ended when Union armies arrived; they were nearly all freed by the Emancipation Proclamation. Slaves in the border states and those located in some former Confederate territory occupied before the Emancipation Proclamation were freed by state action or (on December 6, 1865) by the Thirteenth Amendment. The war destroyed much of the wealth that had existed in the South. All accumulated investment in Confederate bonds was forfeit; most banks and railroads were bankrupt. Income per person in the South dropped to less than 40 percent of that of the North, a condition that lasted until well into the 20th century. Southern influence in the U.S. federal government, previously considerable, was greatly diminished until the latter half of the 20th century. The full restoration of the Union was the work of a highly contentious postwar era known as Reconstruction. While not all Southerners saw themselves as fighting to preserve slavery, most of the officers and over a third of the rank and file in Lee 's army had close family ties to slavery. To Northerners, in contrast, the motivation was primarily to preserve the Union, not to abolish slavery. Abraham Lincoln consistently made preserving the Union the central goal of the war, though he increasingly saw slavery as a crucial issue and made ending it an additional goal. Lincoln 's decision to issue the Emancipation Proclamation angered both Peace Democrats ("Copperheads '') and War Democrats, but energized most Republicans.
By warning that free blacks would flood the North, Democrats made gains in the 1862 elections, but they did not gain control of Congress. The Republicans ' counterargument that slavery was the mainstay of the enemy steadily gained support, with the Democrats losing decisively in the 1863 elections in the northern state of Ohio when they tried to resurrect anti-black sentiment. The Emancipation Proclamation enabled African - Americans, both free blacks and escaped slaves, to join the Union Army. About 190,000 volunteered, further enhancing the numerical advantage the Union armies enjoyed over the Confederates, who did not dare emulate the equivalent manpower source for fear of fundamentally undermining the legitimacy of slavery. During the Civil War, sentiment concerning slaves, enslavement and emancipation in the United States was divided. In 1861, Lincoln worried that premature attempts at emancipation would mean the loss of the border states, and that "to lose Kentucky is nearly the same as to lose the whole game. '' Copperheads and some War Democrats opposed emancipation, although the latter eventually accepted it as part of total war needed to save the Union. At first, Lincoln reversed attempts at emancipation by Secretary of War Simon Cameron and Generals John C. Frémont (in Missouri) and David Hunter (in South Carolina, Georgia and Florida) to keep the loyalty of the border states and the War Democrats. Lincoln warned the border states that a more radical type of emancipation would happen if his gradual plan based on compensated emancipation and voluntary colonization was rejected. But only the District of Columbia accepted Lincoln 's gradual plan, which was enacted by Congress. When Lincoln told his cabinet about his proposed emancipation proclamation, Seward advised Lincoln to wait for a victory before issuing it, as to do otherwise would seem like "our last shriek on the retreat ''. 
Lincoln laid the groundwork for public support in an open letter published in abolitionist Horace Greeley 's newspaper. In September 1862, the Battle of Antietam provided this opportunity, and the subsequent War Governors ' Conference added support for the proclamation. Lincoln issued his preliminary Emancipation Proclamation on September 22, 1862, and his final Emancipation Proclamation on January 1, 1863. In his letter to Albert G. Hodges, Lincoln explained his belief that "If slavery is not wrong, nothing is wrong... And yet I have never understood that the Presidency conferred upon me an unrestricted right to act officially upon this judgment and feeling... I claim not to have controlled events, but confess plainly that events have controlled me. '' Lincoln 's moderate approach succeeded in inducing border states, War Democrats and emancipated slaves to fight for the Union. The Union - controlled border states (Kentucky, Missouri, Maryland, Delaware and West Virginia) and Union - controlled regions around New Orleans, Norfolk and elsewhere, were not covered by the Emancipation Proclamation. All abolished slavery on their own, except Kentucky and Delaware. Since the Emancipation Proclamation was based on the President 's war powers, it only included territory held by Confederates at the time. However, the Proclamation became a symbol of the Union 's growing commitment to add emancipation to the Union 's definition of liberty. The Emancipation Proclamation greatly reduced the Confederacy 's hope of getting aid from Britain or France. By late 1864, Lincoln was playing a leading role in getting Congress to vote for the Thirteenth Amendment, which made emancipation universal and permanent. In Texas v. White, 74 U.S. 
700 (1869) the United States Supreme Court ruled that Texas had remained a state ever since it first joined the Union, despite claims that it joined the Confederate States; the court further held that the Constitution did not permit states to unilaterally secede from the United States, and that the ordinances of secession, and all the acts of the legislatures within seceding states intended to give effect to such ordinances, were "absolutely null '', under the constitution. Reconstruction began during the war, with the Emancipation Proclamation of January 1, 1863, and it continued until 1877. It comprised multiple complex methods to resolve the outstanding issues of the war 's aftermath, the most important of which were the three "Reconstruction Amendments '' to the Constitution, which remain in effect to the present time: the 13th (1865), the 14th (1868) and the 15th (1870). From the Union perspective, the goals of Reconstruction were to consolidate the Union victory on the battlefield by reuniting the Union; to guarantee a "republican form of government '' for the ex-Confederate states; and to permanently end slavery -- and prevent semi-slavery status. President Johnson took a lenient approach and saw the achievement of the main war goals as realized in 1865, when each ex-rebel state repudiated secession and ratified the Thirteenth Amendment. Radical Republicans demanded proof that Confederate nationalism was dead and that the slaves were truly free. They came to the fore after the 1866 elections and undid much of Johnson 's work. In 1872 the "Liberal Republicans '' argued that the war goals had been achieved and that Reconstruction should end. They ran a presidential ticket in 1872 but were decisively defeated. In 1874, Democrats, primarily Southern, took control of Congress and opposed any more reconstruction. The Compromise of 1877 closed with a national consensus that the Civil War had finally ended. 
With the withdrawal of federal troops, however, whites retook control of every Southern legislature; the Jim Crow period of disenfranchisement and legal segregation was about to begin. The Civil War is one of the central events in American collective memory. There are innumerable statues, commemorations, books and archival collections. The memory includes the home front, military affairs, the treatment of soldiers, both living and dead, in the war 's aftermath, depictions of the war in literature and art, evaluations of heroes and villains, and considerations of the moral and political lessons of the war. The last theme includes moral evaluations of racism and slavery, heroism in combat and heroism behind the lines, and the issues of democracy and minority rights, as well as the notion of an "Empire of Liberty '' influencing the world. Professional historians have paid much more attention to the causes of the war than to the war itself. Military history has largely developed outside academe, leading to a proliferation of solid studies by non-scholars who are thoroughly familiar with the primary sources, pay close attention to battles and campaigns, and write for the large public readership, rather than the small scholarly community. Bruce Catton and Shelby Foote are among the best - known writers. Practically every major figure in the war, both North and South, has had a serious biographical study. Deeply religious Southerners saw the hand of God in history, which demonstrated His wrath at their sinfulness, or His rewards for their suffering. Historian Wilson Fallin has examined the sermons of white and black Baptist preachers after the War. Southern white preachers said that God had chastised them and given them a special mission -- to maintain orthodoxy, strict biblicism, personal piety, and traditional race relations. Slavery, they insisted, had not been sinful. 
Rather, emancipation was a historical tragedy and the end of Reconstruction was a clear sign of God 's favor. In sharp contrast, Black preachers interpreted the Civil War as God 's gift of freedom. They appreciated opportunities to exercise their independence, to worship in their own way, to affirm their worth and dignity, and to proclaim the fatherhood of God and the brotherhood of man. Most of all, they could form their own churches, associations, and conventions. These institutions offered self - help and racial uplift, and provided places where the gospel of liberation could be proclaimed. As a result, black preachers continued to insist that God would protect and help them; God would be their rock in a stormy land. Memory of the war in the white South crystallized in the myth of the "Lost Cause '', shaping regional identity and race relations for generations. Alan T. Nolan notes that the Lost Cause was expressly "a rationalization, a cover - up to vindicate the name and fame '' of those in rebellion. Some claims revolve around the insignificance of slavery; some appeals highlight cultural differences between North and South; the military conflict by Confederate actors is idealized; in any case, secession was said to be lawful. Nolan argues that the adoption of the Lost Cause perspective facilitated the reunification of the North and the South while excusing the "virulent racism '' of the 19th century, sacrificing African - American progress to a white man 's reunification. He also deems the Lost Cause "a caricature of the truth. This caricature wholly misrepresents and distorts the facts of the matter '' in every instance. The economic and political - power determinism forcefully presented by Charles A. Beard and Mary R. Beard in The Rise of American Civilization (1927) was highly influential among historians and the general public until the civil rights movement of the 1950s and 1960s. The Beards downplayed slavery, abolitionism, and issues of morality. 
They ignored constitutional issues of states ' rights and even ignored American nationalism as the force that finally led to victory in the war. Indeed, the ferocious combat itself was passed over as merely an ephemeral event. Much more important was the calculus of class conflict. The Beards announced that the Civil War was really a "social cataclysm in which the capitalists, laborers, and farmers of the North and West drove from power in the national government the planting aristocracy of the South ''. The Beards themselves abandoned their interpretation by the 1940s and it became defunct among historians in the 1950s, when scholars shifted to an emphasis on slavery. However, Beardian themes still echo among Lost Cause writers. The first efforts at Civil War battlefield preservation and memorialization came during the war itself with the establishment of National Cemeteries at Gettysburg, Mill Springs and Chattanooga. Soldiers began erecting markers on battlefields beginning with the First Battle of Bull Run in July 1861, but the oldest surviving monument is the Hazen monument, erected at Stones River near Murfreesboro, Tennessee, in the summer of 1863 by soldiers in Union Col. William B. Hazen 's brigade to mark the spot where they buried their dead in the Battle of Stones River. In the 1890s, the United States government established five Civil War battlefield parks under the jurisdiction of the War Department, beginning with the creation of the Chickamauga and Chattanooga National Military Park in Tennessee and the Antietam National Battlefield in Maryland in 1890. The Shiloh National Military Park was established in 1894, followed by the Gettysburg National Military Park in 1895 and Vicksburg National Military Park in 1899. In 1933, these five parks and other national monuments were transferred to the jurisdiction of the National Park Service. 
The modern Civil War battlefield preservation movement began in 1987 with the founding of the Association for the Preservation of Civil War Sites (APCWS), a grassroots organization created by Civil War historians and others to preserve battlefield land by acquiring it. In 1991, the original Civil War Trust was created in the mold of the Statue of Liberty / Ellis Island Foundation, but failed to attract corporate donors and soon helped manage the disbursement of U.S. Mint Civil War commemorative coin revenues designated for battlefield preservation. Although the two non-profit organizations joined forces on a number of battlefield acquisitions, ongoing conflicts prompted the boards of both organizations to facilitate a merger, which happened in 1999 with the creation of the Civil War Preservation Trust. In 2011, the organization was renamed, again becoming the Civil War Trust. After expanding its mission in 2014 to include battlefields of the Revolutionary War and War of 1812, the non-profit became the American Battlefield Trust in May 2018, operating with two divisions, the Civil War Trust and the Revolutionary War Trust. From 1987 through May 2018, the Trust and its predecessor organizations, along with their partners, preserved 49,893 acres of battlefield land through acquisition of property or conservation easements at more than 130 battlefields in 24 states. The American Civil War has been commemorated in many capacities ranging from the reenactment of battles, to statues and memorial halls erected, to films being produced, to stamps and coins with Civil War themes being issued, all of which helped to shape public memory. Such commemoration grew markedly around the war 's 100th and 150th anniversaries. Hollywood 's take on the war has been especially influential in shaping public memory, as seen in such film classics as Birth of a Nation (1915), Gone with the Wind (1939), and more recently Lincoln (2012). 
Ken Burns produced a notable PBS series on television titled The Civil War (1990). It was digitally remastered and re-released in 2015. There were numerous technological innovations during the Civil War that had a great impact on 19th - century science. The Civil War was one of the earliest examples of an "industrial war '', in which technological might is used to achieve military supremacy in a war. New inventions, such as the train and telegraph, delivered soldiers, supplies and messages at a time when horses were considered to be the fastest way to travel. It was also in this war that countries first used aerial warfare, in the form of reconnaissance balloons, to a significant effect. It saw the first action involving steam - powered ironclad warships in naval warfare history. Repeating firearms such as the Henry rifle, Spencer rifle, Colt revolving rifle, Triplett & Scott carbine and others first appeared during the Civil War; they were a revolutionary invention that would soon replace muzzle - loading and single - shot firearms in warfare. The war also saw the first appearances of rapid - firing weapons and machine guns such as the Agar gun and the Gatling gun. 
who is the actor that plays tyrion lannister
Peter Dinklage - wikipedia Peter Hayden Dinklage (/ ˈdɪŋklɪdʒ /; born June 11, 1969) is an American actor and film producer. Dinklage studied acting at Bennington College, starring in a number of amateur stage productions. His film debut was in Living in Oblivion (1995) and his breakthrough came with the comedy - drama The Station Agent (2003). He has since appeared in Elf (2003), Find Me Guilty (2006), Underdog (2007), Penelope (2008), The Chronicles of Narnia: Prince Caspian (2008), X-Men: Days of Future Past (2014), Three Billboards Outside Ebbing, Missouri (2017), which earned him his first Screen Actors Guild Award, and Avengers: Infinity War (2018). Since 2011, Dinklage has portrayed Tyrion Lannister in the HBO series Game of Thrones, for which he received the Emmy for Outstanding Supporting Actor in a Drama Series in 2011 and 2015, as well as consecutive Emmy nominations from 2011 to 2016. He also won a Golden Globe for Best Supporting Actor -- Series, Miniseries or Television Film in 2012. Peter Hayden Dinklage was born on June 11, 1969, in Morristown, New Jersey, to John Carl Dinklage, an insurance salesman, and Diane Dinklage, an elementary - school music teacher. He was born with achondroplasia, a common form of dwarfism. Dinklage grew up as the only dwarf in his family in Brookside, New Jersey, with his parents and older brother, Jonathan. He is of German and Irish descent. As a child, Dinklage and his brother performed puppet musicals for people in their neighborhood. Dinklage has described his brother, who is a violinist, as being the "real performer of the family, '' saying that his brother 's passion for the violin was the only thing that kept him from pursuing acting. Dinklage had his first theatrical success in a fifth - grade production of The Velveteen Rabbit. Playing the lead, he was delighted by the audience 's response to the show. Dinklage attended Delbarton School, a Catholic preparatory school for boys, where he developed his acting. 
In 1984, Dinklage was inspired by a production of the play True West, written by American playwright Sam Shepard, to pursue a career in acting. Dinklage then attended Bennington College, where he studied for a drama degree and also appeared in numerous productions before graduating in 1991. After that he moved to New York City with his friend Ian Bell to build a theater company. Failing to pay the rent, they moved out of their apartment. Dinklage then worked at a data processing company for six years before pursuing a career as a full - time actor. Dinklage initially struggled to find work as an actor, partially because he refused to take the roles typically offered to actors with his condition, such as "elves or leprechauns. '' He made his credited film debut in the low - budget independent comedy - drama Living in Oblivion (1995) where he performed alongside Steve Buscemi. The film tells the story of a director, crew, and cast filming a low - budget independent film in the middle of New York City. Dinklage 's role was that of a frustrated actor with dwarfism who complains about his clichéd roles. The film has been well received by critics. The following year he appeared as a building manager in the crime drama Bullet starring rapper Tupac Shakur. Even after his well - received performance in Living in Oblivion, Dinklage still could not find someone willing to be his agent. After a recommendation from Buscemi to the director Alexandre Rockwell, Dinklage was cast in the comedy 13 Moons (2002). When later interviewed for a theater website, he was asked what his ideal role was, and he replied "the romantic lead '' who gets the girl. Dinklage found his breakthrough playing Finbar McBride, who is a quiet, withdrawn, unmarried man in the 2003 Tom McCarthy - directed film The Station Agent. 
According to co-star Bobby Cannavale, the film took three years to make and was not at first written with Dinklage in mind. Cannavale said McCarthy "set out to tell a story about a guy who was a train enthusiast who had chosen to isolate himself from the world, '' but when McCarthy actually started "putting pen to paper '' for the screenplay he decided to write the role for him. Speaking about the role, Dinklage noted that usually "roles written for someone my size are a little flat '' -- often either comical or "sort of Lord of the Rings '' type characters filled with wisdom; further: "They 're not sexual, they 're not romantic '' and "they 're not flawed. '' What attracted him to the character McCarthy had written was that it was not one of the stereotypical roles people with dwarfism play; rather, McBride has "romantic feelings '' as well as "anger and... flaws. '' The role earned him Independent Spirit Award and Screen Actors Guild Award nominations for Best Actor. In the New York Observer, reviewer Andrew Sarris wrote, "Dinklage projects both size and intelligence in the fascinating reticence of his face. '' Besides being Dinklage 's highest - rated film on the review aggregator Rotten Tomatoes, The Station Agent was modestly successful at the box office, earning over $8 million against its small budget. Dinklage later appeared in the direct - to - DVD film Tiptoes (2003) with Gary Oldman and Matthew McConaughey. The film met with negative reviews, particularly Oldman 's role as a person with dwarfism. According to Dinklage, the original cut of the film was "gorgeous, '' but the director was fired shortly after turning it in, and the film was re-cut into a "rom - com with dwarves. '' Speaking on the Oldman controversy, Dinklage said, "There was some flak: Why would you put Gary Oldman on his knees? That 's almost like blackface. And I have my own opinions about political correctness, but I was just like, It 's Gary Oldman. 
He can do whatever he wants, and I 'm so happy to be here. '' That year, Dinklage also starred in several Off - Broadway productions, such as Richard III. Dinklage appeared in the Christmas comedy film Elf as Miles Finch, an irritable children 's author who beats up Buddy Hobbs (Will Ferrell) after he mistakes him for an elf. In 2005, he starred in the short - lived CBS science fiction series Threshold and appeared as a wedding planner in the comedy film The Baxter. He also made an appearance in the adventure comedy - drama Lassie as a traveling circus performer. The film received highly positive reviews, though it did not fare well at the box office. In 2006, Dinklage co-starred with Vin Diesel in Find Me Guilty, a courtroom drama directed by Sidney Lumet. The film tells the true story of the longest Mafia trial in American history; Dinklage played Ben Klandis, the lead defense attorney. Critical reaction to the film was mostly positive, though it was a commercial failure. Writing for Chicago Sun - Times, film critic Roger Ebert praised Dinklage 's performance, saying that the character he plays stands apart as "concise, articulate and professional. '' The same year, he portrayed the character Marlowe Sawyer in episodes of Nip / Tuck. He played a fictionalized version of himself in an episode of the HBO series Entourage and appeared in NBC 's 30 Rock as Stewart. The same year, Dinklage appeared in the British romantic comedy Penelope playing Lemon, a tabloid reporter. The film received mixed reviews from critics. Dinklage appeared in the 2007 British comedy film Death at a Funeral, reprising the role in the 2010 American remake; the films tell the story of a family trying to deal with a variety of issues after the death of their father. Later in 2007, he played the villainous Simon Bar Sinister in Underdog, which was poorly received but had some commercial success. Dinklage played Trumpkin in the 2008 film The Chronicles of Narnia: Prince Caspian. 
The movie was a box office disappointment, with global revenues of $419.7 million, although film critic Bill Gibron described Dinklage 's role as a "cutesy stereotype he has tried to avoid. '' Later that year, he played the title role in Uncle Vanya (directed by his wife, Erica Schmidt) in Bard College 's annual Bard SummerScape, the Upstate New York summer stage on the Annandale - on - Hudson campus. In 2010, he appeared in the Australian movie I Love You Too alongside Brendan Cowell and Peter Helliar. Since 2011, Dinklage has portrayed Tyrion Lannister in HBO 's fantasy drama Game of Thrones, an adaptation of author George R.R. Martin 's A Song of Ice and Fire novels. Game of Thrones takes place on the fictional continents of Westeros and Essos and chronicles the power struggles among noble families as they fight for control of the Iron Throne of the Seven Kingdoms. Tyrion is a member of House Lannister, one of the wealthiest and most powerful families in Westeros, and uses his status as a Lannister to mitigate the impact of the marginalization and derision he has received all of his life. In May 2009, he was the first actor to be cast, as showrunners David Benioff and D.B. Weiss noted that Dinklage, whom they described as funny, smart and witty, was their first choice for the role, as the actor 's "core of humanity, covered by a shell of sardonic dry wit, is pretty well in keeping with the character. '' Unfamiliar with the source material, Dinklage was cautious in his first meeting with the producers; as a dwarf, "he would n't play elves or leprechauns '' and was choosy about genre roles. Benioff and Weiss told Dinklage that the character was "a different kind of fantasy little person, '' or in the actor 's words, "No beard, no pointy shoes, a romantic, real human being. '' Dinklage signed on to play Tyrion before the meeting was half over, in part because, he said, "They told me how popular he was. 
'' Martin said of Dinklage 's casting, "If he had n't accepted the part, oh, boy, I do n't know what we would have done. '' The series proved to be a commercial success; it has been renewed for multiple seasons and will conclude with its eighth season in 2019. Dinklage has received widespread praise for his performance, with Matthew Gilbert from The Boston Globe saying that Dinklage "gives a winning performance that is charming, morally ambiguous, and self - aware. '' Dan Kois of The New York Times noted that Dinklage 's performance has made the character "all the more popular. '' The Los Angeles Times wrote "In many ways, Game of Thrones belongs to Dinklage. '' Tyrion has been called the "most quotable '' character and "one of the most beloved characters '' of the series. For his performance, he has gone on to win the Emmy Award for Outstanding Supporting Actor in a Drama Series in 2011 and 2015, as well as the 2012 Golden Globe Award for Best Supporting Actor. As a result of his performance and increased screen time, Dinklage was given top billing starting in the series ' second season. In 2014, he said on The Late Show with David Letterman that he had once tried to read the books the show is based upon, but had found them confusing. He joked, "George Martin, our author, is probably going to kill my character soon because I mentioned that. '' In 2014, Dinklage and four of his Game of Thrones co-stars became some of the highest paid actors on television, although sources differ on the actors ' per - episode salaries as of 2017. In 2015, Dinklage lent his voice for the role of Tyrion in Game of Thrones: A Telltale Games Series, a video game based on the show. In 2012, Dinklage voiced Captain Gutt in Ice Age: Continental Drift, which earned over $877 million. Dinklage has said that because this was his first voiceover role, he prepared himself by making sure to rest his voice before the recording sessions, and that he prefers doing roles he has not done before. 
After appearing in an episode of NBC 's late - night sketch comedy Saturday Night Live in 2013, Dinklage hosted an episode of the show in April 2016; his appearances included a sketch of him and Gwen Stefani singing a new song called "Space Pants. '' He received praise for his performance. In 2014, Dinklage starred in the comedy horror film Knights of Badassdom opposite Ryan Kwanten and Steve Zahn. The film is about three best friends who go to the woods to reenact a live action Dungeons & Dragons role play and mistakenly conjure up a demon from Hell. The same year, he played the villain Bolivar Trask in the superhero film X-Men: Days of Future Past. The movie was the sixth highest - grossing film of 2014 with global revenues of $747.9 million. In preparation for his role, Dinklage stated that he did not want to approach the character necessarily as a villain, saying that Trask "actually sees what he 's doing as a good thing. '' He also voiced the AI Ghost in the 2014 video game Destiny, but was replaced by Nolan North in August 2015. In 2015, Dinklage starred in the science fiction comedy film Pixels, which was poorly received by critics, as a former arcade champion named Eddie Plant. The movie had global revenues of $244.9 million. In 2016, Dinklage provided the voice of The Mighty Eagle in The Angry Birds Movie, and recorded a song for the film 's soundtrack. The film went on to become the second highest - grossing video game film of all - time, with global revenues of $349.8 million, only behind Warcraft ($433.5 million). It also became the most successful Finnish film to date. His next release, the independent film Rememory (2017), failed to impress reviewers, but his role of Sam Bloom was praised. Freelance film critic Yasmin Kleinbart stated that "Dinklage deserves better than this film '' and John DeFore in The Hollywood Reporter said that he "delivers a soulful lead performance that will attract fans ' attention. 
'' Also in 2017, Dinklage had a supporting role in the dark comedy - drama film Three Billboards Outside Ebbing, Missouri, from director Martin McDonagh, and the drama Three Christs, both of which played at the Toronto International Film Festival, with the former receiving widespread critical acclaim. In 2018, Dinklage produced and starred in I Think We 're Alone Now, a post-apocalyptic drama based on the companionship between Del, played by Dinklage, and Grace, played by Elle Fanning. The film premiered at the 2018 Sundance Film Festival. Dinklage appeared in the 2018 Marvel Studios film Avengers: Infinity War as the character Eitri, a giant dwarf, which became the fastest - grossing film to gross over $1 billion, and grossed $2.045 billion -- his highest grossing release as of 2018. Dinklage and writer - director Sacha Gervasi spent several years writing and producing a film based on the final days of actor Hervé Villechaize, who died by suicide shortly after his 1993 interview with Gervasi. As of 2017, Dinklage was set to star in the title role in My Dinner with Hervé. The movie has been approved by HBO, with Dinklage set to co-star alongside Jamie Dornan. In 2017, it was announced that Dinklage had been attached to star in the American comedy O Lucky Day, which is to be directed by Jon S. Baird and in which he will play a con man who pretends to be a leprechaun. On October 5, 2017, Dinklage purchased the rights to the film adaptation of Joe R. Lansdale 's novel, The Thicket. Dinklage is set to play Shorty, though production has not yet started. In 2005, Dinklage married Erica Schmidt, a theater director. The name of their daughter, born in 2011, has not been revealed publicly, and Dinklage has denied media reports that the girl 's name is "Zelig. '' Dinklage and Schmidt welcomed a second child in 2017. The couple, who guard their private lives closely, have not revealed the name or gender of their second child. 
Dinklage 's face was injured in the early 1990s, when he was in a "punk - funk - rap '' band called Whizzy. It gave him a scar that runs from his neck to his eyebrow. The accident happened while he was playing at the nightclub CBGB in New York City, where he was accidentally kneed in the face and then started bleeding on the stage. In 2008, Dinklage described himself as a lapsed Catholic. Dinklage has been a vegetarian since the age of 16. An advocate for animal rights, he supports Farm Sanctuary and has served as the spokesperson for the organization 's Walk for Farm Animals. He also narrated the video Face your Food, a film on behalf of PETA advocating a vegan diet on ethical grounds. In 2017, Dinklage attended the Women 's March demonstration in Park City, Utah, to advocate legislation and policies regarding human rights and other issues. When asked about reports that President Donald Trump was ending the funding for national arts and humanities programs, Dinklage responded: "It 's always the first to go, is n't it? Art, then education: the two most important things, '' along with "climate, of course. '' Dinklage has a form of dwarfism, achondroplasia, which affects bone growth. As a result, he is 4 feet 4 inches (132 cm) tall, with a normal - sized head and torso but short limbs. While Dinklage has come to accept his condition, he sometimes found it challenging when growing up. In 2003, he said that when he was younger he was often angry and bitter about his condition, but as he got older, he realized that he "just ha (s) to have a sense of humor, '' to know "that it 's not your problem. It 's theirs. '' When asked in 2012 whether he saw himself as "a spokesman for the rights of little people, '' Dinklage responded: "I do n't know what I would say. Everyone 's different. Every person my size has a different life, a different history. Different ways of dealing with it. Just because I 'm seemingly okay with it, I ca n't preach how to be okay with it. 
'' Dinklage has been viewed as a role model for people sharing his condition. At the 2012 Golden Globe ceremony, when Dinklage won the award for Best Supporting Actor -- Series, Miniseries or Television Film, he told the audience that he had been thinking about "a gentleman, his name is Martin Henderson, '' and suggested that they Google his name. Henderson was a person with dwarfism from Somerset, England, who was badly injured by being tossed by a rugby fan in a bar. Henderson made a cameo as a goblin in two Harry Potter films. The speech by Dinklage brought media and public attention to the act of dwarf - tossing, with Henderson 's name trending worldwide on social media. Henderson eventually died of his injuries in 2016, five years after the incident. Dinklage turned down offers from talk shows to discuss the topic. He later explained that 20 years earlier he might have accepted these offers but that he 's a "little bit more at peace with things now and I -- said what I wanted to say. I have a friend who says the world does n't need another angry dwarf. '' According to the review aggregator site Rotten Tomatoes, Dinklage 's most critically acclaimed films are Living in Oblivion (1995), The Station Agent (2003), Lassie (2005), X-Men: Days of Future Past (2014), Three Billboards Outside Ebbing, Missouri (2017) and Avengers: Infinity War (2018). Dinklage won a Golden Globe Award for his performance in Game of Thrones. He has also won two Primetime Emmy Awards for Outstanding Supporting Actor in a Drama Series for the same role. Dinklage has been nominated for the Screen Actors Guild Award for Outstanding Performance by a Male Actor each year from 2013 to 2016. He has also been nominated for the Critics ' Choice Television Award for Best Supporting Actor three times, in 2012, 2016 and 2017. As of 2018, Dinklage has won eleven awards from 58 nominations. 
He has been nominated for seven Primetime Emmy Awards and 14 Screen Actors Guild Awards, winning two Primetime Emmy Awards and a Golden Globe Award. 
which amendment extended the protections of the bill of rights to the states
Incorporation of the Bill of Rights - wikipedia Incorporation, in United States law, is the doctrine by which portions of the Bill of Rights have been made applicable to the states. When the Bill of Rights was first ratified, courts held that its protections only extended to the actions of the federal government and that the Bill of Rights did not place limitations on the authority of state and local governments. However, in the post-Civil War era, beginning in 1897 with Chicago, Burlington and Quincy Railroad v. City of Chicago, various portions of the Bill of Rights have been held to be applicable to state and local governments by incorporation through the Fourteenth Amendment. Prior to the ratification of the Fourteenth Amendment and the development of the incorporation doctrine, the Supreme Court in 1833 held in Barron v. Baltimore that the Bill of Rights applied only to the federal government, not to state governments. Even years after the ratification of the Fourteenth Amendment, the Supreme Court in United States v. Cruikshank (1876) still held that the First and Second Amendments did not apply to state governments. However, beginning in the 1920s, a series of United States Supreme Court decisions interpreted the Fourteenth Amendment to "incorporate '' most portions of the Bill of Rights, making these portions, for the first time, enforceable against the state governments. The United States Bill of Rights is the first ten amendments to the United States Constitution. 
Proposed following the oftentimes bitter 1787 -- 88 battle over ratification of the United States Constitution, and crafted to address the objections raised by Anti-Federalists, the Bill of Rights amendments add to the Constitution specific guarantees of personal freedoms and rights, clear limitations on the government 's power in judicial and other proceedings, and explicit declarations that all powers not specifically delegated to Congress by the Constitution are reserved for the states or the people. The concepts codified in these amendments are built upon those found in several earlier documents, including the Virginia Declaration of Rights and the English Bill of Rights 1689, along with earlier documents such as Magna Carta (1215). Although James Madison 's proposed amendments included a provision to extend the protection of some of the Bill of Rights to the states, the amendments that were finally submitted for ratification applied only to the federal government. In the 1833 case of Barron v. Baltimore, the Supreme Court of the United States held that the Bill of Rights did not apply to state governments; such protections were instead provided by the constitutions of each state. After the Civil War, Congress and the states ratified the Fourteenth Amendment, which included the Due Process Clause and the Privileges or Immunities Clause. While the Fifth Amendment had included a due process clause, the due process clause of the Fourteenth Amendment crucially differed from the Fifth Amendment in that it explicitly applied to the states. The Privileges or Immunities Clause also explicitly applied to the states, unlike the Privileges and Immunities Clause of Article IV of the Constitution. In the Slaughter - House Cases (1873), the Supreme Court ruled that the Privileges or Immunities Clause was not designed to protect individuals from the actions of state governments. In Twining v.
New Jersey (1908), the Supreme Court acknowledged that the Due Process Clause might incorporate some of the Bill of Rights, but continued to reject any incorporation under the Privileges or Immunities Clause. The doctrine of incorporation has been traced back to either Chicago, Burlington and Quincy Railroad v. City of Chicago (1897), in which the Supreme Court appeared to require some form of just compensation for property appropriated by state or local authorities (although there was a state statute on the books that provided the same guarantee), or, more commonly, to Gitlow v. New York (1925), in which the Court expressly held that States were bound to protect freedom of speech. Since that time, the Court has steadily incorporated most of the significant provisions of the Bill of Rights. Provisions that the Supreme Court either has refused to incorporate, or whose possible incorporation has not yet been addressed, include the Fifth Amendment right to an indictment by a grand jury and the Seventh Amendment right to a jury trial in civil lawsuits. Incorporation applies the Bill of Rights ' guarantees against the states both procedurally and substantively. Thus, procedurally, only a jury can convict a defendant of a serious crime, since the Sixth Amendment jury - trial right has been incorporated against the states; substantively, for example, states must recognize the First Amendment prohibition against a state - established religion, regardless of whether state laws and constitutions offer such a prohibition. The Supreme Court has declined, however, to apply new procedural constitutional rights retroactively against the states in criminal cases (Teague v. Lane, 489 U.S. 288 (1989)) with limited exceptions, and it has waived constitutional requirements if the states can prove that a constitutional violation was "harmless beyond a reasonable doubt. '' Rep.
John Bingham, the principal framer of the Fourteenth Amendment, advocated that the Fourteenth applied the first eight Amendments of the Bill of Rights to the States. The U.S. Supreme Court subsequently declined to interpret it that way, despite the dissenting argument in the 1947 case of Adamson v. California by Supreme Court Justice Hugo Black that the framers ' intent should control the Court 's interpretation of the Fourteenth Amendment (he included a lengthy appendix that quoted extensively from Bingham 's congressional testimony). Although the Adamson Court declined to adopt Black 's interpretation, the Court during the following twenty - five years employed a doctrine of selective incorporation that succeeded in extending to the States almost all of the protections in the Bill of Rights, as well as other, unenumerated rights. The Bill of Rights thus imposes legal limits on the powers of governments and acts as an anti-majoritarian / minoritarian safeguard by providing deeply entrenched legal protection for various civil liberties and fundamental rights. The Supreme Court for example concluded in the West Virginia State Board of Education v. Barnette (1943) case that the founders intended the Bill of Rights to put some rights out of reach from majorities, ensuring that some liberties would endure beyond political majorities. As the Court noted the idea of the Bill of Rights "was to withdraw certain subjects from the vicissitudes of political controversy, to place them beyond the reach of majorities and officials and to establish them as legal principles to be applied by the courts. '' This is why "fundamental rights may not be submitted to a vote; they depend on the outcome of no elections. '' The 14th Amendment has vastly expanded civil rights protections and is cited in more litigation than any other amendment to the U.S. Constitution. 
From the 1940s through the 1960s, the Supreme Court gradually issued a series of decisions incorporating several of the specific rights from the Bill of Rights, so as to be binding upon the States. A dissenting school of thought championed by Justice Hugo Black supported incorporation of specific rights, but urged incorporation of all of them rather than only some. Black was for so - called mechanical incorporation, or total incorporation, of Amendments 1 through 8 of the Bill of Rights (Amendments 9 and 10 being patently connected to the powers of the state governments). Black felt that the Fourteenth Amendment required the States to respect all of the enumerated rights set forth in the first eight amendments, but he did not wish to see the doctrine expanded to include other, unenumerated "fundamental rights '' that might be based on the Ninth Amendment. Black felt that his formulation eliminated any arbitrariness or caprice in deciding what the Fourteenth Amendment ought to protect, by sticking to words already found in the Constitution. Although Black was willing to invalidate federal statutes on federalism grounds, he was not inclined to read any of the first eight amendments as states ' rights provisions as opposed to individual rights provisions. Justice Black felt that the Fourteenth Amendment was designed to apply the first eight amendments from the Bill of Rights to the states, as he expressed in his dissenting opinion in Adamson v. California. This view was again expressed by Black in his concurrence in Duncan v. Louisiana, citing the Fourteenth Amendment 's Privileges or Immunities Clause: "' No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States ' seem to me an eminently reasonable way of expressing the idea that henceforth the Bill of Rights shall apply to the States.
'' Justice Felix Frankfurter, however, felt that the incorporation process ought to be incremental, and that the federal courts should only apply those sections of the Bill of Rights whose abridgment would "shock the conscience, '' as he put it in Rochin v. California (1952). Such a selective incorporation approach followed that of Justice Moody, who wrote in Twining v. New Jersey (1908) that "It is possible that some of the personal rights safeguarded by the first eight Amendments against National action may also be safeguarded against state action, because a denial of them would be a denial of due process of law. If this is so, it is not because those rights are enumerated in the first eight Amendments, but because they are of such a nature that they are included in the conception of due process of law. '' The due process approach thus considers a right to be incorporated not because it was listed in the Bill of Rights, but only because it is required by the definition of due process, which may change over time. For example, Moody 's decision in Twining stated that the 5th Amendment right against self - incrimination was not inherent in a conception of due process and so did not apply to states, but was overruled in Malloy v. Hogan (1964). Similarly, Justice Cardozo stated in Palko v. Connecticut (1937) that the right against double jeopardy was not inherent to due process and so does not apply to the states, but that was overruled in Benton v. Maryland (1969). Frankfurter 's incrementalist approach did carry the day, but the end result is very nearly what Justice Black advocated, with the exceptions noted below. -- Privileges or Immunities Clause of the Fourteenth Amendment Some have suggested that the Privileges or Immunities Clause would be a more appropriate textual basis than the due process clause for incorporation of the Bill of Rights. 
It is often said that the Slaughter - House Cases "gutted the privileges or immunities clause '' and thus prevented its use for applying the Bill of Rights against the states. In his dissent to Adamson v. California, however, Justice Hugo Black pointed out that the Slaughter - House Cases did not directly involve any right enumerated in the Constitution: (T)he state law under consideration in the Slaughter - House cases was only challenged as one which authorized a monopoly, and the brief for the challenger properly conceded that there was "no direct constitutional provision against a monopoly. '' The argument did not invoke any specific provision of the Bill of Rights, but urged that the state monopoly statute violated "the natural right of a person '' to do business and engage in his trade or vocation. Thus, in Black 's view, the Slaughterhouse Cases should not impede incorporation of the Bill of Rights against the states, via the Privileges or Immunities Clause. Some scholars go even further, and argue that the Slaughterhouse Cases affirmatively supported incorporation of the Bill of Rights against the states. In dicta, Justice Miller 's opinion in Slaughterhouse went so far as to acknowledge that the "right to peaceably assemble and petition for redress of grievances... are rights of the citizen guaranteed by the Federal Constitution, '' although in context Miller may have only been referring to assemblies for petitioning the federal government. In the 2010 landmark case McDonald v. Chicago, the Supreme Court declared the Second Amendment is incorporated through the Due Process Clause. However, Justice Thomas, the fifth justice in the majority, criticized substantive due process and declared instead that he reached the same incorporation only through the Privileges or Immunities Clause. No other justice attempted to question his rationale.
This is considered by some as a "revival '' of the Privileges or Immunities Clause; however, as it is a concurring opinion and not the majority opinion in the case, it is not binding precedent in lower courts; it is merely an indication that SCOTUS may be inclined, given the proper question, to reconsider and ultimately reverse the Slaughterhouse Cases. Many of the provisions of the First Amendment were applied to the States in the 1930s and 1940s, but most of the procedural protections provided to criminal defendants were not enforced against the States until the Warren Court of the 1960s, famous for its concern for the rights of those accused of crimes, brought state standards in line with federal requirements. The following list enumerates, by amendment and individual clause, the rights that have been addressed by incorporation. (The Ninth Amendment is not listed; its wording indicates that it "is not a source of rights as such; it is simply a rule about how to read the Constitution. '' The Tenth Amendment is also not listed; by its wording, it is a reservation of powers to the states and to the people.)
First Amendment: Guarantee against establishment of religion; Guarantee of free exercise of religion; Guarantee of freedom of speech; Guarantee of freedom of the press; Guarantee of freedom of assembly; Guarantee of the right to petition for redress of grievances; Guarantee of freedom of expressive association.
Second Amendment: Right to keep and bear arms.
Third Amendment: Freedom from quartering of soldiers. In 1982, the Second Circuit applied the Third Amendment to the states in Engblom v. Carey. This is a binding authority over Connecticut, New York, and Vermont, but is only a persuasive authority over the remainder of the United States.
The Tenth Circuit has suggested that the right is incorporated because the Bill of Rights explicitly codifies the "fee ownership system developed in English law '' through the Third, Fourth, and Fifth Amendments, and the Fourteenth Amendment likewise forbids the states from depriving citizens of their property without due process of law. See United States v. Nichols, 841 F. 2d 1485, 1510 n. 1 (10th Cir. 1988). The "problem '' is that the Third Amendment, by and large, is the only one that is almost never violated by the states and the federal government; almost nobody is suing over the issue, so very few cases are being heard. The U.S. Supreme Court has never had a Third Amendment case appealed to it.
Fourth Amendment: Unreasonable search and seizure; Warrant requirements.
Fifth Amendment: Right to indictment by a grand jury; Protection against double jeopardy; Constitutional privilege against self - incrimination; Protection against taking of private property without just compensation.
Sixth Amendment: Right to a speedy trial; Right to a public trial; Right to trial by impartial jury; Right to a jury selected from residents of the state and district where the crime occurred; Right to notice of accusations; Right to confront adverse witnesses; Right to compulsory process (subpoenas) to obtain witness testimony; Right to assistance of counsel.
Seventh Amendment: Right to jury trial in civil cases; Re-Examination Clause.
Eighth Amendment: Protection against excessive bail; Protection against excessive fines; Protection against cruel and unusual punishments.
A similar legal doctrine to incorporation is that of reverse incorporation. Whereas incorporation applies the Bill of Rights to the states through the Due Process Clause of the Fourteenth Amendment, in reverse incorporation, the Equal Protection Clause of the Fourteenth Amendment has been held to apply to the federal government through the Due Process Clause located in the Fifth Amendment. For example, in Bolling v. Sharpe, 347 U.S. 497 (1954), which was a companion case to Brown v.
Board of Education, the schools of the District of Columbia were desegregated even though Washington is a federal enclave. Likewise, in Adarand Constructors, Inc. v. Peña 515 U.S. 200 (1995), an affirmative action program by the federal government was subjected to strict scrutiny based on equal protection.
space jam i can believe i can fly
I Believe I Can Fly - wikipedia "I Believe I Can Fly '' is a 1996 song written, produced and performed by American singer R. Kelly, from the soundtrack to the 1996 film Space Jam. It was originally released on November 26, 1996, and was later included on Kelly 's 1998 album R. In early 1997, "I Believe I Can Fly '' reached number two on the Billboard Hot 100; it was kept from the number one spot by Toni Braxton 's "Un-Break My Heart ''. Although Kelly has had two number one songs on the pop chart, "I Believe I Can Fly '' is his most successful single. It reached the number - one spot on the Billboard R&B Singles Chart and remained there for six non-consecutive weeks, keeping "Un-Break My Heart '' from the top position of that chart for four of those weeks. "I Believe I Can Fly '' also topped the charts in eight countries (including the United Kingdom), has won three Grammy Awards, and was ranked number 406 on Rolling Stone 's list of the 500 Greatest Songs of All Time in 2004. The music video was directed by Kelly with Hype Williams. Besides appearing on the soundtrack for the film Space Jam, "I Believe I Can Fly '' was performed by the school band in the movie Drumline during the high school graduation ceremony of Devon Miles (Nick Cannon). R. Kelly performed his song at the 40th Annual Grammy Awards. The STS - 122 crew heard this song on flight day 10 as a wake - up call. Since its release, it has become commonly associated with the NBA, most notably with Michael Jordan. The song also played at the conclusion of NBC 's broadcast of the 1997 NBA Finals. In addition to the NBA, the song also found use at other sporting events, most notably at Major League Baseball 's New York Yankees home games during their four consecutive World Series runs from 1998 to 2001, the first three of which they won. A version of the song, recorded by the Halifax community choir, was used as the backing track to a 2012 UK TV advertisement for the Halifax Bank.
On October 13, 2012, when the Space Shuttle Endeavour was being transferred from Los Angeles International Airport to the California Science Center through the streets of Los Angeles, the recording was played as the shuttle left The Forum, and the song was performed live by James Ingram later that day at Debbie Allen 's live show celebrating the Endeavour 's arrival at the corner of Crenshaw Blvd and Martin Luther King Blvd. (The shuttle was delayed over five hours in arriving there; to keep the crowd entertained, the performance went on only slightly delayed.)
academy of country music entertainer of the year winners
Academy of Country Music Award for Entertainer of the Year - wikipedia The Academy of Country Music Award for Entertainer of the Year is the most prestigious competitive award presented at the Academy of Country Music Awards. The following is the list of winners, with the year representing the nominated work.
how many seasons of survivor has ozzy been on
Ozzy Lusth - wikipedia Oscar "Ozzy '' Lusth (born August 23, 1981, in Guanajuato, Guanajuato, Mexico) is an American reality television contestant who has appeared on several shows, including Survivor: Cook Islands, where he finished as the runner - up; Survivor: Micronesia; and Survivor: South Pacific. He also competed in the 34th season, Survivor: Game Changers, and in the second season of American Ninja Warrior. He also appeared on the Playboy reality series Foursome. Lusth was born on August 23, 1981, in Guanajuato, Mexico. When he was only a couple of years old, his parents divorced and he moved with his mother to Durham, North Carolina, to be closer to his relatives. After a few more moves, his mother settled with him and his two siblings, Katrin and Zoe, in Mountain View, California. He went to high school in Mountain View. After graduating, he attended Santa Barbara City College for two years, until he moved to the Los Angeles area. On Survivor: Cook Islands, Lusth was originally a member of the Aitutaki (Aitu) tribe, which represented Latinos. Having no particular alliance with any members, he was seen as a threat. In Episode Two, he suggested the tribe throw the immunity challenge so they could vote out the weak link, Billy Garcia. Lusth 's plan was met with skepticism, particularly from Cristina Coria. However, Garcia was voted out at Tribal Council. Following a tribe switch, Lusth remained in Aitu with only one of his original tribemates, Cecelia Mansilla. Lusth was perceived as a threat, and he led his tribe to numerous immunity wins. Yet he was not part of the dominating alliance of Yul Kwon, Candice Woodcock, Becky Lee, and Jonathan Penner, and Lusth lost his original tribemate, Mansilla. Following the third Tribal Council, Lusth was angry that his tribe had voted for Mansilla, but was comforted by his tribemates. After two straight wins, Probst announced both tribes would go to Tribal Council.
Lusth, along with the rest of the tribe, except Jessica "Flica '' Smith, denounced "Plan Voodoo '' and voted out Cao Boi Bui. The tribe lost immunity at the next challenge and Smith was unanimously voted out. Lusth played a large part in the next three straight wins and the Rarotonga (Raro) tribe saw their numbers dwindling. However, when Probst announced an offer of mutiny, the Aitu tribe was left shocked and decimated, as original Raro members Woodcock and Penner returned, thus making Raro eight members strong. The challenge following the mutiny was dominated by the Aitu Four: Sundra Oakley, Lusth, Kwon and Lee. The Aitu Four sent Woodcock to Exile Island several times, eventually sealing the fate of both her and Penner. The mutiny sealed Lusth in his alliance with Oakley, Kwon, and Lee, and the Aitu Four dominated subsequent challenges. Following the tribal merge, Lusth remained loyal to the Aitu Four and voted with his allies to send Nate Gonzalez, Woodcock, Penner, Parvati Shallow and Adam Gentry off, despite times when he considered aligning with Shallow. Lusth was seen as a threat by allies Lee and Oakley, who went to Kwon in an attempt to vote Lusth out. But Lusth won immunity once again, forcing the expulsion of former Raro members. Lusth won the final immunity challenge and scored a spot in the Final Three, with Kwon, who had the hidden immunity idol. Forced to vote one of their own out, Lee and Oakley tied with two votes against each. Lee won the fire - making tiebreaker, securing a spot with Kwon and Lusth. At the final Tribal Council, Lusth was praised for his athletic skills, yet criticized for being a loner. He finished in second place in a 5 -- 4 -- 0 vote and was the first male runner - up in two years, since the eighth season. Probst stated at the reunion show that he had dominated. Prior to the million dollar vote, Gentry had promised he would give Kwon his vote if he survived longer than Penner. Kwon did and Gentry followed suit.
Lusth gained the votes of Raro members Shallow, Gonzalez, Jenny Guzon - Bae and Rebecca Borman. Lusth was the longest - lasting contestant from the original Aitu tribe, and stayed so after the tribe switch. Dominant in swimming, agility and balance, he won five out of six individual immunity challenges. He won a Mercury Mariner for being voted favorite player of the season. With five immunity challenge wins in Cook Islands, Lusth became the fourth player, after Colby Donaldson, Tom Westman and Terry Deitz, to win a record five immunity challenges in a single season. Since then, Mike Holloway, on Survivor: Worlds Apart, and Brad Culpepper, on Survivor: Game Changers, have also accomplished this feat.
Faced with a loss in Episode Three, the four recruited Fields as their fifth member and voted out Chan, whom Fields perceived as a threat. In Episode Four, Lusth was banished to Exile Island, where he found the immunity idol and put a fake one in the place where he found the real one. Following a tribal switch in Episode Five, he remained with two of his alliance members, Kimmel and Fields. The tribe began to lose immunity challenges and subsequently voted off the Fans: Joel Anderson, Chet Welch, and Tracy Hughes - Wolf; during this time, Penner, now a member of Airai, was evacuated and Kathy Sleckman quit. With the tribe numbers becoming even, Malakal lost the challenge before the merge and voted off Cusack. When Lusth reunited with his alliance at the merge, the five of them took control of the game, voting off Orlins, whom Shallow disliked. He became comfortable with his position in the game, trusting Kimmel, Clement, and Shallow completely. He also found an ally in Fan Erik Reichenbach, who admired Lusth 's abilities. However, Shallow had plans of her own. Having aligned herself with Fans Natalie Bolton and Alexis Jones while on Airai, she found herself in trouble when Fields approached her with a plan to blindside Lusth. Shallow 's vote became the swing vote and she was confused, debating whether or not to vote her ally out. But Shallow stuck with Fields and convinced Lusth to leave his idol back at camp. At Tribal Council, Lusth was blindsided, and this became the start of the women 's alliance 's domination, led by Shallow. At the Final Tribal Council, he berated Shallow for selling away their friendship and refused to let her talk. He also confessed his love for Kimmel, for whom he cast his vote to win, saying she deserved the money a million times more than Shallow. He and Kimmel were still together as of the reunion show, but he revealed in an interview that they had separated before he returned for Season 23, Survivor: South Pacific.
He also revealed at the reunion show that he and Shallow have healed their friendship. Lusth returned, along with Benjamin "Coach '' Wade, for a third shot at the game in Survivor: South Pacific. He revealed that he wanted to play a strategic game this time, feeling that everyone only saw him as a physical threat in previous games. He was randomly assigned to the Savaii tribe, where he quickly formed a core alliance with Keith Tollefson and Jim Rice, with Whitney Duncan and Elyse Umemoto being the extra members. His increasing closeness with Umemoto prompted Rice to undercut his power in Episode Five by convincing the tribe to vote out Umemoto instead of John Cochran. This initially angered Lusth, but he later forgave his tribe. As the tribe approached what they suspected was the last Tribal Council before the merge and the return of the Redemption Island victor, Lusth made an unprecedented, bold, and strategic move by requesting that his tribe vote him out to Redemption Island, in the hope that he would beat long - running Redemption Island resident Christine Shields - Markoski and return to the merged tribe, making the former Savaii numbers equal to the former Upolu members. Jeff Probst has called this one of the greatest moves in Survivor history. The plan worked, but at the first post-merge Tribal Council Cochran betrayed his own tribe and joined with Upolu in voting out Tollefson, followed by Lusth and his other former tribemates, one by one. Back on Redemption Island, Lusth defeated Tollefson, Rice, Dawn Meehan, Duncan, Cochran, Edna Ma, and Brandon Hantz to stay in the game. Lusth won immunity when he got back into the game, and Rick Nelson was voted off. However, in the next challenge Lusth was defeated by Sophie Clarke at a puzzle game and was the last person voted off, becoming the final member of the jury. He later cast his vote for Clarke to win.
He did, however, win the $100,000 Sprint Player of the Season prize over Cochran, who came in a distant second. Lusth was also the last member of the original Savaii tribe and set a Survivor record in being voted out three times in one season. Lusth returned for a fourth time in the 34th season, Survivor: Game Changers. This made him one of only four contestants at the time to ever compete on Survivor four times -- the others being "Boston Rob '' Mariano, Rupert Boneham, and fellow Game Changers contestant Cirie Fields. On Day 16, Lusth attended his first Tribal Council, where he joined the majority in voting out two - time winner Sandra Diaz - Twine. He had to go to Tribal Council again on Day 18, when he joined the majority in eliminating Jeff Varner. He was ultimately blindsided on Day 24 for being too big of a physical threat. He placed 12th and was the second member of the jury. At Final Tribal Council, he praised finalist Brad Culpepper for his gameplay and voted for Culpepper to win, although Sarah Lacina would ultimately win the title of "Sole Survivor. '' Shortly after Survivor: South Pacific ended, Ozzy was inducted into Xfinity 's Survivor "Hall of Fame '' in the class of 2011, alongside Cirie Fields and Tom Westman. Several years later, in the official issue of CBS Watch magazine commemorating the 15th anniversary of Survivor, Lusth performed well in two major viewer polls that were released in the magazine. He came in fifth in the poll for "Greatest Castaway of All Time, '' and he came in third in the "Hottest Male Castaway '' poll, behind Colby Donaldson and Malcolm Freberg. He was also the only male contestant to appear in both polls, and one of only two contestants overall to appear in both the "greatest players '' poll and one of the "most attractive '' polls, the other being fellow Cook Islands and Micronesia contestant Parvati Shallow. 
Lastly, his elimination in Episode Ten of Micronesia was voted by viewers in the same magazine as the # 2 most memorable moment in the series, only behind Sandra Diaz - Twine burning Russell Hantz 's hat in Episode 14 of Survivor: Heroes vs. Villains. By making the merge in Survivor: Game Changers, Lusth became the first and only player in Survivor history to make the merge four times. Lusth also currently holds the record for the most time spent playing Survivor, with 128 days in the game as an active contestant, followed by Cirie Fields (121 days), Rob Mariano (117 days), Shallow (114 days), Kimmel (108 days), Rupert Boneham (104 days) and Andrea Boehlke (103 days). Lusth lives in Venice Beach, California. He once owned the Brakeman Brewery in Los Angeles, which has since closed.
when can a foul ball be caught for an out
Foul ball - wikipedia In baseball, a foul ball is a batted ball that settles on foul territory between home plate and first base or between home plate and third base, that bounds past first or third base on or over foul territory, that first falls on foul territory beyond first or third base, or that, while on or over foul territory, touches the person of an umpire or player or any object foreign to the natural ground. A foul fly shall be judged according to the relative position of the ball and the foul line, including the foul pole, and not as to whether the fielder is on foul or fair territory at the time he touches the ball. Additionally, ballpark ground rules may specify that batted balls striking certain fixed objects, such as railings, nets, or a roof if present, are foul balls. Foul territory or foul ground is defined as that part of the playing field outside the first and third base lines extended to the fence and perpendicularly upwards. Note: the foul lines and foul poles are not part of foul territory. In general, when a batted ball is ruled a foul ball, the ball is dead, all runners must return to their time - of - pitch base without liability to be put out, and the batter returns to home plate to continue his turn at bat. A strike is issued for the batter if he had fewer than two strikes. If the batter already has two strikes against him when he hits a foul ball, a strike is not issued unless the ball was bunted to become a foul ball, in which case a third strike is issued and a strikeout recorded for the batter and pitcher. A strike is, however, recorded for the pitcher for every foul ball the batter hits, regardless of the count. If any member of the fielding team catches a foul ball before it touches the ground or lands outside the field perimeter, the batter is out. However, the caught ball is in play and base runners may attempt to advance. A foul ball is different from a foul tip, in which the ball makes contact with the bat, travels directly to the catcher 's hands, and is caught. In this case, the ball remains live and a strike is added to the batter 's count. If a foul tip is strike three, the batter is out.
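The strike - count bookkeeping described above (a foul adds a strike only when the batter has fewer than two strikes, except that a bunted foul with two strikes is strike three) can be written as a small function. This is an illustrative sketch, not an official scoring rule implementation; the function name is invented, and foul tips are deliberately not modeled.

```python
def strikes_after_foul(strikes: int, bunted: bool = False) -> int:
    """Illustrative helper: a batter's strike count after hitting a foul ball.

    A foul counts as a strike when the batter has fewer than two strikes.
    With two strikes, the count is unchanged unless the foul was a bunt,
    which is strike three (a strikeout). Foul tips follow a separate rule
    and are not modeled here.
    """
    if strikes < 2:
        return strikes + 1  # foul counts as a strike below two strikes
    return 3 if bunted else 2  # two-strike foul: only a bunt is strike three

# A two-strike foul leaves the count at two strikes...
assert strikes_after_foul(2) == 2
# ...but a bunted foul with two strikes retires the batter.
assert strikes_after_foul(2, bunted=True) == 3
```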
On rare occasions, such as in extra innings or the ninth inning of a tie game when a runner is on third base with fewer than two outs, fielders have been known to let long foul flies drop rather than risk losing the game on a sacrifice fly. Sometimes, in that situation, a fielder will not try to catch a ball that is close to the foul line in the hope that the ball will go foul at the last second. In different situations, a foul ball may be considered a positive or negative outcome of a pitch or swing. When there are zero or one strikes, a foul ball counts as a strike, benefiting the pitcher. However, a foul ball may reveal to the batter that he has timed a pitch well and need only adjust the location of his swing on the next such pitch; this is often called a good cut or simply a good swing. Foul balls with two strikes are generally considered positive for the batter, since he thus avoids strike three on a potentially difficult pitch. Also, foul balls with two strikes increase the pitcher 's pitch count, adding to his or her fatigue, thus providing some small advantage to the offense. A strategy of swinging at any hittable pitch to produce additional fouls and prolong an at - bat is often used against strong pitchers, both to drive them from the game sooner and to increase the chance of eventually getting a pitch the batter can hit safely; this does, however, have the disadvantage of generating more strikeouts.
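The strike-count rules described above can be sketched as a small function. This is a minimal sketch; the function name and signature are illustrative, not taken from any real scoring system:

```python
def strikes_after_foul(strikes: int, bunted: bool = False) -> tuple[int, bool]:
    """Apply a foul ball to the count; return (new_strikes, batter_out).

    With fewer than two strikes, any foul adds a strike. With two
    strikes, only a bunted foul becomes strike three (a strikeout);
    an ordinary two-strike foul leaves the count unchanged.
    """
    if strikes < 2:
        return strikes + 1, False
    if bunted:
        return 3, True   # bunted foul with two strikes: strikeout
    return strikes, False  # count unchanged, at-bat continues

print(strikes_after_foul(0))              # (1, False)
print(strikes_after_foul(2))              # (2, False)
print(strikes_after_foul(2, bunted=True)) # (3, True)
```

Note that the foul-tip case is deliberately excluded: a caught foul tip always adds a strike and can be strike three, so it would need a separate branch.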
when was the last time the miami dolphins made the playoffs
List of Miami Dolphins seasons - wikipedia This is a list of seasons completed by the Miami Dolphins American football franchise of the National Football League (NFL). The list documents the season - by - season records of the Dolphins franchise from 1966 to present, including postseason records, and league awards for individual players or head coaches. Although the Miami Dolphins were not successful before joining the NFL, from 1970 when they played their first season after the AFL -- NFL merger until 2001 they were one of the most successful teams in the league, playing in the postseason on 22 occasions over those 32 years and winning 335 and tying two of 528 games for an overall win percentage of 63.6. Early in this period the Dolphins won their only two Super Bowls in consecutive seasons, in the process achieving the only modern - day perfect season in any major professional sports league during only their third year in the NFL. Much of this success was orchestrated by coach Don Shula who joined the team in 1970 and stayed with them until his retirement in 1995. After Shula retired in 1995, the Dolphins remained a force for six years under successors Jimmy Johnson and Dave Wannstedt, but since 2002 and especially since 2004 have fallen on harder times, reaching the postseason only twice in the twelve seasons since. In 2007, they nearly suffered an imperfect season, winning only one game in overtime. For complete team history, see History of the Miami Dolphins. Note: Statistics are correct through the end of the 2017 NFL season. Due to a strike - shortened season in 1982, all teams were ranked by conference instead of division.
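The 63.6 win percentage quoted above follows the standard NFL convention of counting each tie as half a win; a quick check using the numbers from this paragraph:

```python
# 1970-2001 record from the text: 335 wins and 2 ties in 528 games
wins, ties, games = 335, 2, 528
win_pct = (wins + 0.5 * ties) / games * 100
print(round(win_pct, 1))  # 63.6
```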
who was the united states president that was most responsible for the canal
History of the Panama canal - wikipedia The idea of the Panama canal dates back to the 1513 discovery of the isthmus by Vasco Núñez de Balboa. The narrow land bridge between North and South America houses the Panama Canal, a water passage between the Atlantic and Pacific Oceans. The earliest European colonists recognized this potential, and several proposals for a canal were made. By the late nineteenth century, technological advances and commercial pressure allowed construction to begin in earnest. Noted canal engineer Ferdinand de Lesseps led an initial attempt by France to build a sea - level canal. Beset by cost overruns due to the severe underestimation of the difficulties in excavating the rugged Panama land, heavy personnel losses in Panama due to tropical diseases, and political corruption in France surrounding the financing of the massive project, the project succeeded in only partially completing the canal. Interest in a U.S. - led canal effort picked up as soon as France abandoned the project. Initially, the Panama site was politically unfavorable in the U.S. for a variety of reasons, including the taint of the failed French efforts and the Colombian government 's unfriendly attitude towards the U.S. continuing the project. The U.S. first sought to construct a completely new canal through Nicaragua instead. French engineer and financier Philippe - Jean Bunau - Varilla played a key role in changing U.S. attitudes. Bunau - Varilla had a large stake in the failed French canal company, and stood to make money on his investment only if the Panama Canal was completed. Extensive lobbying of U.S. lawmakers coupled with his support of a nascent independence movement among the Panamanian people led to a simultaneous revolution in Panama and the negotiation of the Hay -- Bunau - Varilla Treaty which secured both independence for Panama and the right for the U.S. to lead a renewed effort to construct the canal. 
Colombia 's response to the Panamanian independence movement was tempered by U.S. military presence; the move is often cited as a classic example of the era of gunboat diplomacy. U.S. success hinged on two factors. First was converting the original French sea - level plan to a more realistic lock - controlled canal. The second was controlling the diseases which decimated workers and management alike under the original French attempt. Initial chief engineer John Frank Stevens built much of the infrastructure necessary for later construction; slow progress on the canal itself led to his replacement by George Washington Goethals. Goethals oversaw the bulk of the excavation of the canal, including appointing Major David du Bose Gaillard to oversee the most daunting project, the Culebra Cut through the roughest terrain on the route. Almost as important as the engineering advances were the healthcare advances made during the construction, led by William C. Gorgas, an expert in controlling tropical diseases such as yellow fever and malaria. Gorgas was one of the first to recognize the role of mosquitoes in the spread of these diseases, and by focusing on controlling the mosquitoes he greatly improved worker conditions. On 7 January 1914 the French crane boat Alexandre La Valley became the first to make the traverse, and on 1 April 1914 the construction was officially completed with the hand - over of the project from the construction company to the Canal Zone government. The outbreak of World War I caused the cancellation of any official "grand opening '' celebration, and the canal officially opened to commercial traffic on 15 August 1914 with the transit of the SS Ancon. During World War II, the canal proved a vital part of the U.S. military strategy, allowing ships to transfer easily between the Atlantic and Pacific.
Politically, the Canal remained a territory of the United States until 1977, when the Torrijos -- Carter Treaties began the process of transferring territorial control of the Panama Canal Zone to Panama, a process completed on 31 December 1999. The Panama Canal continues to be a viable commercial venture and a vital link in world shipping, and continues to be periodically updated and maintained. The Panama Canal expansion project started construction in 2007 and began commercial operation on 26 June 2016. The new locks allow transit of larger Post-Panamax and New Panamax ships, which have a greater cargo capacity than the original locks could accommodate. The idea of a canal across Central America was revived during the early 19th century. In 1819, the Spanish government authorized the construction of a canal and the creation of a company to build it. Although the project stalled for some time, a number of surveys were made between 1850 and 1875. They indicated that the two most - favorable routes were across Panama (then part of Colombia) and Nicaragua; a route across the Isthmus of Tehuantepec in Mexico was a third option. The Nicaraguan route was surveyed. After the 1869 completion of the Suez Canal, France thought that an apparently - similar project to connect the Atlantic and Pacific Oceans could be carried out with little difficulty. In 1876 an international company, La Société internationale du Canal interocéanique, was created to undertake its construction; two years later, it obtained a concession from the Colombian government (since Panama was a Colombian province) to dig a canal across the isthmus. Ferdinand de Lesseps, who was in charge of the Suez Canal construction, headed the project. His enthusiastic leadership and his reputation as the man who had built the Suez Canal persuaded speculators and ordinary citizens to invest nearly $400 million in the project. However, despite his previous success, de Lesseps was not an engineer.
The Suez Canal, essentially a ditch dug through a flat, sandy desert, presented few challenges. Although Central America 's mountainous spine has a low point in Panama, it is still 110 meters (360.9 ft) above sea level at its lowest crossing point. The sea - level canal proposed by de Lesseps would require a great deal of excavation through a variety of rock, rather than Suez ' sand. Less - obvious barriers were the rivers crossing the canal, particularly the Chagres (which flows strongly during the rainy season). Since the water would be a hazard to shipping if it drained into the canal, a sea - level canal would require the river 's diversion. The most serious problem was tropical disease, particularly malaria and yellow fever, whose methods of transmission were unknown at the time. The legs of hospital beds were placed in cans of water to keep insects from crawling up them, but the stagnant water was an ideal breeding place for mosquitoes (carriers of the diseases). The project was plagued by a lack of engineering expertise. In May 1879, an international engineering congress led by de Lesseps convened in Paris. Of its 136 delegates, only 42 were engineers; the others were speculators, politicians and friends of de Lesseps. He was convinced that a sea - level canal, dug through the mountainous spine of Central America, could be completed at least as easily as the Suez Canal. The engineering congress estimated the project 's cost at $214 million; on February 14, 1880, an engineering commission revised the estimate to $168.6 million. De Lesseps reduced this estimate twice, with no apparent justification: on February 20 to $131.6 million and on March 1 to $120 million. The congress estimated seven or eight years as the time required to complete the canal; de Lesseps reduced this estimate to six years (the Suez Canal required ten). 
The proposed sea - level canal would have a uniform depth of 9 meters (29.5 ft), a bottom width of 22 meters (72.2 ft) and a width at water level of about 27.5 meters (90.2 ft); the excavation estimate was 120,000,000 m³ (157,000,000 cu yd). A dam was proposed at Gamboa to control flooding of the Chagres River, with channels to drain water away from the canal. However, the Gamboa dam was later found impracticable and the Chagres River problem was left unsolved. Construction of the canal began on January 1, 1881, with digging at Culebra beginning on January 22. A large labor force was assembled, numbering about 40,000 in 1888 (nine - tenths of whom were Afro - Caribbean workers from the West Indies). Although the project attracted good, well - paid French engineers, retaining them was difficult due to disease. The death toll from 1881 to 1889 was estimated at over 22,000, of whom as many as 5,000 were French citizens. By 1885 it had become clear to many that a sea - level canal was impractical, and an elevated canal with locks was preferable; de Lesseps resisted, and a lock canal plan was not adopted until October 1887. By this time increasing mortality rates, as well as financial and engineering problems coupled with frequent floods and mudslides, indicated that the project was in serious trouble. Work continued under the new plan until May 15, 1889, when the company went bankrupt and the project was suspended. After eight years the canal was about two - fifths completed, and about $234.8 million had been spent. The company 's collapse was a scandal in France, and the antisemitic Edouard Drumont exploited the role of two Jewish speculators in the affair. One hundred and four legislators were found to have been involved in the corruption, and Jean Jaurès was commissioned by the French parliament to conduct an inquiry which was completed in 1893. It soon became clear that the only way to recoup expenses for the stockholders was to continue the project.
A new concession was obtained from Colombia, and in 1894 the Compagnie Nouvelle du Canal de Panama was created to finish the canal. To comply with the terms of the contract, work began immediately on the Culebra excavation while a team of engineers began a comprehensive study of the project. They eventually settled on a plan for a two - level, lock - based canal. The new effort never gained traction, mainly because of US speculation that a canal through Nicaragua would render one through Panama useless. Employment on the new project peaked at 3,600 men (in 1896), primarily to comply with the terms of the concession and to maintain the existing excavation and equipment in saleable condition. The company had already begun looking for a buyer, with an asking price of $109 million. In the US, a congressional Isthmian Canal Commission was established in 1899 to examine possibilities for a Central American canal and recommend a route. In November 1901, the commission reported that a US canal should be built through Nicaragua unless the French were willing to sell their holdings for $40 million. The recommendation became law on June 28, 1902, and the New Panama Canal Company was compelled to sell at that price. Although the French effort was, to a large extent, doomed to failure from the beginning due to disease and a lack of understanding of the engineering difficulties, it was not entirely futile. The old and new companies excavated 59,747,638 m³ (78,146,960 cu yd) of material, of which 14,255,890 m³ (18,646,000 cu yd) was taken from the Culebra Cut. The old company dredged a channel from Panama Bay to the port at Balboa, and the channel dredged on the Atlantic side (known as the French canal) was useful for bringing in sand and stone for the locks and spillway concrete at Gatún. Detailed surveys and studies (particularly those carried out by the new canal company) and machinery, including railroad equipment and vehicles, aided the later American effort.
The French lowered the summit of the Culebra Cut along the canal route by five meters (17 ft), from 64 to 59 metres (210 to 194 ft). An estimated 22,713,396 m³ (29,708,000 cu yd) of excavation, valued at about $25.4 million, and equipment and surveys valued at about $17.4 million were usable by the Americans. The 1848 discovery of gold in California and the rush of would - be miners stimulated US interest in building a canal between the oceans. In 1887, a United States Army Corps of Engineers regiment surveyed canal possibilities in Nicaragua. Two years later, the Maritime Canal Company was asked to begin a canal in the area and chose Nicaragua. The company lost money in the panic of 1893, and its work in Nicaragua ceased. In 1897 and 1899, the United States Congress charged a canal commission with researching possible construction; Nicaragua was chosen as the location both times. Although the Nicaraguan canal proposal was made redundant by the American takeover of the French Panama Canal project, increases in shipping volume and ship sizes have revived interest in the project. A canal across Nicaragua accommodating post-Panamax ships or a rail link carrying containers between ports on either coast have been proposed. Theodore Roosevelt believed that a US - controlled canal across Central America was a vital strategic interest of the country. This idea gained wide circulation after the destruction of the USS Maine in Cuba on February 15, 1898. Reversing a Walker Commission decision in favor of a Nicaraguan canal, Roosevelt encouraged the acquisition of the French Panama Canal effort. George S. Morison was the only commission member who argued for the Panama location. The purchase of the French - held land for $40 million was authorized by the June 28, 1902 Spooner Act. Since Panama was then part of Colombia, Roosevelt began negotiating with that country to obtain the necessary rights.
In early 1903 the Hay -- Herrán Treaty was signed by both nations, but the Senate of Colombia failed to ratify the treaty. Roosevelt implied to Panamanian rebels that if they revolted, the US Navy would assist their fight for independence. Panama declared its independence on November 3, 1903, and the USS Nashville impeded Colombian interference. The victorious Panamanians gave the United States control of the Panama Canal Zone on February 23, 1904, for $10 million in accordance with the November 18, 1903 Hay -- Bunau - Varilla Treaty. The United States took control of the French property connected to the canal on May 4, 1904, when Lieutenant Jatara Oneel of the United States Army was presented with the keys during a small ceremony. The new Panama Canal Zone was overseen by the Isthmian Canal Commission (ICC) during construction. The first step taken by the US government was to place all the canal workers under the new administration. The operation was maintained at minimum strength to comply with the canal concession and keep the machinery in working order. The US inherited a small workforce and an assortment of buildings, infrastructure and equipment, much of which had been neglected for fifteen years in the humid jungle environment. There were no facilities in place for a large workforce, and the infrastructure was crumbling. Cataloguing assets was a large job; it took many weeks to card - index available equipment. About 2,150 buildings had been acquired, many of which were uninhabitable; housing was an early problem, and the Panama Railway was in a state of decay. However, much equipment (such as locomotives, dredges and other floating equipment) was still serviceable. Although chief engineer John Findley Wallace was pressured to resume construction, red tape from Washington stifled his efforts to obtain heavy equipment and caused friction between Wallace and the ICC. He and chief sanitary officer William C.
Gorgas were frustrated by delay, and Wallace resigned in 1905. He was replaced by John Frank Stevens, who arrived on July 26, 1905. Stevens quickly realized that serious investment in infrastructure was necessary and determined to upgrade the railway, improve sanitation in Panama City and Colón, renovate the old French buildings and build hundreds of new ones for housing. He then began the difficult task of recruiting the large labor force required for construction. Stevens ' approach was to press ahead first and obtain approval later. He improved drilling and dirt - removal equipment at the Culebra Cut for greater efficiency, revising the inadequate provisions in place for soil disposal. No decision had been made about whether the canal should be a lock or a sea - level one; the ongoing excavation would be useful in either case. In late 1905, President Roosevelt sent a team of engineers to Panama to investigate the relative merits of both types in cost and time. Although the engineers voted eight to five in favor of a sea - level canal, Stevens and the ICC opposed the plan; Stevens ' report to Roosevelt was instrumental in convincing the president of the merits of a lock canal and Congress concurred. In November 1906 Roosevelt visited Panama to inspect the canal 's progress, the first trip outside the United States by a sitting president. Whether contract employees or government workers would build the canal was controversial. Bids for the canal 's construction were opened in January 1907, and Knoxville, Tennessee - based contractor William J. Oliver was the low bidder. Stevens disliked Oliver, and vehemently opposed his choice. Although Roosevelt initially favored the use of a contractor, he eventually decided that army engineers should carry out the work and appointed Major George Washington Goethals as chief engineer (under Stevens ' direction) in February 1907. 
Stevens, frustrated by government inaction and the army involvement, resigned and was replaced by Goethals. The US relied on a stratified workforce to build the canal. High - level engineering jobs, clerical positions, skilled labor and jobs in supporting industries were generally reserved for white Americans, with manual labor primarily by cheap immigrant labor. These jobs were initially filled by Europeans, primarily from Spain, Italy and Greece, many of whom were radical and militant due to political turmoil in Europe. The US then decided to recruit primarily from the British and French West Indies, and these workers provided most of the manual labor on the canal. The Canal Zone originally had minimal facilities for entertainment and relaxation for the canal workers apart from saloons; as a result, alcohol abuse was a great problem. The inhospitable conditions resulted in many American workers returning home each year. A program of improvements was implemented. Clubhouses were built, managed by the YMCA, with billiard, assembly and reading rooms, bowling alleys, darkrooms for camera clubs, gymnastic equipment, ice cream parlors, soda fountains and a circulating library. Member dues were ten dollars a year, with the remaining upkeep (about $7,000 at the larger clubhouses) paid by the ICC. The commission built baseball fields and arranged rail transportation to games; a competitive league soon developed. Semimonthly Saturday - night dances were held at the Hotel Tivoli, which had a spacious ballroom. These measures influenced life in the Canal Zone; alcohol abuse fell, with saloon business declining by 60 percent. The number of workers leaving the project each year dropped significantly. The work done thus far was preparation, rather than construction. By the time Goethals took over, the construction infrastructure had been created or overhauled and expanded from the French effort and he was soon able to begin construction in earnest. 
Goethals divided the project into three divisions: Atlantic, Central and Pacific. The Atlantic Division, under Major William L. Sibert, was responsible for construction of the breakwater at the entrance to Limon Bay, the Gatún locks and their 5.6 km (3.5 mi) approach channel, and the Gatun Dam. The Pacific Division (under Sydney B. Williamson, the only civilian division head) was responsible for the Pacific entrance to the canal, including a 4.8 km (3.0 mi) breakwater in Panama Bay, the approach channel, and the Miraflores and Pedro Miguel locks and their associated dams. The Central Division, under Major David du Bose Gaillard, was responsible for everything in between. It had arguably the project 's greatest challenge: excavating the Culebra Cut (known as the Gaillard Cut from 1915 to 2000), which involved cutting 8 miles (13 km) through the continental divide down to 12 meters (40 ft) above sea level. By August 1907, 765,000 m³ (1,000,000 cubic yards) per month was being excavated; this set a record for the rainy season; soon afterwards this doubled, before increasing again. At the peak of production, 2,300,000 m³ (3,000,000 cubic yards) were being excavated per month (the equivalent of digging a Channel Tunnel every 3 1⁄2 months). One of the greatest barriers to a canal was the continental divide, which originally rose to 110 metres (360.9 ft) above sea level at its highest point. The effort to cut through this barrier of rock was one of the greatest challenges faced by the project. Goethals arrived at the canal with Major David du Bose Gaillard of the US Army Corps of Engineers. Gaillard was placed in charge of the canal 's Central Division, which stretched from the Pedro Miguel locks to the Gatun Dam, and dedicated himself to getting the Culebra Cut (as it was then known) excavated. The scale of the work was massive.
Six thousand men worked in the cut, drilling holes in which a total of 27,000 t (60,000,000 lb) of dynamite were placed to break up the rock (which was then removed by as many as 160 trains per day). Landslides were frequent, due to the oxidation and weakening of the rock 's underlying iron strata. Although the scale of the job and the frequent, unpredictable slides generated chaos, Gaillard provided quiet, clear - sighted leadership. On May 20, 1913, Bucyrus steam shovels made a passage through the Culebra Cut at the level of the canal bottom. The French effort had reduced the summit to 59 metres (193.6 ft) over a relatively narrow width; the Americans had lowered this to 12 metres (39.4 ft) above sea level over a greater width, and had excavated over 76,000,000 m³ (99,000,000 cu yd) of material. About 23,000,000 m³ (30,000,000 cu yd) of this material was in addition to the planned excavation, having come down in the form of landslides. Dry excavation ended on September 10, 1913; a January slide had added 1,500,000 m³ (2,000,000 cu yd) of earth, but it was decided that this loose material would be removed by dredging when the cut was flooded. Two artificial lakes are key parts of the canal: Gatun and Miraflores Lakes. Four dams were constructed to create them. Two small dams at Miraflores impound Miraflores Lake, and a dam at Pedro Miguel encloses the south end of the Culebra Cut (essentially an arm of Lake Gatun). The Gatun Dam is the main dam blocking the original course of the Chagres River, creating Gatun Lake. The Miraflores dams are an 825 - metre (2,707 ft) earth dam connecting the Miraflores Locks in the west and a 150 - metre (492 ft) concrete spillway dam east of the locks. The concrete dam has eight floodgates, similar to those on the Gatun spillway. The earthen, 430 - metre (1,411 ft) Pedro Miguel dam extends from a hill in the west to the lock. Its face is protected by rock riprap at the water level. The largest and most challenging of the dams is the Gatun Dam.
This earthen dam, 640 metres (2,100 ft) thick at the base and 2,300 metres (7,546 ft) long along the top, was the largest of its kind in the world when the canal opened. Building the locks began with the first concrete laid at Gatun on August 24, 1909. The Gatun locks are built into a cutting in a hill bordering the lake, requiring the excavation of 3,800,000 m³ (4,970,212 cu yd) of material (mostly rock). The locks were made of 1,564,400 m³ (2,046,158 cu yd) of concrete, with an extensive system of electric railways and cable cars transporting concrete to the lock - construction sites. The Pacific - side locks were finished first: the single flight at Pedro Miguel in 1911, and Miraflores in May 1913. The seagoing tugboat Gatun, an Atlantic - entrance tug used to haul barges, traversed the Gatun locks on September 26, 1913. The trip was successful, although the valves were controlled manually; the central control board was not yet ready. On October 10, 1913, the dike at Gamboa which had kept the Culebra Cut isolated from Gatun Lake was demolished; the detonation was made telegraphically by President Woodrow Wilson in Washington. On January 7, 1914, the Alexandre La Valley, an old French crane boat, became the first ship to make a complete transit of the Panama Canal under its own steam after working its way across during the final stages of construction. As construction wound down, the canal team began to disperse. Thousands of workers were laid off, and entire towns were disassembled or demolished. Chief sanitary officer William C. Gorgas, who left to fight pneumonia in the South African gold mines, became surgeon general of the Army. On April 1, 1914 the Isthmian Canal Commission disbanded, and the zone was governed by a Canal Zone Governor; the first governor was George Washington Goethals.
Although a large celebration was planned for the canal 's opening, the outbreak of World War I forced the cancellation of the main festivities and it became a modest local affair. The Panama Railway steamship SS Ancon, piloted by Captain John A. Constantine (the canal 's first pilot), made the first official transit on August 15, 1914. With no international dignitaries in attendance, Goethals followed the Ancon 's progress by railroad. The canal was a technological marvel and an important strategic and economic asset to the US. It changed world shipping patterns, removing the need for ships to navigate the Drake Passage and Cape Horn. The canal saves a total of about 7,800 miles (12,600 km) on a sea trip from New York to San Francisco. The canal 's anticipated military significance was proven during World War II, when it helped restore the devastated United States Pacific Fleet. Some of the largest ships the United States had to send through the canal were aircraft carriers, particularly Essex class; they were so large that although the locks could accommodate them, the lampposts along the canal had to be removed. The Panama Canal cost the United States about $375 million, including $10 million paid to Panama and $40 million paid to the French company. Although it was the most expensive construction project in US history to that time, it cost about $23 million less than the 1907 estimate despite landslides and an increase in the canal 's width. An additional $12 million was spent on fortifications. A total of over 75,000 people worked on the project; at the peak of construction, there were 40,000 workers. According to hospital records, 5,609 workers died from disease and accidents during the American construction era. A total of 182,610,550 m³ (238,845,582 cu yd) of material was excavated in the American effort, including the approach channels at the canal ends.
Adding the work by the French, the total excavation was about 204,900,000 m³ (268,000,000 cu yd) (over 25 times the volume excavated in the Channel Tunnel project). Of the three presidents whose terms spanned the construction period, Theodore Roosevelt is most associated with the canal and Woodrow Wilson presided over its opening. However, William Howard Taft may have given the canal its greatest impetus for the longest time. Taft visited Panama five times as Roosevelt 's secretary of war and twice as president. He hired John Stevens and later recommended Goethals as Stevens ' replacement. Taft became president in 1909, when the canal was half finished, and was in office for most of the remainder of the work. However, Goethals later wrote: "The real builder of the Panama Canal was Theodore Roosevelt ''. The following words by Roosevelt are displayed in the rotunda of the canal 's administration building in Balboa: It is not the critic who counts, not the man who points out how the strong man stumbled, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena; whose face is marred by dust and sweat and blood; who strives valiantly, who errs and comes short again and again; who knows the great enthusiasms, the great devotions, and spends himself in a worthy cause; who, at the best, knows in the end the triumph of high achievement; and who, at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who know neither victory nor defeat. David du Bose Gaillard died of a brain tumor in Baltimore on December 5, 1913, at age 54. Promoted to colonel only a month earlier, Gaillard never saw the opening of the canal whose creation he directed. The Culebra Cut (as it was originally known) was renamed the Gaillard Cut on April 27, 1915, in his honor.
A plaque commemorating Gaillard's work stood over the cut for many years; in 1998 it was moved to the administration building, near a memorial to Goethals. As the situation in Europe deteriorated during the late 1930s, the US again became concerned about its ability to move warships between the oceans. The largest US battleships already had problems with the canal locks, and there were concerns about the locks being incapacitated by bombing. These concerns led Congress to pass a resolution on May 1, 1936, authorizing a study of improving the canal's defenses against attack and expanding its capacity to handle large vessels. A special engineering section was created on July 3, 1937, to carry out the study. The section reported to Congress on February 24, 1939, recommending work to protect the existing locks and the construction of a new set of locks capable of carrying larger vessels than the existing locks could accommodate. On August 11, Congress authorized the work. Three new locks were planned, at Gatún, Pedro Miguel and Miraflores, parallel to the existing locks with new approach channels. The new locks would add a traffic lane to the canal, with each chamber 1,200 ft (365.76 m) long, 140 ft (42.67 m) wide and 45 ft (13.72 m) deep. They would be ½ mi (805 m) east of the existing Gatún locks and ¼ mi (402 m) west of the Pedro Miguel and Miraflores locks. The first excavations for the new approach channels at Miraflores began on July 1, 1940, following the passage by Congress of an appropriations bill on June 24, 1940. The first dry excavation at Gatún began on February 19, 1941. Considerable material was excavated before the project was abandoned, and the approach channels can still be seen paralleling the original channels at Gatún and Miraflores. In 2006, the Autoridad del Canal de Panamá (the Panama Canal Authority, or ACP) proposed a plan creating a third lane of locks using part of the abandoned 1940s approach canals.
Following a referendum, work began in 2007 and, after a two-year delay, the expanded canal began commercial operations on June 26, 2016. The new locks allow the transit of larger, Neopanamax ships, which have a greater cargo capacity than the original locks can handle. The first ship to cross the canal through the third set of locks was a Neopanamax container ship, the Chinese-owned Cosco Shipping Panama. The cost of the expansion was estimated at $5.25 billion. After construction, the canal and the Canal Zone surrounding it were administered by the United States. On September 7, 1977, US President Jimmy Carter signed the Torrijos-Carter Treaty setting in motion the process of transferring control of the canal to Panama. The treaty became effective on October 1, 1979, providing for a 20-year period in which Panama would have increasing responsibility for canal operations before complete US withdrawal on December 31, 1999. Since then, the canal has been administered by the Panama Canal Authority (Autoridad del Canal de Panamá, or ACP). The treaty was controversial in the US, and its passage was difficult. The controversy was largely generated by contracts to manage two ports, at either end of the canal, which were awarded by Panama to the Hong Kong-based conglomerate Hutchison Whampoa. According to US Republicans, the company has close ties to the Chinese government and the Chinese military. However, the United States Department of State said that it found no evidence of connections between Hutchison Whampoa and Beijing. Some Americans were wary of placing the strategic waterway under the protection of Panamanian security forces. Although concerns existed in the US and the shipping industry about the canal after the transfer, Panama has exercised good stewardship. According to ACP figures, canal income increased from $769 million in 2000 (the first year under Panamanian control) to $1.4 billion in 2006.
Traffic through the canal increased from 230 million tons in 2000 to nearly 300 million tons in 2006. The number of accidents has decreased from an average of 28 per year in the late 1990s to 12 in 2005. Transit time through the canal averages about 30 hours, about the same as during the late 1990s. Canal expenses have increased less than revenue, from $427 million in 2000 to $497 million in 2006. On October 22, 2006, Panamanian citizens approved a referendum to expand the canal. Former US Ambassador to Panama Linda Watt, who served from 2002 to 2005, said that the canal operation in Panamanian hands has been "outstanding ''. "The international shipping community is quite pleased '', Watt added.
when does the fiscal year start in pakistan
Fiscal year - wikipedia A fiscal year (or financial year, or sometimes budget year) is the period used by governments for accounting and budget purposes, which varies between countries. It is also used for financial reporting by businesses and other organizations. Laws in many jurisdictions require company financial reports to be prepared and published on a generally annual basis, but generally do not require that the reporting period be the calendar year, 1 January to 31 December. Taxation laws generally require accounting records to be maintained and taxes calculated on an annual basis, which usually corresponds to the fiscal year used for government purposes. The calculation of tax on an annual basis is especially relevant for direct taxation, such as income tax. Many annual government fees, such as council rates and licence fees, are also levied on a fiscal year basis, while others are charged on an anniversary basis. The "fiscal year end" (FYE) is the date that marks the end of the fiscal year. Some companies, such as Cisco Systems, end their fiscal year on the same day of the week each year, e.g. the day that is closest to a particular date (for example, the Friday closest to 31 December). Under such a system, some fiscal years have 52 weeks and others 53 weeks. The calendar year is used as the fiscal year by about 65% of publicly traded companies in the United States and for a majority of large corporations in the UK and elsewhere, with notable exceptions being Australia, New Zealand and Japan. Many universities have a fiscal year which ends during the summer, both to align the fiscal year with the academic year (and, in some cases involving public universities, with the state government's fiscal year), and because the school is normally less busy during the summer months. In the northern hemisphere this is July to the following June. In the southern hemisphere it is the calendar year, January to December.
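The 52/53-week convention described above can be made concrete. The following is a minimal sketch (the function name and the Friday/31-December anchor are illustrative choices, not from any particular company's rules): it finds the occurrence of a given weekday closest to 31 December, so consecutive year ends are always exactly 52 or 53 weeks apart.

```python
from datetime import date, timedelta

def fiscal_year_end(year: int, weekday: int = 4) -> date:
    """Date of the given weekday (Mon=0 .. Sun=6; 4 = Friday)
    closest to 31 December of `year`."""
    anchor = date(year, 12, 31)
    # Map the signed weekday gap into the range -3..+3 so we pick
    # whichever occurrence of the weekday is nearest the anchor.
    delta = (weekday - anchor.weekday() + 3) % 7 - 3
    return anchor + timedelta(days=delta)

# 31 Dec 2022 is a Saturday, so the closest Friday is 30 December;
# 31 Dec 2024 is a Tuesday, so the year end slips to 3 January 2025.
print(fiscal_year_end(2022))  # 2022-12-30
print((fiscal_year_end(2024) - fiscal_year_end(2023)).days // 7)  # 53
```

Because the year end drifts around the fixed anchor date, most fiscal years under this scheme contain exactly 364 days (52 weeks), with an occasional 371-day (53-week) year absorbing the drift.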
Some media/communication-based organizations use a broadcast calendar as the basis for their fiscal year. The fiscal year is usually denoted by the year in which it ends, so United States federal government spending incurred on 14 November 2017 would belong to fiscal year 2018, operating on a fiscal calendar of October to September. The fiscal year for individuals and entities to report and pay income taxes is often known as the taxpayer's tax year or taxable year. Taxpayers in many jurisdictions may choose their tax year. In federal countries (e.g., the United States, Canada, Switzerland), state/provincial/cantonal tax years must be the same as the federal year. Nearly all jurisdictions require that the tax year be 12 months or 52/53 weeks. However, short years are permitted as the first year or when changing tax years. Most countries require all individuals to pay income tax based on the calendar year, though there are significant exceptions. Many jurisdictions require that the tax year conform to the taxpayer's fiscal year for financial reporting. The United States is a notable exception: taxpayers may choose any tax year, but must keep books and records for that year. In some jurisdictions, particularly those that permit tax consolidation, companies that are part of a group of businesses must use nearly the same fiscal year (differences of up to three months are permitted in some jurisdictions, such as the U.S. and Japan), with consolidating entries to adjust for transactions between units with different fiscal years, so the same resources will not be counted more than once or not at all. In Afghanistan, the fiscal year was recently changed from 1 Hamal to 29 Hoot (21 March to 20 March) to 1 Jadi to 30 Qaus (21 December to 20 December). The fiscal year runs with the Afghan calendar, resulting in a difference in the corresponding Gregorian dates once every four years.
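The naming rule above (a fiscal year is labeled by the calendar year in which it ends) can be sketched for the US federal October-to-September calendar; the function name here is my own, not an official term:

```python
from datetime import date

def us_federal_fiscal_year(d: date) -> int:
    """Fiscal year label for a date on the US federal calendar,
    which runs 1 October - 30 September and is named for the
    calendar year in which it ends."""
    return d.year + 1 if d.month >= 10 else d.year

# Spending incurred on 14 November 2017 belongs to fiscal year 2018.
print(us_federal_fiscal_year(date(2017, 11, 14)))  # 2018
```

The same one-line rule works for any fiscal calendar named for its ending year; only the rollover month changes (e.g. month >= 7 for a July-to-June year).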
In Australia, a fiscal year is commonly called a "financial year" (FY) and starts on 1 July and ends on the next 30 June. Financial years are designated by the calendar year of the second half of the period. For example, financial year 2017 is the 12-month period ending on 30 June 2017 and can be referred to as FY2016/17. It is used for official purposes, by individual taxpayers and by the overwhelming majority of business enterprises. Business enterprises may opt to use a financial year that ends at the end of a week (e.g., 52 or 53 weeks in length, and therefore not exactly one calendar year in length), or opt for a financial year that ends on a date matching the reporting cycle of a foreign parent. All entities within the one group must use the same financial year. For government accounting and budget purposes, pre-Federation colonies changed the financial year from the calendar year to a year ending 30 June on the following dates: Victoria changed in 1870, South Australia in 1874, Queensland in 1875, Western Australia in 1892, New South Wales in 1895 and Tasmania in 1904. The Commonwealth adopted the near-ubiquitous financial year standard at its inception in 1901. The reason given for the change was convenience: Parliament typically sits during May and June, while it was difficult for it to meet in November and December to pass a budget. In Austria the fiscal year is the calendar year, 1 January to 31 December. In Bangladesh, the fiscal year is 1 July to the next 30 June. In Belarus, the fiscal year is the calendar year, 1 January to 31 December. In Brazil, the fiscal year is the calendar year, 1 January to 31 December. In Bulgaria, the fiscal year is the calendar year, 1 January to 31 December, both for personal income tax and for corporate taxes. In Canada, the government's financial year is 1 April to 31 March.
(Q1: 1 April to 30 June, Q2: 1 July to 30 September, Q3: 1 October to 31 December, Q4: 1 January to 31 March.) For individual taxpayers, the fiscal year is the calendar year, 1 January to 31 December. In China, the fiscal year for all entities is the calendar year, 1 January to 31 December, and applies to the tax year, statutory year, and planning year. In Colombia, the fiscal year is the calendar year, 1 January to 31 December. In Costa Rica, the fiscal year is 1 October to 30 September. In the Arab Republic of Egypt, the fiscal year is 1 July to 30 June. In France, the fiscal year is the calendar year, 1 January to 31 December, and has been since at least 1911. In Greece, the fiscal year is the calendar year, 1 January to 31 December. In Hong Kong, the government's financial year runs from 1 April to 31 March. In India, the government's financial year runs from 1 April to 31 March; the year ending 31 March 2018, for example, is abbreviated as FY18. Companies following the Indian Depositary Receipt (IDR) rules are given freedom to choose their financial year. For example, Standard Chartered's IDR follows the UK calendar despite being listed in India. Companies following the Indian fiscal year assess their financial health on 31 March of every Indian financial or fiscal year. The current fiscal year was adopted by the colonial British government in 1867 to align India's financial year with that of the British Empire. Prior to 1867, India followed a fiscal year that ran from 1 May to 30 April. In 1984, the LK Jha committee recommended adopting a fiscal year that ran from 1 January to 31 December. However, this proposal was not adopted by the government, which feared possible issues during the transition period. A panel set up by the NITI Aayog in July 2016 recommended shifting to a fiscal year running from 1 January to 31 December after the end of the current five-year plan. On 4 May 2017, Madhya Pradesh announced that it would move to a January-December financial year, becoming the first Indian state to do so.
In Iran, the fiscal year usually starts on 21 March (1st of Farvardin) and concludes on the next year's 20 March (29th of Esfand) in the Solar Hijri calendar. Until 2001, the fiscal year in Ireland was the year ending 5 April, as in the United Kingdom. From 2002, to coincide with the introduction of the euro, it was changed to the calendar year, 1 January to 31 December. The 2001 tax year was nine months, from April to December. In Israel, the fiscal year is the calendar year, 1 January to 31 December. In Italy, the fiscal year is the calendar year, 1 January to 31 December. It was changed in 1965, before which it was 1 July to 30 June. In Japan, the government's financial year is from 1 April to 31 March. The fiscal year is represented by the calendar year in which the period begins, followed by the word nendo (年度); for example the fiscal year from 1 April 2017 to 31 March 2018 is called 2017-nendo. Japan's income tax year is 1 January to 31 December, but corporate tax is charged according to the corporation's own annual period. In Macau, the government's financial year is 1 January to 31 December. In Mexico, the fiscal year is the calendar year, 1 January to 31 December. In Myanmar, the fiscal year is 1 April to 31 March. In Nepal, the fiscal year is 1 Shrawan (4th month of the Bikram calendar) to 31 Ashad (3rd month of the Bikram calendar); 1 Shrawan roughly falls in mid-July. In New Zealand, the government's fiscal and financial reporting year is 1 July to the next 30 June and applies also to the budget. The company and personal financial year is 1 April to 31 March and applies to company and personal income tax. The Pakistani government's fiscal year begins on 1 July of the previous calendar year and concludes on 30 June. Private companies are free to observe their own accounting year, which may not be the same as the government's fiscal year. In Portugal, the fiscal year is the calendar year, 1 January to 31 December. In Qatar, the fiscal year is from 1 April to 31 March.
In Russia, the fiscal year is the calendar year, 1 January to 31 December. The fiscal year for the calculation of personal income taxes is 1 January to 31 December. The fiscal year for the Government of Singapore and many government-linked corporations is 1 April to 31 March. Corporations and organisations are permitted to select any date as the end of each fiscal year, as long as this date remains constant. In South Africa, the fiscal year for the Government of South Africa is 1 April to 31 March. The year of assessment for individuals covers twelve months, 1 March to the final day of February the following year. The Act also provides for certain classes of taxpayers to have a year of assessment ending on a day other than the last day of February. Companies are permitted to have a tax year ending on a date that coincides with their financial year. Many older companies still use a tax year that runs from 1 July to 30 June, inherited from the British system. A common practice for newer companies is to run their tax year from 1 March to the final day of February following, to synchronize with the tax year for individuals. In South Korea, the fiscal year is the calendar year, 1 January to 31 December. In Spain, the fiscal year is the calendar year, 1 January to 31 December. In Sweden, the fiscal year for individuals is the calendar year, 1 January to 31 December. The fiscal year for an organisation is typically one of a few standard twelve-month periods; however, all calendar months are allowed as starting points. If an organisation wishes to change to a non-calendar year, permission from the Tax Authority is required. In Taiwan, the fiscal year is the calendar year, 1 January to 31 December. However, an enterprise may elect to adopt a special fiscal year at the time it is established and can request approval from the tax authorities to change its fiscal year. In Thailand, the government's fiscal year (FY) is 1 October to 30 September of the following year.
For individual taxpayers it is the calendar year, 1 January to 31 December. In Ukraine, the fiscal year is the calendar year, 1 January to 31 December. In the United Arab Emirates, the fiscal year is the calendar year, 1 January to 31 December. In the United Kingdom, the financial year runs from 1 April to 31 March for the purposes of government financial statements. For personal tax purposes the fiscal year starts on 6 April and ends on 5 April of the next calendar year. Although United Kingdom corporation tax is charged by reference to the government 's financial year, companies can adopt any year as their accounting year: if there is a change in tax rate, the taxable profit is apportioned to financial years on a time basis. A number of major corporations that were once government - owned, such as BT Group and the National Grid, continue to use the government 's financial year, which ends on the last day of March, as they have found no reason to change since privatisation. The 5 April year end for personal tax and benefits reflects the old ecclesiastical calendar, with New Year falling on 25 March (Lady Day), the difference being accounted for by the eleven days "missed out '' when Great Britain converted from the Julian Calendar to the Gregorian Calendar in 1752 (the British tax authorities, and landlords were unwilling to lose 11 days of tax and rent revenue, so under provision 6 (Times of Payment of Rents, Annuities, &c.) of the Calendar (New Style) Act 1750, the 1752 -- 3 tax year was extended by 11 days). From 1753 until 1799, the tax year in Great Britain began on 5 April, which was the "old style '' new year of 25 March. A 12th skipped Gregorian leap day in 1800 changed its start to 6 April. It was not changed when a 13th Julian leap day was skipped in 1900, so the start of the personal tax year in the United Kingdom is still 6 April. 
The United States federal government's fiscal year is the 12-month period ending on 30 September of that year, having begun on 1 October of the previous calendar year. In particular, the identification of a fiscal year is the calendar year in which it ends; thus, the current fiscal year is 2017, often written as "FY2017" or "FY17", which began on 1 October 2016 and which will end on 30 September 2017. Prior to 1976, the fiscal year began on 1 July and ended on 30 June. The Congressional Budget and Impoundment Control Act of 1974 made the change to allow Congress more time to arrive at a budget each year, and provided for what is known as the "transitional quarter" from 1 July 1976 to 30 September 1976. An earlier shift in the federal government's fiscal year was made in 1843, shifting the fiscal year from a calendar year to one starting on 1 July. For example, the United States government fiscal year for 2017 (FY2017) ran from 1 October 2016 to 30 September 2017. State governments set their own fiscal years, which may or may not align with the federal calendar. For example, in the state of California, the fiscal year runs from 1 July to 30 June each year. The tax year for a business is governed by the fiscal year it chooses. A business may choose any consistent fiscal year that it wants; however, for seasonal businesses such as farming and retail, a good accounting practice is to end the fiscal year shortly after the highest revenue time of year. Consequently, most large agriculture companies end their fiscal years after the harvest season, and most retailers end their fiscal years shortly after the Christmas shopping season.
when was the last time the nuggets made the playoffs
List of Denver Nuggets seasons - wikipedia This is a list of seasons completed by the Denver Nuggets of the National Basketball Association (NBA). They have played for 49 seasons, 38 in the NBA and nine in the American Basketball Association (ABA). As of the close of the 2017 season, they had never reached an NBA Finals and had been to only three Western Conference Finals series.
in roth the court rule that community standards are based on
Roth v. United States - wikipedia Roth v. United States, 354 U.S. 476 (1957), along with its companion case Alberts v. California, was a landmark case before the United States Supreme Court which redefined the constitutional test for determining what constitutes obscene material unprotected by the First Amendment. Under the common law rule that prevailed before Roth, articulated most famously in the 1868 English case Regina v. Hicklin, any material that tended to "deprave and corrupt those whose minds are open to such immoral influences" was deemed "obscene" and could be banned on that basis. Thus, works by Balzac, Flaubert, James Joyce and D.H. Lawrence were banned based on isolated passages and the effect they might have on children. Samuel Roth, who ran a literary business in New York City, was convicted under a federal statute criminalizing the sending of "obscene, lewd, lascivious or filthy" materials through the mail for advertising and selling a publication called American Aphrodite ("A Quarterly for the Fancy-Free") containing literary erotica and nude photography. David Alberts, who ran a mail-order business from Los Angeles, was convicted under a California statute for publishing pictures of "nude and scantily-clad women." The Court granted certiorari and affirmed both convictions. Roth came down as a 6-3 decision, with the opinion of the Court authored by William J. Brennan, Jr. The Court repudiated the Hicklin test and defined obscenity more strictly, as material whose "dominant theme taken as a whole appeals to the prurient interest" of the "average person, applying contemporary community standards." Only material meeting this test could be banned as "obscene." However, Brennan reaffirmed that obscenity was not protected by the First Amendment and thus upheld the convictions of Roth and Alberts for publishing and sending obscene material through the mail.
Congress could ban material, "utterly without redeeming social importance, '' or in other words, "whether to the average person, applying contemporary community standards, the dominant theme of the material taken as a whole appeals to the prurient interest. '' Chief Justice Earl Warren worried that "broad language used here may eventually be applied to the arts and sciences and freedom of communication generally, '' but, agreeing that obscenity is not constitutionally protected, concurred only in the judgment. Justices Hugo Black and William O. Douglas, First Amendment "literalists, '' dissented in Roth, arguing vigorously that the First Amendment protected obscene material. Justice John Marshall Harlan II dissented in Roth, involving a federal statute, but concurred in Alberts, involving a state law, on the grounds that while states had broad power to prosecute obscenity, the federal government did not. In Memoirs v. Massachusetts (1966), a plurality of the Court further redefined the Roth test by holding unprotected only that which is "patently offensive '' and "utterly without redeeming social value, '' but no opinion in that case could command a majority of the Court either, and the state of the law in the obscenity field remained confused. Pornography and sexually oriented publications proliferated as a result of the Warren Court 's holdings, the "Sexual Revolution '' of the 1960s flowered, and pressure increasingly came on the Court to allow leeway for state and local governments to crack down on obscenity. During his ill - fated bid to become Chief Justice, Justice Abe Fortas was attacked vigorously in Congress by conservatives such as Strom Thurmond for siding with the Warren Court majority in liberalizing protection for pornography. In his 1968 presidential campaign, Richard Nixon campaigned against the Warren Court, pledging to appoint "strict constructionists '' to the Supreme Court. In Miller v. 
California (1973), a five - person majority agreed for the first time since Roth as to a test for determining constitutionally unprotected obscenity, thereby superseding the Roth test. By the time Miller was considered in 1973, Justice Brennan had abandoned the Roth test and argued that "no formulation of this Court, the Congress, or the States can adequately distinguish obscene material unprotected by the First Amendment from protected expression. ''
who plays the bad guy in terminator 2
Robert Patrick - wikipedia Robert Hammond Patrick Jr. (born November 5, 1958) is an American actor, perhaps best known for his portrayals of villainous characters. He is a Saturn Award winner with four nominations. Patrick dropped out of college when drama class sparked his interest in acting, and entered film in 1986. After playing a supporting role in Die Hard 2 (1990), Patrick starred as the T - 1000, the main antagonist of Terminator 2: Judgment Day (1991) -- a role he reprised for cameo appearances in Wayne 's World (1992) and Last Action Hero (1993). Other notable film credits include Fire in the Sky (1993), Striptease (1996), Cop Land (1997), The Faculty (1998), Spy Kids (2001), Charlie 's Angels: Full Throttle (2003), Ladder 49 (2004), Walk the Line (2005), Flags of Our Fathers (2006), We Are Marshall (2006), Bridge to Terabithia (2007), The Men Who Stare at Goats (2009), and Safe House (2012). In television, Patrick is known for his portrayals of FBI Special Agent John Doggett in The X-Files and Colonel Tom Ryan in The Unit, and has played ongoing roles in series such as The Outer Limits, The Sopranos, Elvis, Avatar: The Last Airbender, Burn Notice, Last Resort, Sons of Anarchy and From Dusk Till Dawn: The Series. In 2015 he was cast in the role of DHS agent Cabe Gallo in the CBS drama series Scorpion. AllMovie journalist Tracie Cooper wrote that, by the conclusion of The X-Files in 2002, Patrick had "developed a solid reputation within the industry '', with critics, fans and co-stars alike praising his "work ethic, personality, and consistent performances. '' He was described by actor / director Jason Bateman as "one of the great heavies. '' Patrick was born in Marietta, Georgia, the oldest of five children born to Nadine (1932 --) and Robert Patrick Sr. (1928 --), a banker. Patrick is of English and Scots - Irish ancestry. His siblings are Richard Patrick (who is the lead singer for the rock band Filter), Cheri, Karen and Lewis. 
He spent his early life in Bay Village, a small suburb of Cleveland, Ohio, though he moved around the country. Patrick did not start to pursue an acting career until his mid-twenties. During his childhood years, Patrick did not like to act; in third grade he refused to wear a pair of green tights required for Peter Pan. He graduated from Farmington High School in Farmington, Michigan, in 1977. Patrick was a track and field and football athlete at Bowling Green State University, although he dropped out before graduating when he developed an interest in drama and acting. After leaving college, Patrick secured a position as a house painter and continued as such until a boating accident in 1984 in Lake Erie. He swam for three hours in order to save the others still stranded at the accident site, nearly drowning in the attempt. After the accident, he moved from Ohio to Los Angeles, California, at age 26. His main income during the first years was as a bartender, and he often lived in his car. Patrick was then picked up for various small roles and cameos in low-budget films. Looking back, Patrick credited his early appearances in films to his "tough-looking exterior". Patrick made a short appearance in Die Hard 2 as a bit-part henchman for Colonel Stuart before landing a role in Terminator 2: Judgment Day (1991) as the main villain, the T-1000; it was his first starring role. James Cameron, the film's director, said he chose Patrick for the role because of his physical appearance, which he felt fit the role. During the filming of Terminator 2: Judgment Day, Patrick was "broke", living in a cheap apartment with his girlfriend, Barbara, whom he married during filming. He has credited the film with starting his career. After Terminator 2, Patrick landed roles in various feature films such as Last Action Hero, Fire in the Sky (both 1993) and Striptease (1996).
Because of his fondness for martial arts, Patrick starred in two martial arts films titled Double Dragon and Hong Kong 97, both released in 1994. His performance in Fire in the Sky caught the attention of The X-Files creator, Chris Carter. After David Duchovny distanced himself from the show during the seventh season, Carter immediately contacted Patrick to audition for the role of John Doggett. Patrick's brother, Richard, had previously worked for the series by adding music to the soundtrack album The X-Files: The Album. Patrick was cast as Doggett in 2000. The X-Files was canceled two seasons later, after Duchovny's departure following season 7 resulted in low ratings for the show. Patrick made several appearances in genre magazines, with TV Guide going so far as to label him one of the Ten Sexiest Men of Sci-Fi. In 2000, Patrick appeared in three episodes of The Sopranos ("The Happy Wanderer", "Bust Out" and "Funhouse") as David Scatino, a store owner struggling with gambling debts owed to Richie Aprile and Tony Soprano. Four years later, he made a guest appearance in the pilot episodes of the Sci-Fi Channel's original series Stargate Atlantis, "Rising", as the military component commander of the Atlantis expedition, Marshall Sumner. He accepted the role, since he had worked with the same crew on The Outer Limits, a series in which he appeared during the early 1990s. Patrick played Johnny Cash's father, Ray Cash, in the film Walk the Line and Elvis's father, Vernon Presley, in the miniseries Elvis. He had a regular role on The Unit, and played Elvis Presley in Lonely Street (2008). In October 2006, he starred in the WWE Films production The Marine as Rome. He also appeared in We Are Marshall as Marshall University head coach Rick Tolley, who lost his life when Southern Airways Flight 932 crashed in 1970.
His credits also include a guest-starring role in the Lost episode "Outlaws", as well as a recurring role as the voice of Master Piandao in Season 3 of the Nickelodeon animated series Avatar: The Last Airbender. Patrick played a supporting role in Firewall, a 2006 action movie starring Harrison Ford. He has also appeared in Meat Loaf's music video "Objects in the Rear View Mirror May Appear Closer than They Are" with Will Estes. Director McG, who directed Terminator Salvation, said that he wanted to reintroduce characters from the previous Terminator films: "I like the idea and the perspective for the next picture that you meet Robert Patrick the way he looks today, and he's a scientist that's working on, you know, improving cell replication so we can stay healthier and we can cure diabetes and do all these things that sound like good ideas, and to once again live as idealized expressions as ourselves." Patrick also starred in the psychological thriller The Black Water of Echo's Pond, which was directed by Italian filmmaker Gabriel Bologna. In recent years, he has appeared in such television series as Burn Notice, NCIS and True Blood, among others. From 2012 to 2013, he also starred in Last Resort as Chief of the Boat Joseph Prosser. He played a supporting character in Identity Thief (2013). Starting in 2014 he also starred in Robert Rodriguez's From Dusk Till Dawn: The Series as Jacob Fuller. He currently plays Agent Cabe Gallo on the CBS drama series Scorpion. On March 28, 2017, Patrick was cast in the upcoming Amazon Video horror anthology series Lore, which is based on the award-winning and critically acclaimed podcast of the same name. Lore recounts true stories of frightening and paranormal occurrences. The series is executive produced by Aaron Mahnke and will premiere on October 13, 2017. Patrick married actress Barbara Hooper during the filming of Terminator 2: Judgment Day (1991).
Patrick and Barbara have appeared together in various media releases such as Zero Tolerance and The X-Files. He has two children, a son, Samuel, and a daughter, Austin (named after the police officer the T - 1000 impersonates in Terminator 2). Almost every year, he does the Love Ride, a charity motorcycle ride held annually in Southern California. His brother is Richard Patrick, former guitarist of Nine Inch Nails and lead singer of the rock bands Filter and Army of Anyone. On October 22, 2010, the brothers sang guest vocals on the Filter song "So I Quit '' on stage in Dallas, Texas. Patrick is a devout Christian, of the Episcopalian denomination. Robert Patrick is a member of the Boozefighters motorcycle club.
abrar ul haq mili naghma mp3 free download
List of songs about Pakistan - wikipedia This is a list of songs about Pakistan (known as Milli naghmay in Urdu; ملی نغموں) listed in alphabetical order. The list includes songs by current and former solo - singers and musical bands. It also includes some film songs originally recorded for Pakistani films.
a brief history of birth control in the us
Birth control in the United States - wikipedia Birth control in the United States is a complicated issue with a long history. The practice of birth control was common throughout the U.S. prior to 1914, when the movement to legalize contraception began. Longstanding techniques included the rhythm method, withdrawal, diaphragms, contraceptive sponges, condoms, prolonged breastfeeding, and spermicides. Use of contraceptives increased throughout the nineteenth century, contributing to a 50 percent drop in the fertility rate in the United States between 1800 and 1900, particularly in urban regions. The only known survey conducted during the nineteenth century of American women 's contraceptive habits was performed by Clelia Mosher from 1892 to 1912. The survey was based on a small sample of upper - class women, and shows that most of the women used contraception (primarily douching, but also withdrawal, rhythm, condoms and pessaries) and that they viewed sex as a pleasurable act that could be undertaken without the goal of procreation. Although contraceptives were relatively common in middle - class and upper - class society, the topic was rarely discussed in public. The first book published in the United States which ventured to discuss contraception was Moral Physiology; or, A Brief and Plain Treatise on the Population Question, published by Robert Dale Owen in 1831. The book suggested that family planning was a laudable effort, and that sexual gratification -- without the goal of reproduction -- was not immoral. Owen recommended withdrawal, but he also discussed sponges and condoms. That book was followed by Fruits of Philosophy: The Private Companion of Young Married People, written in 1832 by Charles Knowlton, which recommended douching. Knowlton was prosecuted in Massachusetts on obscenity charges, and served three months in prison. Birth control practices were generally adopted earlier in Europe than in the United States. 
Knowlton 's book was reprinted in 1877 in England by Charles Bradlaugh and Annie Besant, with the goal of challenging Britain 's obscenity laws. They were arrested (and later acquitted) but the publicity of their trial contributed to the formation, in 1877, of the Malthusian League -- the world 's first birth control advocacy group -- which sought to limit population growth to avoid Thomas Malthus ' dire predictions of exponential population growth leading to worldwide poverty and famine. By 1930, similar societies had been established in nearly all European countries, and birth control began to find acceptance in most Western European countries, except Catholic Ireland, Spain, and France. As the birth control societies spread across Europe, so did birth control clinics. The first birth control clinic in the world was established in the Netherlands in 1882, run by the Netherlands ' first female physician, Aletta Jacobs. The first birth control clinic in England was established in 1921 by Marie Stopes, in London. Contraception was not restricted by law in the United States throughout most of the 19th century, but in the 1870s a social purity movement grew in strength, aimed at outlawing vice in general, and prostitution and obscenity in particular. Composed primarily of Protestant moral reformers and middle - class women, the Victorian - era campaign also attacked contraception, which was viewed as an immoral practice that promoted prostitution and venereal disease. Anthony Comstock, a grocery clerk and leader in the purity movement, successfully lobbied for the passage of the 1873 Comstock Act, a federal law prohibiting mailing of "any article or thing designed or intended for the prevention of conception or procuring of abortion '' as well as any form of contraceptive information. 
After passage of this first Comstock Act, he was appointed to the position of postal inspector. Many states also passed similar state laws (collectively known as the Comstock laws), sometimes extending the federal law by additionally restricting contraceptives, including information about them and their distribution. Comstock was proud of the fact that he was personally responsible for thousands of arrests and the destruction of hundreds of tons of books and pamphlets. Comstock and his allies also took aim at the libertarians and utopians who comprised the free love movement -- an initiative to promote sexual freedom, equality for women, and abolition of marriage. The free love proponents were the only group to actively oppose the Comstock laws in the 19th century, setting the stage for the birth control movement. The efforts of the free love movement were not successful and, at the beginning of the 20th century, federal and state governments began to enforce the Comstock laws more rigorously. In response, contraception went underground, but it was not extinguished. The number of publications on the topic dwindled, and advertisements, if they were found at all, used euphemisms such as "marital aids '' or "hygienic devices ''. Drug stores continued to sell condoms as "rubber goods '' and cervical caps as "womb supporters ''. After World War II, the birth control movement had accomplished the goal of making birth control legal, and advocacy for reproductive rights began to focus on abortion, public funding, and insurance coverage. Birth control advocacy organizations around the world also began to collaborate. In 1946, Sanger helped found the International Committee on Planned Parenthood, which evolved into the International Planned Parenthood Federation and soon became the world 's largest non-governmental international family planning organization. In 1952, John D. Rockefeller III founded the influential Population Council. 
Fear of global overpopulation became a major issue in the 1960s, generating concerns about pollution, food shortages, and quality of life, leading to well - funded birth control campaigns around the world. The 1994 International Conference on Population and Development and the 1995 Fourth World Conference on Women addressed birth control and influenced human rights declarations which asserted women 's rights to control their own bodies. In the early 1950s, philanthropist Katharine McCormick had provided funding for biologist Gregory Pincus to develop the birth control pill, which was approved by the Food and Drug Administration (FDA) in 1960. The pill became very popular and had a major impact on society and culture. It contributed to a sharp increase in college attendance and graduation rates for women. New forms of intrauterine devices were introduced in the 1960s, increasing popularity of long acting reversible contraceptives. In 1965, the Supreme Court ruled in Griswold v. Connecticut that it was unconstitutional for the government to prohibit married couples from using birth control. Also in 1965, 26 states prohibited birth control for unmarried women. In 1967 Boston University students petitioned Bill Baird to challenge Massachusetts 's stringent "Crimes Against Chastity, Decency, Morality and Good Order '' law. On April 6, 1967 he gave a speech to 1,500 students and others at Boston University on abortion and birth control. He gave a female student one condom and a package of contraceptive foam. Baird was arrested and convicted as a felon, facing up to ten years in jail. He spent three months in Boston 's Charles Street Jail. During his challenge to the Massachusetts law, the Planned Parenthood League of Massachusetts stated that "there is nothing to be gained by court action of this kind. The only way to remove the limitations remaining in the law is through the legislative process. '' Despite this opposition, Baird fought for five years until Eisenstadt v. 
Baird legalized birth control for all Americans on March 22, 1972. Eisenstadt v. Baird, a landmark right to privacy decision, became the foundation for such cases as Roe v. Wade and the 2003 gay rights victory Lawrence v. Texas. In 1970, Congress removed references to contraception from federal anti-obscenity laws; and in 1973, the Roe v. Wade decision legalized abortion during the first trimester of pregnancy. Also in 1970, Title X of the Public Health Service Act was enacted as part of the war on poverty, to make family planning and preventive health services available to low - income and the uninsured. Without publicly funded family planning services, according to the Guttmacher Institute, the number of unintended pregnancies and abortions in the United States would be nearly two - thirds higher; the number of unintended pregnancies among poor women would nearly double. According to the United States Department of Health and Human Services, publicly funded family planning saves nearly $4 in Medicaid expenses for every $1 spent on services. In 1982, European drug manufacturers developed mifepristone, which was initially utilized as a contraceptive, but is now generally prescribed with a prostaglandin to induce abortion in pregnancies up to the fourth month of gestation. To avoid consumer boycotts organized by anti-abortion organizations, the manufacturer donated the U.S. manufacturing rights to Danco Laboratories, a company formed by pro-choice advocates with the sole purpose of distributing mifepristone in the U.S., and thus immune to the effects of boycotts. In 1997, the FDA approved a prescription emergency contraception pill (known as the morning - after pill), which became available over the counter in 2006. In 2010, ulipristal acetate, an emergency contraceptive which is more effective after a longer delay, was approved for use up to five days after unprotected sexual intercourse. 
Fifty to sixty percent of abortion patients became pregnant in circumstances in which emergency contraceptives could have been used. These emergency contraceptives, including Plan B and EllaOne, became another reproductive rights controversy. Opponents of emergency contraception consider it a form of abortion, because it may interfere with the ability of a fertilized embryo to implant in the uterus; while proponents contend that it is not abortion, because the absence of implantation means that pregnancy never commenced. In 2000, the Equal Employment Opportunity Commission ruled that companies that provided insurance for prescription drugs to their employees but excluded birth control were violating the Civil Rights Act of 1964. President Obama signed the Patient Protection and Affordable Care Act (ACA) on 23 March 2010. As of 1 August 2011, female contraception was added to a list of preventive services covered by the ACA that would be provided without patient co-payment. The federal mandate applied to all new health insurance plans in all states from 1 August 2012. Grandfathered plans did not have to comply unless they changed substantially. To be grandfathered, a group plan must have existed or an individual plan must have been sold before President Obama signed the law; otherwise they were required to comply with the new law. The Guttmacher Institute noted that even before the federal mandate was implemented, twenty - eight states had their own mandates that required health insurance to cover the prescription contraceptives, but the federal mandate innovated by forbidding insurance companies from charging part of the cost to the patient. Burwell v. Hobby Lobby, 573 U.S. ___ (2014), is a landmark decision by the United States Supreme Court allowing closely held for - profit corporations to be exempt from a law its owners religiously object to if there is a less restrictive means of furthering the law 's interest. 
It is the first time that the court has recognized a for - profit corporation 's claim of religious belief, but it is limited to closely held corporations. The decision is an interpretation of the Religious Freedom Restoration Act (RFRA) and does not address whether such corporations are protected by the free - exercise of religion clause of the First Amendment of the Constitution. For such companies, the Court 's majority directly struck down the contraceptive mandate under the Affordable Care Act (ACA) by a 5 -- 4 vote. The court said that the mandate was not the least restrictive way to ensure access to contraceptive care, noting that a less restrictive alternative was already being provided for religious non-profits. Three days later, however, the Court issued an injunction effectively ending that alternative, replacing it with a government - sponsored alternative for any female employees of closely held corporations that do not wish to provide birth control. Zubik v. Burwell was a case before the United States Supreme Court on whether religious institutions other than churches should be exempt from the contraceptive mandate. Churches were already exempt. On May 16, 2016, the U.S. Supreme Court issued a per curiam ruling in Zubik v. Burwell that vacated the decisions of the Circuit Courts of Appeals and remanded the case "to the respective United States Courts of Appeals for the Third, Fifth, Tenth, and D.C. Circuits '' for reconsideration in light of the "positions asserted by the parties in their supplemental briefs ''. Because the Petitioners agreed that "their religious exercise is not infringed where they ' need to do nothing more than contract for a plan that does not include coverage for some or all forms of contraception ' '', the Court held that the parties should be given an opportunity to clarify and refine how this approach would work in practice and to "resolve any outstanding issues ''. The Supreme Court expressed "no view on the merits of the cases. 
'' In a concurring opinion, Justice Sotomayor, joined by Justice Ginsburg, noted that in earlier cases "some lower courts have ignored those instructions '' and cautioned lower courts not to read any signals in the Supreme Court 's actions in this case. In 2017, the Trump administration issued a ruling letting insurers and employers refuse to provide birth control if doing so went against their religious beliefs or moral convictions. However, later that same year federal judge Wendy Beetlestone issued an injunction temporarily stopping the enforcement of the Trump administration ruling. There are many types of contraceptive methods available. Hormonal methods which contain the hormones estrogen and progestin include oral contraceptive pills (there is also a progestin only pill), the transdermal patch (OrthoEvra), and the intravaginal ring (NuvaRing). Progestin only methods include an injectable form (Depo - Provera), a subdermal implant (Nexplanon), and the intrauterine device (Mirena). Non-hormonal contraceptive methods include the copper intrauterine device (ParaGard), male and female condoms, male and female sterilization, cervical diaphragms and sponges, spermicides, withdrawal, and fertility awareness. In 2006 - 2008, the most popular contraceptive methods among those at risk of unintended pregnancy were oral contraceptive pills (25 %), female sterilization (24.2 %), male condoms (14.5 %), male sterilization (8.8 %), the intrauterine device (4.9 %), and withdrawal (4.6 %). Depo - Provera is used by 2.9 %, primarily younger women (7.5 % of those 15 - 19 and about 4.5 % of those 20 - 30). A 2013 Lancet systematic literature review found that among reproductive aged women in a marriage or union, 66 % worldwide and 77 % in the United States use contraception. Despite this, unintended pregnancy remains high; just under half of pregnancies in the United States are unintended. 
10.6 % of women at risk of unintended pregnancy did not use a contraceptive method, including 18.7 % of teens and 14.3 % of those 20 - 24. Women of reproductive age (15 to 44) who are not regarded as at risk for unintended pregnancy include those who are sterile, were sterilized for non-contraceptive reasons, were pregnant or trying to become pregnant, or had not had sex in the 3 months prior to the survey. When examining reasons why women do not use birth control, a 2007 Pregnancy Risk Assessment Monitoring System (PRAMS) survey of over 8000 women with a recent unintended pregnancy found that 33 % felt they could not get pregnant at the time of conception, 30 % did not mind if they got pregnant, 22 % stated their partner did not want to use contraception, 16 % cited side effects, 10 % felt they or their partner were sterile, 10 % reported access problems, and 18 % selected "other ''. Contraceptive use saves almost US $19 billion in direct medical costs each year. One analysis estimated that access to the most effective birth control methods could save about $12 billion in public health care costs each year. Contraception has many benefits beyond preventing pregnancy. Combination estrogen - progestin contraceptives can successfully treat dysmenorrhea (painful periods), provide symptom relief from endometriosis, reduce heavy menstrual bleeding and improve anemia related to menstrual blood loss, reduce symptoms of premenstrual syndrome and premenstrual dysphoric disorder, reduce ovarian and colon cancer risk, reduce moderate acne, prevent menstrual migraines, and reduce hirsutism (abnormal hair growth). The progestin containing intrauterine device can reduce heavy menstrual bleeding and protect against pre-cancerous changes or cancer in the uterus. Condoms (male or female) are the only contraceptive method which protects against acquisition of sexually transmitted infections. 
According to The New York Times on October 6, 2017, "The Trump administration on Friday moved to expand the rights of employers to deny women insurance coverage for contraception and issued sweeping guidance on religious freedom that critics said could also erode civil rights protections for lesbian, gay, bisexual and transgender people ''. According to the Department of Health and Human Services, it issued "two rules rolling back a federal requirement that employers must include birth control coverage in their health insurance plans. The rules offer an exemption to any employer that objects to covering contraception services on the basis of sincerely held religious beliefs or moral convictions ''. In two major legal cases in 2014, attorneys raised the issue of whether a for - profit corporation can be required to provide coverage for contraceptive services to its employees. As of 1 January 2016, women in Oregon became eligible to purchase a one - year supply of oral contraceptives; this is the first such legislation in the United States and has attracted the attention of California, Washington state and New York. In 2017, the Department of Health and Human Services changed the previous federal requirement for employers to cover birth control in the health insurance plans for their employees. Under this new rule, hundreds of thousands of women would lose their ability to have their birth control costs covered for them. The new rule would let insurers and employers refuse to provide birth control if doing so went against their "religious beliefs '' or "moral convictions ''. However, later in 2017 federal judge Wendy Beetlestone issued an injunction temporarily stopping the enforcement of this new rule. In 2014, the Supreme Court decided that for - profit corporations may offer insurance plans that do not cover contraception, by the rationale that the owners may hold that certain contraceptives violate their religious beliefs. 
This was a setback for the federal government 's attempt to create a uniform set of health care insurance benefits.
who is rag n bone man backing singer
Rag'n'Bone Man - Wikipedia Rory Charles Graham (born 29 January 1985), better known as Rag'n'Bone Man, is an English singer - songwriter. His first hit single "Human '' was released in 2016 and his debut album, also named Human, was released in February 2017. At the 2017 Brit Awards he was named British Breakthrough Act and also received the Critics ' Choice Award. Graham was born in Uckfield, East Sussex on 29 January 1985. He attended Ringmer Community College, was expelled, and then enrolled at Uckfield Community Technology College. At the age of 15, he MCed with a drum and bass crew using the handle Rag ' N ' Bonez, inspired by watching repeats of the 1970s British sitcom Steptoe and Son. When he moved to Brighton, his friend Gi3mo formed the Rum Committee and invited him to join the group as well. He started performing at Slip - jam B, where he met a lot of people who helped him to start his musical career. Over the next few years, they supported hip hop artists Pharoahe Monch and KRS - One at Brighton 's Concorde 2, and released their own album through Bandcamp. Around 2011, Graham started working with UK hip hop label High Focus, releasing a number of recordings with them, such as a collaboration EP with MC / producer Leaf Dog titled Dog 'n Bone (2013) and a project with MC / producer Dirty Dike titled Put That Soul On Me (2014). Shortly afterwards Graham began to collaborate with record producer Mark Crew, who at the time was working on Bastille 's debut album Bad Blood. In 2013 Graham signed a publishing deal with Warner Chappell which enabled him to pursue his music career full - time. In 2014, working in collaboration with Mark Crew, Graham released an EP, Wolves, through Best Laid Plans Records, containing nine tracks with guests including rapper Vince Staples, Stig Of The Dump, and Kate Tempest. Graham, along with Skunk Anansie, also featured on Bastille 's third mixtape, VS. (Other People 's Heartache, Pt. 
III), in the song "Remains ''. The follow - up, in 2015, was the Disfigured EP, also released through Best Laid Plans Records. Lead track "Bitter End '' was supported, then play - listed on BBC Radio 1 Xtra, and made it onto BBC Radio 1 's ' In New Music We Trust ' playlist. His first hit single "Human '' was released on Columbia Records in July 2016. It peaked at number one in the Official Singles Charts in Austria, Germany, Belgium and Switzerland. It was certified Gold in Germany, Italy, Sweden, Switzerland, Austria, Belgium and the Netherlands. The song "Human '' was used as the theme music to the Amazon Prime series Oasis. The song was also used in the official launch trailer for EA and BioWare 's video game Mass Effect: Andromeda, as well as in the trailer for the 2017 film Thank You for Your Service and also in the Inhumans television series. In Brazil this song was also used in the soundtrack of Pega Pega from Rede Globo. The song was also used for the sixth season trailers of Highway Thru Hell. His debut album, also titled Human, was released on 10 February 2017. The album opens with the song "Human '', features the single "Skin '', and has tracks produced variously by Two Inch Punch, Jonny Coffer and Mark Crew. In 2017, Rag'n'Bone Man collaborated with British virtual band Gorillaz, appearing on the song "The Apprentice '' from their fifth studio album Humanz deluxe edition. In 2017, Rag'n'Bone Man won British Breakthrough Act and the Critics ' Choice Award at the 2017 Brit Awards. He also won International Newcomer and International Male Artist at the 2017 Echo Awards in Germany and was nominated for Best New Artist and Best Push Artist at the MTV European Music Awards 2017. In May 2017, Rag'n'Bone Man endorsed Labour Party leader Jeremy Corbyn in the 2017 UK general election. During an interview with Channel 4, he said he saw Corbyn as a "man that speaks with passion ''. 
He added that he could "relate to what he says and have never felt like that before ''. Graham is in a relationship with Beth Rouy. The couple welcomed their first child in early September 2017.
where did the inspiration for harry potter come from
Harry Potter influences and analogues - wikipedia Writer J.K. Rowling cites several writers as influences in her creation of her bestselling Harry Potter series. Writers, journalists and critics have noted that the books also have a number of analogues; a wide range of literature, both classical and modern, which Rowling has not openly cited as influences. This article is divided into three sections. The first section lists those authors and books which Rowling has suggested as possible influences on Harry Potter. The second section deals with those books which Rowling has cited as favourites without mentioning possible influences. The third section deals with those analogues which Rowling has not cited either as influences or as favourites but which others have claimed bear comparison with Harry Potter. Rowling has never openly credited any single author with inspiration, saying, "I have n't got the faintest idea where my ideas come from, or how my imagination works. I 'm just grateful that it does, because it gives me more entertainment than it gives anyone else. '' However, she has mentioned a number of favourite authors as probable influences in her creation of Harry Potter. The works are listed roughly in order of publication. Rowling has said, "I 've taken horrible liberties with folklore and mythology, but I 'm quite unashamed about that, because British folklore and British mythology is a totally bastard mythology. You know, we 've been invaded by people, we 've appropriated their gods, we 've taken their mythical creatures, and we 've soldered them all together to make, what I would say, is one of the richest folklores in the world, because it 's so varied. So I feel no compunction about borrowing from that freely, but adding a few things of my own. '' When an interviewer said that saving Cedric 's body resembled the actions of Hector, Achilles, and Patroclus in the Iliad, Rowling said, "That 's where it came from. 
That really, really, really moved me when I read that when I was 19. The idea of the desecration of a body, a very ancient idea... I was thinking of that when Harry saved Cedric 's body. '' A number of commentators have drawn attention to the Biblical themes and references in J.K. Rowling 's final Harry Potter novel, Harry Potter and the Deathly Hallows. In an August 2007 issue of Newsweek, Lisa Miller commented that Harry dies and then comes back to life to save humankind, like Christ. She points out the title of the chapter in which this occurs -- "King 's Cross '' -- a possible allusion to Christ 's cross. Also, she outlines the scene in which Harry is temporarily dead, pointing out that it places Harry in a very heaven - like setting where he talks to a father figure "whose supernatural powers are accompanied by a profound message of love. '' Jeffrey Weiss adds, in the Dallas Morning News, that the biblical quotation "And the last enemy that shall be destroyed is death '' (1 Corinthians 15: 26), featured on the tombstones of Harry 's parents, refers to Christ 's resurrection. The quotation on Dumbledore 's family tomb, "Where your treasure is, your heart will be also '', is from Matthew 6: 21, and refers to knowing which things in life are of true value. "They 're very British books '', Rowling revealed to an Open Book conference in October 2007, "So on a very practical note Harry was going to find biblical quotations on tombstones, (but) I think those two particular quotations he finds on the tombstones at Godric 's Hollow, they (...) almost epitomize the whole series. '' Deathly Hallows begins with a pair of epigraphs, one from Quaker leader William Penn 's More Fruits of Solitude and one from Aeschylus ' The Libation Bearers. "I really enjoyed choosing those two quotations because one is pagan, of course, and one is from a Christian tradition '', Rowling said. "I 'd known it was going to be those two passages since ' Chamber ' was published. 
I always knew (that) if I could use them at the beginning of book seven then I 'd cued up the ending perfectly. If they were relevant, then I went where I needed to go. They just say it all to me, they really do. '' In a July 2007 webchat hosted by her publisher Bloomsbury, Rowling stated that The Pardoner 's Tale of Geoffrey Chaucer 's Canterbury Tales was an inspiration for a folktale retold by Xenophilius Lovegood in Harry Potter and the Deathly Hallows. In the tale, three brothers outwit Death by magicking a bridge to cross a dangerous river. Death, angry at being cheated, offers to give them three gifts, the Deathly Hallows, as a reward for evading him. The first two die as a result of the gifts granted to them, but the third uses his gift wisely and dies in his bed an old man. In The Pardoner 's Tale, three rogues are told that if they look under a tree, they can find a means to defeat Death. Instead they find gold, and, overcome with greed, eventually kill each other to possess it. Rowling has cited William Shakespeare 's Macbeth as an influence. In an interview with The Leaky Cauldron and MuggleNet, when asked, "What if (Voldemort) never heard the prophecy? '', she said, "It 's the Macbeth idea. I absolutely adore Macbeth. It is possibly my favourite Shakespeare play. And that 's the question is n't it? If Macbeth had n't met the witches, would he have killed Duncan? Would any of it have happened? Is it fated or did he make it happen? I believe he made it happen. '' On her website, she referred to Macbeth again in discussing the prophecy: "the prophecy (like the one the witches make to Macbeth, if anyone has read the play of the same name) becomes the catalyst for a situation that would never have occurred if it had not been made. '' Rowling cites Jane Austen as her favourite author and a major influence. 
Rowling has said: "My attitude to Jane Austen is accurately summed up by that wonderful line from Cold Comfort Farm: ' One of the disadvantages of almost universal education was that all kinds of people gained a familiarity with one 's favourite books. It gave one a curious feeling; like seeing a drunken stranger wrapped in one 's dressing gown. ' '' The Harry Potter series is known for its twist endings, and Rowling has stated that, "I have never set up a surprise ending in a Harry Potter book without knowing I can never, and will never, do it anywhere near as well as Austen did in Emma. '' Cold Comfort Farm (1932) by Stella Gibbons is itself a notable and influential novel. Rowling frequently mentions E. Nesbit in interview, citing her "very real '' child characters. In 2000, she said, "I think I identify with E Nesbit more than any other writer '', and described Nesbit 's The Story of the Treasure Seekers as, "Exhibit A for prohibition of all children 's literature by anyone who can not remember exactly how it felt to be a child. '' In a 2007 reading for students in New Orleans, Rowling said that the first book to inspire her was Kenneth Grahame 's children 's fantasy The Wind in the Willows, read to her when she had the measles at age 4. Rowling has also cited the work of Christian essayist and mystery writer Dorothy L. Sayers as an influence on her work, saying "There 's a theory -- this applies to detective novels, and then Harry, which is not really a detective novel, but it feels like one sometimes -- that you should not have romantic intrigue in a detective book. Dorothy L. Sayers, who is queen of the genre said -- and then broke her own rule, but said -- that there is no place for romance in a detective story except that it can be useful to camouflage other people 's motives. That 's true; it is a very useful trick. I 've used that on Percy and I 've used that to a degree on Tonks in this book, as a red herring. 
But having said that, I disagree inasmuch as mine are very character - driven books, and it 's so important, therefore, that we see these characters fall in love, which is a necessary part of life. '' Rowling has said she was a fan of the works of C.S. Lewis as a child, and cites the influence of his Narnia chronicles on her work: "I found myself thinking about the wardrobe route to Narnia when Harry is told he has to hurl himself at a barrier in King 's Cross station -- it dissolves and he 's on platform Nine and Three - Quarters, and there 's the train for Hogwarts. '' She is, however, at pains to stress the differences between Narnia and her world: "Narnia is literally a different world '', she says, "whereas in the Harry books you go into a world within a world that you can see if you happen to belong. A lot of the humour comes from collisions between the magic and the everyday world. Generally there is n't much humour in the Narnia books, although I adored them when I was a child. I got so caught up I did n't think CS Lewis was especially preachy. Reading them now I find that his subliminal message is n't very subliminal. '' New York Times writer Charles McGrath notes the similarity between Dudley Dursley, the obnoxious son of Harry 's neglectful guardians, and Eustace Scrubb, the spoiled brat who torments the main characters until converted by Aslan. In an interview in The Scotsman in 2002, Rowling described Elizabeth Goudge 's The Little White Horse as having, "perhaps more than any other book... a direct influence on the Harry Potter books. The author always included details of what her characters were eating and I remember liking that. You may have noticed that I always list the food being eaten at Hogwarts. '' Rowling said in O that "Goudge was the only (author) whose influence I was conscious of. She always described exactly what the children were eating, and I really liked knowing what they had in their sandwiches. '' Rowling also cites the work of T.H. 
White, a grammar school teacher, and the author of the well - known children 's classic saga, The Once and Future King, which tells the story of King Arthur of Britain, from childhood to grave. Perhaps the best - known book from this saga is The Sword in the Stone (the first book) which was made into an animated movie by Walt Disney. Arthur (called Wart) is a small scruffy - haired orphan, who meets the wizard Merlin (who has an owl, Archimedes, and acts, much like Dumbledore, in the manner of an "absent - minded professor '') who takes him to a castle to educate him. As writer Phyllis Morris notes, "The parallels between Dumbledore and Merlin do not end with the protection of the hero in danger... In addition to both characters sporting long, flowing beards (and blue eyes, according to T.H. White), Merlin was King Arthur 's mentor and guide, as Dumbledore has been Harry 's guide and mentor. '' Rowling describes Wart as "Harry 's spiritual ancestor. '' Rowling is also a fan of Paul Gallico, "especially Manxmouse. That 's a great book. Gallico manages the fine line between magic and reality so skilfully, to the point where the most fantastic events feel plausible. '' In the Scotsman interview, Rowling described civil rights activist Jessica Mitford as "my most influential writer '', saying, "I love the way she never outgrew some of her adolescent traits, remaining true to her politics -- she was a self - taught socialist -- throughout her life. 
" In a review of Decca -- The Letters of Jessica Mitford, she went further, saying, "Jessica Mitford has been my heroine since I was 14 years old, when I overheard my formidable great-aunt discussing how Mitford had run away at the age of 19 to fight with the Reds in the Spanish Civil War", and claims what inspired her about Mitford was that she was "incurably and instinctively rebellious, brave, adventurous, funny and irreverent, she liked nothing better than a good fight, preferably against a pompous and hypocritical target." In 1999, while Rowling was on a tour of the United States, a bookseller handed her a copy of I Capture the Castle by Dodie Smith, saying she would love it. The book became one of her all-time favourites. Rowling says that "it is the voice of the narrator, in this case 17-year-old Cassandra Mortmain, which makes a masterpiece out of an old plot." Also in 1999, Rowling said in an interview that she was a great fan of Grimble by Clement Freud, saying: "Grimble is one of the funniest books I've ever read, and Grimble himself, who is a small boy, is a fabulous character. I'd love to see a Grimble film. As far as I know, these last two fine pieces of literature are out of print, so if any publishers ever read this, could you please dust them off and put them back in print so other people can read them?" On a number of occasions, Rowling has cited her admiration for the French novelist Colette. Rowling said that the death of Charles Darnay in Charles Dickens's A Tale of Two Cities, and the novel's final line, "It is a far, far better thing that I do than I have ever done; it is a far, far better rest that I go to than I have ever known", had a profound impact on her.
In a 2000 interview with BBC Radio 4, Rowling revealed a deep love of Vladimir Nabokov 's controversial book Lolita, saying, "There just is n't enough time to discuss how a plot that could have been the most worthless pornography becomes, in Nabokov 's hands, a great and tragic love story, and I could exhaust my reservoir of superlatives trying to describe the quality of the writing. '' In an interview with O: The Oprah Magazine, Rowling described Irish author Roddy Doyle as her favourite living writer, saying, "I love all his books. I often talk about him and Jane Austen in the same breath. I think people are slightly mystified by that because superficially they 're such different writers. But they both have a very unsentimental approach to human nature. They can be profoundly moving without ever becoming mawkish. '' Many of Rowling 's named favorites decorate the links section of her personal webpage. The section is designed to look like a bookcase, and includes I Capture the Castle, The Little White Horse and Manxmouse, Jane Austen 's Pride and Prejudice, Sense and Sensibility and Emma, a book of fairy tales by E. Nesbit, The Commitments and The Van by Roddy Doyle, two books by Dorothy L. Sayers and a book by Katherine Mansfield. In January 2006, Rowling was asked by the Royal Society of Literature to nominate her top ten books every child should read. Included in her list were Wuthering Heights by Emily Brontë, Charlie and the Chocolate Factory by Roald Dahl, Robinson Crusoe by Daniel Defoe, David Copperfield by Charles Dickens, Hamlet by William Shakespeare, To Kill a Mockingbird by Harper Lee, Animal Farm by George Orwell, The Tale of Two Bad Mice by Beatrix Potter, The Catcher in the Rye by J.D. Salinger and Catch - 22 by Joseph Heller. There are a number of fictional works to which Harry Potter has been repeatedly compared in the media. 
Some of these Rowling has herself mentioned, others have been mentioned by Internet sites, journalists, critics or other authors. The works are listed roughly in order of creation. John Granger sees Chamber of Secrets as similar to a morality play like John Bunyan 's The Pilgrim 's Progress. He describes the climax, where Harry descends to the Chamber of Secrets to rescue Ginny Weasley as "the clearest Christian allegory of salvation history since Lewis 's The Lion, the Witch, and the Wardrobe.... Using only traditional symbols, from the ' Ancient of Days ' figure as God the Father to the satanic serpent and Christ - like phoenix (' the Resurrection Bird '), the drama takes us from the fall to eternal life without a hitch. '' In 2006, Rowling recommended Emily Brontë 's Gothic post-Romantic Wuthering Heights as number one of the top ten books every child should read. In her essay, "To Sir With Love '' in the book Mapping the World of Harry Potter, Joyce Millman suggests that Severus Snape, Harry Potter 's morally ambiguous potions master, is drawn from a tradition of Byronic heroes such as Wuthering Heights ' Heathcliff and that chapter two of Harry Potter and the Half - Blood Prince is reminiscent of the opening of Wuthering Heights when Heathcliff is coldly introduced and asks his servant Joseph to bring up wine for him and Lockwood. Snape commands the almost identical line to his servant Wormtail, with Snape described similarly to how Emily Brontë described Heathcliff. The Harry Potter series draws upon a long literary tradition of stories set in boarding schools. This school story genre originated in the Victorian era with Tom Brown 's Schooldays, by Thomas Hughes. Tom Brown 's Schooldays laid down a basic structure which has been widely imitated, for example in Anthony Buckeridge 's 1950s Jennings books. Both Tom Brown 's Schooldays and Harry Potter involve an average eleven - year - old, better at sport than academic study, who is sent to boarding school. 
Upon arrival, the boy gains a best friend (in Tom 's case, East, in Harry 's case, Ron Weasley) who helps him adjust to the new environment. They are set upon by an arrogant bully -- in Tom Brown 's case, Harry Flashman, in Harry 's case Draco Malfoy. Stephen Fry, who both narrates the British audio adaptations of the Harry Potter novels and has starred in a screen adaptation of Tom Brown, has commented many times about the similarities between the two books. "Harry Potter -- a boy who arrives in this strange school to board for the first time and makes good, solid friends and also enemies who use bullying and unfair tactics '', notes Fry, "then is ambiguous about whether or not he is going to be good or bad. His pluck and his endeavour, loyalty, good nature and bravery are the things that carry him through -- and that is the story of Tom Brown 's Schooldays ''. Fans of author J.R.R. Tolkien have drawn attention to the similarities between his novel The Lord of the Rings and the Harry Potter series; specifically Tolkien 's Wormtongue and Rowling 's Wormtail, Tolkien 's Shelob and Rowling 's Aragog, Tolkien 's Gandalf and Rowling 's Dumbledore, Tolkien 's Nazgûl and Rowling 's Dementors, Old Man Willow and the Whomping Willow and the similarities between both authors ' antagonists, Tolkien 's Dark Lord Sauron and Rowling 's Lord Voldemort (both of whom are sometimes within their respective continuities unnamed due to intense fear surrounding their names; both often referred to as ' The Dark Lord '; and both of whom are, during the time when the main action takes place, seeking to recover their lost power after having been considered dead or at least no longer a threat). Several reviews of Harry Potter and the Deathly Hallows noted that the locket used as a horcrux by Voldemort bore comparison to Tolkien 's One Ring, as it negatively affects the personality of the wearer. 
Rowling maintains that she had not read The Hobbit until after she completed the first Harry Potter novel (though she had read The Lord of the Rings as a teenager) and that any similarities between her books and Tolkien 's are "Fairly superficial. Tolkien created a whole new mythology, which I would never claim to have done. On the other hand, I think I have better jokes. '' Tolkienian scholar Tom Shippey has maintained that "no modern writer of epic fantasy has managed to escape the mark of Tolkien, no matter how hard many of them have tried ''. Many have drawn attention to the similarities between Rowling 's works and those of Roald Dahl, particularly in the depiction of the Dursley family, which echoes the nightmarish guardians seen in many of Dahl 's books, such as the Wormwoods from Matilda, Aunt Sponge and Aunt Spiker from James and the Giant Peach, and Grandma from George 's Marvellous Medicine. Rowling acknowledges that there are similarities, but believes that at a deeper level, her works are different from those of Dahl; in her words, more "moral ''. The Marvel Comics superhero team the X-Men, created by Stan Lee and Jack Kirby in 1963, are similar to Harry Potter in their examination of prejudice and intolerance. Comic book historian Michael Mallory examined the original premise of the comic, in which teenage mutants study under Professor X to learn how to control their abilities, safe from fearful Homo sapiens, and also battle less benign mutants like Magneto. He argued, "Think about (the comic) clad in traditional British university robes and pointy hats, castles and trains, and the image that springs to mind is Hogwarts School for Witchcraft and Wizard (ry), with Dumbledore, Voldemort and the class struggle between wizards and muggles. 
" He acknowledged that while the X-Men was for the longest time "a phenomenon that was largely contained in the realm of comic book readers as opposed to the wider public (such as Rowling)", he argued "nothing exists in a vacuum, least of all popular culture. Just as the creators of X-Men consciously or unconsciously tapped into the creative ether of their time for inspiration, so has the X-Men phenomenon had an effect on the books and films that have since followed." Lloyd Alexander's five-volume Prydain Chronicles, begun in 1964 with The Book of Three and concluded in 1968 with The High King, features a young protagonist, an assistant pig keeper named Taran, who wishes to be a great hero in a world drawn from Welsh mythology. Entertainment Weekly cited Lloyd Alexander as a possible influence on Rowling when it named her its 2007 Entertainer of the Year. When Alexander died in 2007, his obituary in New York Magazine drew many comparisons between Harry Potter and Prydain and said that "The High King is everything we desperately hope Harry Potter and the Deathly Hallows will be." Susan Cooper's Dark Is Rising sequence (which commenced with Over Sea, Under Stone in 1965 and is now more commonly bound in a single volume) has been compared to the Harry Potter series. The second novel, also called The Dark Is Rising, features a young boy named Will Stanton who discovers on his eleventh birthday that he is in fact imbued with magical power; in Will's case, that he is the last of the Old Ones, beings empowered by the Light to battle the Dark. The books open in much the same way, with Will finding that people are telling him strange things and that animals run from him. John Hodge, who wrote the screenplay for the film adaptation, entitled The Seeker, made substantial changes to the novel's plot and tone to differentiate it from Harry Potter. The basic premise of Ursula K.
Le Guin 's A Wizard of Earthsea (Parnassus, 1968), in which a boy with unusual aptitude for magic is recognised, and sent to a special school for wizards, resembles that of Harry Potter. Ged also receives a scar in his struggle with the shadow which hurts whenever the shadow is near him like Harry Potter 's scar when Voldemort is near him. Le Guin has claimed that she does n't feel Rowling "ripped her off '', but that she felt that Rowling 's books were overpraised for supposed originality, and that Rowling "could have been more gracious about her predecessors. My incredulity was at the critics who found the first book wonderfully original. She has many virtues, but originality is n't one of them. That hurt. '' Many critics have noted that Jill Murphy 's The Worst Witch series (first published in 1974 by Allison & Busby), is set in a school for girls, "Miss Cackle 's Academy for Witches '', reminiscent of Hogwarts. The story concerns an awkward pupil at a boarding - school for witches, who faces a scheming rival student. Her professors include a kindly and elderly headmistress and a bullying, raven - haired potions teacher. Murphy has commented on her frustration at constant comparisons between her work and Harry Potter: "It 's irritating... everyone asks the same question and I even get children writing to ask me whether I mind about the Hogwarts school of witchcraft and pointing out similarities. Even worse are reviewers who come across my books, or see the TV series, and, without taking the trouble to find out that it 's now over quarter of a century since I wrote my first book, make pointed remarks about ' clever timing ' -- or say things like ' the Worst Witch stories are not a million miles from JK Rowling 's books '. The implications are really quite insulting! '' The Harry Potter series shares many similarities with George Lucas 's Star Wars with respect to main characters, especially heroes and villains, as well as story plotlines. 
Scholar Deborah Cartmell states that Harry Potter's story is based as much on Star Wars as it is on any other text. The life of Harry Potter, the main hero of the series, parallels that of Luke Skywalker, the main hero of the original Star Wars trilogy: both characters live dull and ordinary lives until they are recruited by an older mentor. Harry Potter trains to become a wizard in his late childhood and is mentored by Albus Dumbledore in facing his destiny and his enemy Lord Voldemort, whereas Luke Skywalker trains to become a Jedi in his early adulthood and is mentored by Obi-Wan Kenobi in facing his destiny and his enemy Darth Vader (also known as Lord Vader). Both characters were also brought as infants to their foster families directly by their future mentors. The main villains of both franchises likewise share many similarities. Tom Riddle was once a student of the hero's mentor, Dumbledore, at Hogwarts, studying to be a wizard before he turned evil and transformed into Voldemort. Likewise, Anakin Skywalker was a student of the hero's mentor, Obi-Wan Kenobi, with the Jedi Order, training to be a Jedi Knight before he turned to the Dark Side and transformed into Darth Vader. The mentors of the main heroes also share many parallels. Both mentored the main villain of their story before he turned bad and betrayed them, and both were eventually killed when fighting their former students. Albus Dumbledore was betrayed by Tom Riddle before eventually being killed by him as Lord Voldemort (through Draco Malfoy and Snape). Obi-Wan was betrayed by Anakin Skywalker before eventually being killed by him as Darth Vader. Both also voluntarily allowed themselves to be killed and advised the hero from beyond the grave. Both stories have a "Dark Side" whose adherents are the villains of the story, together with their own followers and apprentices.
Both stories also have a prophesied "Chosen One '' who will destroy evil. In the Harry Potter series, it is Harry Potter who is the chosen one who would defeat the Dark Lord Voldemort. In Star Wars, it is presumed and appears to be Luke Skywalker, but actually revealed to be Anakin Skywalker as proclaimed in the Jedi prophecy who would destroy the Sith and bring balance to The Force. He does this after being redeemed by his son, Luke Skywalker. More recent theories contrast this and argue that Luke is indeed the chosen one who will bring balance to The Force. In Diana Wynne Jones ' Charmed Life (1977), two orphaned children receive magical education while living in a castle. The setting is a world resembling early 1900s Britain, where magic is commonplace. Diana Wynne Jones has stated in answer to a question on her webpage: "I think Ms Rowling did get quite a few of her ideas from my books -- though I have never met her, so I have never been able to ask her. My books were written many years before the Harry Potter books (Charmed Life was first published in 1977), so any similarities probably come from what she herself read as a child. Once a book is published, out in the world, it is sort of common property, for people to take ideas from and use, and I think this is what happened to my books. '' Before the arrival of J.K. Rowling, Britain 's bestselling author was comic fantasy writer Terry Pratchett. His Discworld books, beginning with The Colour of Magic in 1983, satirise and parody common fantasy literature conventions. Pratchett was repeatedly asked if he "got '' his idea for his magic college, the Unseen University, from Harry Potter 's Hogwarts, or if the young wizard Ponder Stibbons, who has dark hair and glasses, was inspired by Harry Potter. Both in fact predate Rowling 's work by several years; Pratchett jokingly claimed that he did steal them, though "I of course used a time machine. 
'' The BBC and other British news agencies emphasised a supposed rivalry between Pratchett and Rowling, but Pratchett said on record that, while he did not put Rowling on a pedestal, he did not consider her a bad writer, nor did he envy her success. Claims of rivalry were due to a letter he wrote to The Sunday Times, about an article published declaring that fantasy "looks backward to an idealised, romanticised, pseudofeudal world, where knights and ladies morris - dance to Greensleeves ''. Actually, he was protesting the ineptitude of journalists in that genre, many of whom did not research their work and, in this case, contradicted themselves in the same article. Science fiction author Orson Scott Card, in a fierce editorial in response to Rowling 's copyright lawsuit against the Harry Potter Lexicon, claimed that her assertion that she had had her "words stolen '' was rendered moot by the fact that he could draw numerous comparisons between her books and his own 1985 novel Ender 's Game; in his words, A young kid growing up in an oppressive family situation suddenly learns that he is one of a special class of children with special abilities, who are to be educated in a remote training facility where student life is dominated by an intense game played by teams flying in midair, at which this kid turns out to be exceptionally talented and a natural leader. He trains other kids in unauthorised extra sessions, which enrages his enemies, who attack him with the intention of killing him; but he is protected by his loyal, brilliant friends and gains strength from the love of some of his family members. He is given special guidance by an older man of legendary accomplishments who previously kept the enemy at bay. He goes on to become the crucial figure in a struggle against an unseen enemy who threatens the whole world. 
Chris Columbus, who directed the first two Harry Potter film adaptations, has cited the 1985 film Young Sherlock Holmes, which he wrote, as an influence in his direction for those films. "That was sort of a predecessor to this movie, in a sense '', he told the BBC in 2001, "It was about two young boys and a girl in a British boarding school who had to fight a supernatural force. '' Scenes from Young Sherlock Holmes were subsequently used to cast the first Harry Potter film. On 3 January 2010, Irish journalist Declan Lynch (writing in The Sunday Independent) stated that "there 's more than a hint of young Sherlock evident in Harry ''. The 1986 Charles Band - produced low - budget horror / fantasy film Troll, directed by John Carl Buechler and starring Noah Hathaway, Julia Louis - Dreyfus and Sonny Bono, features a character named "Harry Potter Jr. '' In an interview with M.J. Simpson, Band claimed: "I 've heard that J.K. Rowling has acknowledged that maybe she saw this low - budget movie and perhaps it inspired her. '' However, a spokesman for Rowling, responding to the rumors of a planned remake of the film, has denied that Rowling ever saw it before writing her book. Rowling has said on record multiple times that the name "Harry Potter '' was derived in part from a childhood friend, Ian Potter, and in part from her favourite male name, Harry. On 13 April 2008, The Mail on Sunday wrote a news article claiming that Warner Bros. had begun a legal action against Buechler; however, the story was denied and lawyers for Rowling demanded the article be removed. On 14 April 2008, John Buechler 's partner in the Troll remake, Peter Davy, said about Harry Potter, "In John 's opinion, he created the first Harry Potter. J.K. Rowling says the idea just came to her. John does n't think so. There are a lot of similarities between the theme of her books and the original Troll. John was shocked when she came out with Harry Potter. 
" Groosham Grange (first published in 1988), a novel by the best-selling British author Anthony Horowitz, has been cited for its similarities with Harry Potter; the plot revolves around David Eliot, a young teenager mistreated by his parents, who receives an unexpected call from an isolated boarding school, Groosham Grange, which reveals itself as a school for wizards and witches. Both books feature a teacher who is a ghost, a werewolf character named after the French word for "wolf" (Lupin / Leloup), and passage to the school via railway train. Horowitz, however, while acknowledging the similarities, simply thanked Rowling for her contribution to the development of young adult fiction in the UK. Fans of the comic book series The Books of Magic, by Neil Gaiman (first published in 1990 by DC Comics), have cited similarities to the Harry Potter story. These include a dark-haired English boy with glasses, named Timothy Hunter, who discovers his potential as the most powerful wizard of the age upon being approached by magic-wielding individuals, the first of whom makes him a gift of a pet owl. The similarities led the British tabloid paper the Daily Mirror to claim Gaiman had made accusations of plagiarism against Rowling, which he went on record denying, saying the similarities were either coincidence or drawn from the same fantasy archetypes. "I thought we were both just stealing from T.H. White", he said in an interview, "very straightforward." Dylan Horrocks, writer of the Books of Magic spin-off Hunter: The Age of Magic, has said they should be considered as similar works in the same genre, and that both have parallels with earlier schoolboy wizards, like the 2000 AD character Luke Kirby. The text adventure game Spellcasting 101: Sorcerers Get All the Girls (1990) is the first installment of the Spellcasting series created by Steve Meretzky during his time at Legend Entertainment.
All three games in the series tell the story of young Ernie Eaglebeak, a bespectacled student at the prestigious Sorcerer University, as he progresses through his studies, learning the arcana of magic, taking part in student life and occasionally saving the world as he knows it. Each game takes place over consecutive school years as well, much like the Harry Potter books. In 1991, the author Jane Yolen released a book called Wizard's Hall, to which the Harry Potter series bears a resemblance. The main protagonist, Henry (also called Thornmallow), is a young boy who joins a magical school for young wizards. At the school "he must fulfill an ancient prophecy and help overthrow a powerful, evil wizard." However, Yolen has stated that "I'm pretty sure she never read my book," attributing similarities to commonly used fantasy tropes. In an interview with the magazine Newsweek, Yolen said, "I always tell people that if Ms. Rowling would like to cut me a very large check, I would cash it." Yolen stopped reading Harry Potter after the third book, and has expressed dislike for the writing style of Harry Potter, calling it "fantasy fast food". Eva Ibbotson's The Secret of Platform 13 (first published in 1994) features a gateway to a magical world located in King's Cross station in London. The protagonist belongs to the magical world but is raised in the normal world by a rich family who neglect him and treat him as a servant, while their fat and unpleasant biological son is pampered and spoiled. The journalist Amanda Craig has written about the similarities: "Ibbotson would seem to have at least as good a case for claiming plagiarism as the American author currently suing J.K. Rowling (i.e. Nancy Stouffer), but unlike her, Ibbotson says she would 'like to shake her by the hand. I think we all borrow from each other as writers.'"
The Kamakhya Temple, also Kamrup-Kamakhya, is a Hindu temple dedicated to the mother goddess Kamakhya. It is one of the oldest of the 51 Shakti Pithas. Situated on the Nilachal Hill in the western part of Guwahati city in Assam, India, it is the main temple in a complex of individual temples dedicated to the ten Mahavidyas: Kali, Tara, Sodashi, Bhuvaneshwari, Bhairavi, Chhinnamasta, Dhumavati, Bagalamukhi, Matangi and Kamalatmika. Among these, Tripurasundari, Matangi and Kamala reside inside the main temple whereas the other seven reside in individual temples. It is an important pilgrimage destination for Hindus and especially for Tantric worshipers. A scholarly study of the Kamakhya Temple was authored by Kali Prasad Goswami. In July 2015, the Supreme Court of India transferred the administration of the temple from the Kamakhya Debutter Board to the Bordewri Samaj. The current structural temple, built and renovated many times in the period from the 8th to the 17th century, gave rise to a hybrid indigenous style that is sometimes called the Nilachal type: a temple with a hemispherical dome on a cruciform base. The temple consists of four chambers: the garbhagriha and three mandapas, locally called calanta, pancharatna and natamandira, aligned from east to west. The garbhagriha has a pancharatha plan that rests on plinth moldings similar to those of the Surya Temple at Tezpur. On top of the plinths are dados from a later period which are of the Khajuraho or Central Indian type, consisting of sunken panels alternating with pilasters. The panels have delightful sculptures of Ganesha and other Hindu gods and goddesses. Though the lower portion is of stone, the shikhara in the shape of a polygonal beehive-like dome is made of brick, which is characteristic of temples in Kamrup. The shikhara is circled by a number of minaret-inspired angashikharas of the Bengal-type charchala.
The shikhara, angashikharas and other chambers were built in the 16th century and after. The inner sanctum, the garbhagriha, is below ground level and contains no image but a rock fissure in the shape of a yoni (female genitalia): the garbhagriha is small and dark, and is reached by narrow, steep stone steps. Inside the cave there is a sheet of stone that slopes downwards from both sides, meeting in a yoni-like depression some 10 inches deep. This hollow is constantly filled with water from an underground perennial spring. It is the vulva-shaped depression that is worshiped as the goddess Kamakhya herself and considered the most important pitha (abode) of the Devi. The garbhagrihas of the other temples in the Kamakhya complex follow the same structure: a yoni-shaped stone, filled with water and below ground level. The temple consists of three additional chambers. The first to the west is the calanta, a square chamber of the atchala type (similar to the 1659 Radha-Vinod Temple of Bishnupur). The entrance to the temple is generally via its northern door, which is of the Ahom dochala type. It houses a small movable idol of the goddess, a later addition, which explains the name. The walls of this chamber contain sculpted images of Naranarayana, related inscriptions and other gods. It leads into the garbhagriha via descending steps. The pancharatna to the west of the calanta is large and rectangular, with a flat roof and five smaller shikharas of the same style as the main shikhara. The middle shikhara is slightly bigger than the other four. The natamandira extends to the west of the pancharatna, with an apsidal end and a ridged roof of the Ranghar-type Ahom style. Its inside walls bear inscriptions from Rajeswar Singha (1759) and Gaurinath Singha (1782), which indicate the period in which this structure was built.
The earliest historical dynasty of Kamarupa, the Varmans (350-650), as well as Xuanzang, a 7th-century Chinese traveler, make no mention of Kamakhya; it is assumed that worship at least until that period was Kirata-based, outside the brahminical ambit. The first epigraphic notice of Kamakhya is found in the 9th-century Tezpur plates of Vanamalavarmadeva of the Mlechchha dynasty. Since the archaeological evidence too points to a massive 8th-9th century temple, it can be safely assumed that the earliest temple was constructed during the Mlechchha dynasty. From the moldings of the plinth and the bandhana, the original temple was clearly of the Nagara type, possibly of the Malava style. The later Palas of Kamarupa kings, from Indra Pala to Dharma Pala, were followers of the Tantrik tenet, and in that period Kamakhya became an important seat of Tantrikism. The Kalika Purana (10th century) was composed, and Kamakhya soon became a renowned centre of Tantrik sacrifices, mysticism and sorcery. Mystic Buddhism, known as Vajrayana and popularly called the "Sahajia cult", also rose to prominence in Kamarupa in the tenth century. Tibetan records show that some of the eminent Buddhist professors in Tibet of the tenth and eleventh centuries hailed from Kamarupa. The Kalika Purana gives the Sanskritized names of most of the rivers and hills of the Brahmaputra valley. It gives a full account of the Naraka legend, the physical description of the land and the old city of Pragjyotishpura, as well as the special merit and sanctity of the Kamakhya Temple. There is a tradition that the temple was destroyed by Kalapahar, a general of Sulaiman Karrani (1566-1572). Since the date of reconstruction (1565) precedes the possible date of destruction, and since Kalapahar is not known to have ventured so far to the east, it is now believed that the temple was destroyed not by Kalapahar but during Hussein Shah's invasion of the Kamata kingdom (1498).
The ruins of the temple were said to have been discovered by Vishwasingha (1515 -- 1540), the founder of the Koch dynasty, who revived worship at the site; but it was during the reign of his son, Nara Narayan (1540 -- 1587), that the temple reconstruction was completed in 1565. The reconstruction used material from the original temples that was lying scattered about, some of which still exists today. Banerji (1925) records that this structure was further built over by the rulers of the Ahom kingdom. According to historical records and epigraphic evidence, the main temple was rebuilt by Chilarai using the available stone ruins, with the brick dome being an innovation. The current final structure was rebuilt during the Ahom times, with remnants of the earlier Koch temple carefully preserved. According to a legend, the Koch Bihar royal family was banned by Kamakhya Devi herself from offering puja at the temple. In fear of this curse, to this day no descendant of that family dares even to look upward towards the Kamakhya hill while passing by. Without the support of the Koch royal family the temple faced a lot of hardship. By the end of 1658, the Ahoms under king Jayadhvaj Singha had conquered Kamrup, and their interest in the temple grew. In the decades that followed, the Ahom kings, all of whom were either devout Shaivites or Shaktas, continued to support the temple by rebuilding and renovating it. Rudra Singha (reign 1696 to 1714) was a devout Hindu, and as he grew older he decided to formally embrace the religion and become an orthodox Hindu by being initiated, or taking sharan, of a Guru, who would teach him the mantras and become his spiritual guide. But he could not bear the thought of humbling himself in front of a Brahmin who was his subject. He therefore sent envoys to Bengal and summoned Krishnaram Bhattacharyya, a famous mahant of the Shakta sect who lived in Malipota, near Santipur in Nadia district. 
The mahant was unwilling to come, but consented on being promised the care of the Kamakhya temple. Though the king did not take sharan (shelter), he satisfied the mahant by ordering his sons and the Brahmins in his entourage to accept him as their spiritual guru. When Rudra Singha died, his eldest son Siba Singha (reign 1714 to 1744), who became the king, gave the management of the Kamakhya temple, along with large areas of land (Debottar land), to Mahant Krishnaram Bhattacharyya. The Mahant and his successors came to be known as Parbatiya Gosains, as they resided on top of the Nilachal hill. Many Kamakhya priests and modern Saktas of Assam are either disciples or descendants of the Parbatiya Gosains, or of the Nati and Na Gosains. It is likely that this is an ancient Khasi sacrificial site, and worship here still includes sacrifices. Devotees come every morning with goats to offer to Shakti. The Kalika Purana, an ancient work in Sanskrit, describes Kamakhya as the yielder of all desires, the young bride of Shiva, and the giver of salvation. Shakti is known as Kamakhya. Symbolic of this is a special form of sindoor available here, made from rock and called Kamakhya Sindoor, which is believed to be a blessing bestowed by Kamakhya Devi herself on the wearer. Tantra is basic to worship in the precincts of this ancient temple of the mother goddess Kamakhya. Kamakhya Devi Mantra कामाख्ये काम - संपन्ने, कामेश्वरी! हर - प्रिये । कामनां देहि में नत्यिं, कामेश्वरि! नमास्तु ते । । कामदे काम - रूपस्थे, सुभगे सुर - सेविते । करोमि दर्शनं देव्याः, सर्व - कामार्थ - सिद्धये । । Kamakhya Devi Tantra त्रीं त्रीं त्रीं हूं, हूं स्त्रीं स्त्रीं कामाख्ये प्रसीद स्त्रीं हूं हूं त्रीं त्रीं त्रीं स्वाहा!! 
(22 Letters) Kamakhye Varade Devi Neela Parvata Vasini Tvam Devi Jagatam Mata Yoni Mudre Namostute Kamakhye Kamasampanne Kameshwari Harapriye Kaamanaam Dehi Me Nityam Kaameshwari Namostute Kaamade Kamarupasthe Subhage Sura - sevite Karomi Darshanam Devyah sarva kaamaartha siddhaye The worship of female deities in Assam symbolizes the "fusion of faiths and practices '' of Aryan and non-Aryan elements in Assam. The different names associated with the goddess are names of local Aryan and non-Aryan goddesses. The Yogini Tantra mentions that the religion of the Yogini Pitha is of Kirata origin. According to Banikanta Kakati, there existed a tradition among the priests established by Naranarayana that the Garos, a matrilineal people, offered worship at the earlier Kamakhya site by sacrificing pigs. The goddess is worshiped according to both the Vamachara (Left - Hand Path) as well as the Dakshinachara (Right - Hand Path) modes of worship. Offerings to the goddess are usually flowers, but might include animal sacrifices. In general female animals are exempt from sacrifice, a rule that is relaxed during mass sacrifices. According to the Kalika Purana, Kamakhya Temple denotes the spot where Sati used to retire in secret to satisfy her amour with Shiva, and it was also the place where her yoni (genital) fell after Shiva danced with the corpse of Sati. It mentions Kamakhya as one of four primary shakti peethas, the others being the Vimala Temple within the Jagannath Temple complex in Puri, Odisha; Tara Tarini Sthana Khanda (breasts), near Brahmapur, Odisha; and Dakhina Kalika in Kalighat, Kolkata, West Bengal, all of which originated from the limbs of the corpse of Mata Sati. This is not corroborated in the Devi Bhagavata, which lists 108 places associated with Sati 's body, though Kamakhya finds a mention in a supplementary list. 
The Yogini Tantra, a later work, ignores the origin of Kamakhya given in the Kalika Purana, associates Kamakhya with the goddess Kali, and emphasizes the creative symbolism of the yoni. Vatsyayana, a Vedic sage in Varanasi during the late first century, was approached by the king of the Himalayan region (now Nepal) to find a solution to convert the tribals and their rituals of human sacrifice to a more socially accepted form of worship. The sage suggested the worship of the tantric goddess Tara, which spread towards the eastern Himalayan belt till the Garo Hills, where the tribals worshipped a fertility ' yoni ' goddess ' Kameke '. It was much later, in the Kalika Purana of the later Brahminical period, that most tantric goddesses were related to the legend of ' Shakti ' and began to be erroneously worshiped as a ' devi ' by the Hindus. Being the centre for Tantra worship, this temple attracts thousands of tantra devotees in an annual festival known as the Ambubachi Mela. Another annual celebration is the Manasha Puja. Durga Puja is also celebrated annually at Kamakhya during Navaratri in the autumn. This five - day festival attracts several thousand visitors.
who won season 8 of food truck race
The Great Food Truck Race - wikipedia The Great Food Truck Race is a reality television and cooking series that originally aired on August 15, 2010, on Food Network, with Tyler Florence as the host. Billed as a cross between Cannonball Run and Top Chef, this late summer show features several competing teams of three who drive across the United States in their food trucks and make stops every week to sell food in different cities. Every season, between six and eight food truck teams compete in a race where they must cook, sell, and adapt to different challenges in the hopes of winning $50,000 (and in some cases, their very own food truck). Starting off on the west coast and driving east, every week the food truck that makes the least profit is eliminated and sent home, while the rest of the food trucks continue on to the next city. They 're usually given "seed money '' at the beginning of each episode that goes towards grocery shopping. The teams are assigned different challenges every week for a chance to earn more money (usually in the form of selling the most of a special dish or making a version of a local delicacy for Tyler and a guest judge). They 're also thrown obstacles that hinder their ability to make normal sales (for example, switching their menu to vegan food or being unable to restock supplies for the day). In the first two seasons of The Great Food Truck Race, the competitors were seasoned, professional food truck operators who were competing for a cash prize (the first season 's prize was $50,000 and the second season 's was $100,000). In the following seasons (save for season six), food trucks were provided to novices (from home cooks to former restaurateurs) who had dreamt of owning and operating their very own food truck. In seasons three, four, and five, the winning team got the money and got to keep the food truck they were provided with by the show. In season six and onward, the show reverted to awarding the winning teams only the $50,000. 
Bob Tuschman, general manager of the Food Network, had gotten several pitches for a food truck themed competition show before settling on the show that would become The Great Food Truck Race. He believed it to be ideal because it combined the Survivor - style reality show competition with the rising trend of food trucks. Tyler Florence was immediately on board and as the show grew and got renewed, so did the food truck scene. Florence believed the food truck trend grew in large part because of the economic slump around the early 2000s, and his show "helped invent an entirely new genre of restaurants ''.
who is responsible for protecting citizens from internal threats
Internal security - wikipedia Internal security, or IS, is the act of keeping peace within the borders of a sovereign state or other self - governing territories, generally by upholding the national law and defending against internal security threats. Responsibility for internal security may range from police to paramilitary forces, and in exceptional circumstances, the military itself. Threats to the general peace may range from low - level civil disorder to large - scale violence, or even an armed insurgency. Threats to internal security may be directed at either the state 's citizens, or the organs and infrastructure of the state itself, and may range from petty crime, serious organised crime, political or industrial unrest, or even domestic terrorism. Foreign powers may also act as a threat to internal security, by either committing or sponsoring terrorism or rebellion, without actually declaring war. Governmental responsibility for internal security will generally rest with an interior ministry, as opposed to a defence ministry. Depending on the state, a state 's internal security will be maintained by either the ordinary police or law enforcement agencies, or by more militarised police forces (known as Gendarmerie or, literally, the Internal Troops). Other specialised internal security agencies may exist to augment these main forces, such as border guards, special police units, or aspects of the state 's intelligence agencies. In some states, internal security may be the primary responsibility of a secret police force. The level of authorised force used by agencies and forces responsible for maintaining internal security might range from unarmed police to fully armed paramilitary organisations, or employ some level of less - lethal weaponry in between. For violent situations, internal security forces may contain some element of military - type equipment such as non-military armored vehicles. 
Depending on the organisation of the state, internal security forces may have jurisdiction on national or federal levels. As the concept of internal security refers to the entity of the state and its citizens, persons who are threats to internal security may be designated as an enemy of the state or enemy of the people. Persons detained by internal security forces may either be dealt with by the normal criminal justice system, or for more serious crimes against internal security such as treason, they may face special measures such as secret trials. In times of extreme unrest, internal security actions may include measures such as internment (detention without trial). Depending on the nature of the specific state 's form of government, enforcing internal security will generally not be carried out by a country 's military forces, whose primary role is external defence, except in times of extreme unrest or other state of emergency, short of civil war. Often, military involvement in internal security is explicitly prohibited, or is restricted to authorised military aid to the civil power as part of the principle of civilian control of the military. Military special forces units may in some cases be put under the temporary command of civilian powers, for special internal security situations such as counter terrorism operations.
who ends up with who on baby daddy
List of Baby Daddy episodes - wikipedia Baby Daddy is an American sitcom that premiered on Freeform (then known as ABC Family) on June 20, 2012. The sitcom stars Jean - Luc Bilodeau as Ben Wheeler, a bartender who, while moving his brother Danny Wheeler (Derek Theler) into the apartment Ben shares with best friend Tucker Dobbs (Tahj Mowry), is surprised when Emma, a baby girl, is left on his doorstep by Angela, a girl with whom he had a one - night stand. He gets help from his mother Bonnie Wheeler (Melissa Peterman) and his close female friend Riley Perrin (Chelsea Kane), who is also in love with him. A total of 100 episodes of Baby Daddy were produced over six seasons, ending on May 22, 2017. On August 17, 2012, Baby Daddy was renewed for a second season, which premiered on May 29, 2013. Matt Dallas, who previously worked with Jean - Luc Bilodeau on the ABC Family series Kyle XY, appeared in a recurring role as Fitch Douglas, a love interest for Riley. Both Lacey Chabert and Grace Phipps also had recurring roles this season. Wayne Brady guest starred in the episode "There 's Something Fitchy Going On ''. On March 22, 2013, two months before the second season premiered, ABC Family renewed Baby Daddy for a third season. It premiered on January 15, 2014. Bruce Thomas, who previously worked with Jean - Luc Bilodeau on Kyle XY, guest starred in the episode "The Bet ''. Bilodeau 's other Kyle XY co-star Matt Dallas reprised his role as Fitch in the episode "Lights! Camera! No Action! '', in which Mary Hart also guest starred as herself. Other guest stars this season included Lucy Hale, David DeLuise, Phil Morris, Mark DeCarlo, Kelsey Chow and Dot Jones. On March 17, 2014, Baby Daddy was renewed for a fourth season. It premiered on October 22, 2014, with the series ' first Halloween episode "Strip or Treat ''. The second Christmas episode of the series, entitled "It 's a Wonderful Emma '', premiered on December 10, 2014. 
Aisha Dee guest starred as Olivia, Tucker 's ex-wife. Christa B. Allen appeared in a recurring role as Robyn, a corporate lawyer working at Riley 's law firm, who later begins dating Danny. Eddie Cibrian appeared in a multi-episode arc as Ross, a guy that Riley falls for, later discovering that he is her boss. Jackée Harry guest starred as Judge Johnson in the episode "Lowering the Bar ''. The episode reunited Harry with Tahj Mowry, who made numerous guest appearances on Harry 's 1990s sitcom Sister, Sister. In the episode "Home Is Where the Wheeler Is '', Alex Kapp Horner plays Jennifer, Riley 's mom, a role originally played by Caroline Rhea in the season 2 episode "On The Lamb - y ''. Reba McEntire guest starred in the season finale "It 's a Nice Day for a Wheeler Wedding '', reuniting with her former Reba co-star Melissa Peterman. Episode 20, "Till Dress Do Us Part '', is left off of Baby Daddy Season 4 on Netflix. On February 28, 2015, Baby Daddy was renewed for a fifth season. It premiered on February 3, 2016, under ABC Family 's new name, Freeform. Allie Gonino was cast in a recurring role as Sam Saffe, a girl who applies for the manager position at the Bar on B. Ben also had a crush on her in high school, and she was not very nice to Riley during that time. However, for unknown reasons, Daniella Monet replaced Gonino in the role. Production on the season began on August 17, 2015, and was temporarily halted on October 26, after Jean - Luc Bilodeau was hospitalized the weekend before. On June 27, 2016, Freeform renewed the series for a sixth season. It premiered on March 13, 2017. On May 13, 2017, it was announced that the series would end that season with its 100th episode.
name the two forms of polymorphism in cnidarian
Cnidaria - wikipedia Cnidaria (/ naɪˈdɛəriə /) is a phylum containing over 10,000 species of animals found exclusively in aquatic (freshwater and marine) environments; they are predominantly marine. Their distinguishing feature is cnidocytes, specialized cells that they use mainly for capturing prey. Their bodies consist of mesoglea, a non-living jelly - like substance, sandwiched between two layers of epithelium that are mostly one cell thick. They have two basic body forms: swimming medusae and sessile polyps, both of which are radially symmetrical with mouths surrounded by tentacles that bear cnidocytes. Both forms have a single orifice and body cavity that are used for digestion and respiration. Many cnidarian species produce colonies that are single organisms composed of medusa - like or polyp - like zooids, or both (hence they are trimorphic). Cnidarians ' activities are coordinated by a decentralized nerve net and simple receptors. Several free - swimming species of Cubozoa and Scyphozoa possess balance - sensing statocysts, and some have simple eyes. Not all cnidarians reproduce sexually, and many species have complex life cycles of asexual polyp stages and sexual medusae. Some, however, omit either the polyp or the medusa stage. Cnidarians were formerly grouped with ctenophores in the phylum Coelenterata, but increasing awareness of their differences caused them to be placed in separate phyla. Cnidarians are classified into four main groups: the almost wholly sessile Anthozoa (sea anemones, corals, sea pens); swimming Scyphozoa (jellyfish); Cubozoa (box jellies); and Hydrozoa, a diverse group that includes all the freshwater cnidarians as well as many marine forms, and has both sessile members, such as Hydra, and colonial swimmers, such as the Portuguese Man o ' War. 
Staurozoa have recently been recognised as a class in their own right rather than a sub-group of Scyphozoa, and the parasitic Myxozoa and Polypodiozoa were only firmly recognized as cnidarians in 2007. Most cnidarians prey on organisms ranging in size from plankton to animals several times larger than themselves, but many obtain much of their nutrition from dinoflagellates, and a few are parasites. Many are preyed on by other animals including starfish, sea slugs, fish, turtles, and even other cnidarians. Many scleractinian corals -- which form the structural foundation for coral reefs -- possess polyps that are filled with symbiotic photosynthetic zooxanthellae. While reef - forming corals are almost entirely restricted to warm and shallow marine waters, other cnidarians can be found at great depths, in polar regions, and in freshwater. Recent phylogenetic analyses support monophyly of cnidarians, as well as the position of cnidarians as the sister group of bilaterians. Fossil cnidarians have been found in rocks formed about 580 million years ago, and other fossils show that corals may have been present shortly before 490 million years ago and diversified a few million years later. However, molecular clock analysis of mitochondrial genes suggests a much older age for the crown group of cnidarians, estimated around 741 million years ago, almost 200 million years before the Cambrian period as well as any fossils. Cnidarians form an animal phylum that is more complex than sponges, about as complex as ctenophores (comb jellies), and less complex than bilaterians, which include almost all other animals. However, both cnidarians and ctenophores are more complex than sponges as they have: cells bound by inter-cell connections and carpet - like basement membranes; muscles; nervous systems; and some have sensory organs. Cnidarians are distinguished from all other animals by having cnidocytes that fire harpoon - like structures and are used mainly to capture prey. 
In some species, cnidocytes can also be used as anchors. Like sponges and ctenophores, cnidarians have two main layers of cells that sandwich a middle layer of jelly - like material, which is called the mesoglea in cnidarians; more complex animals have three main cell layers and no intermediate jelly - like layer. Hence, cnidarians and ctenophores have traditionally been labelled diploblastic, along with sponges. However, both cnidarians and ctenophores have a type of muscle that, in more complex animals, arises from the middle cell layer. As a result, some recent text books classify ctenophores as triploblastic, and it has been suggested that cnidarians evolved from triploblastic ancestors. Most adult cnidarians appear as either swimming medusae or sessile polyps, and many hydrozoan species are known to alternate between the two forms. Both are radially symmetrical, like a wheel and a tube respectively. Since these animals have no heads, their ends are described as "oral '' (nearest the mouth) and "aboral '' (furthest from the mouth). Most have fringes of tentacles equipped with cnidocytes around their edges, and medusae generally have an inner ring of tentacles around the mouth. Some hydroids may consist of colonies of zooids that serve different purposes, such as defense, reproduction and catching prey. The mesoglea of polyps is usually thin and often soft, but that of medusae is usually thick and springy, so that it returns to its original shape after muscles around the edge have contracted to squeeze water out, enabling medusae to swim by a sort of jet propulsion. In medusae the only supporting structure is the mesoglea. Hydra and most sea anemones close their mouths when they are not feeding, and the water in the digestive cavity then acts as a hydrostatic skeleton, rather like a water - filled balloon. Other polyps such as Tubularia use columns of water - filled cells for support. 
Sea pens stiffen the mesoglea with calcium carbonate spicules and tough fibrous proteins, rather like sponges. In some colonial polyps, a chitinous periderm gives support and some protection to the connecting sections and to the lower parts of individual polyps. Stony corals secrete massive calcium carbonate exoskeletons. A few polyps collect materials such as sand grains and shell fragments, which they attach to their outsides. Some colonial sea anemones stiffen the mesoglea with sediment particles. Cnidaria are diploblastic animals; in other words, they have two main cell layers, while more complex animals are triploblasts having three main layers. The two main cell layers of cnidarians form epithelia that are mostly one cell thick, and are attached to a fibrous basement membrane, which they secrete. They also secrete the jelly - like mesoglea that separates the layers. The layer that faces outwards, known as the ectoderm ("outside skin ''), generally contains several types of cells, including epitheliomuscular, nerve and interstitial cells. In addition to epitheliomuscular, nerve and interstitial cells, the inward - facing gastroderm ("stomach skin '') contains gland cells that secrete digestive enzymes. In some species it also contains low concentrations of cnidocytes, which are used to subdue prey that is still struggling. The mesoglea contains small numbers of amoeba - like cells, and muscle cells in some species. However, the number of middle - layer cells and cell types is much lower than in sponges. Polymorphism refers to the occurrence of two or more structurally and functionally different types of individuals within the same organism. It is a characteristic feature of cnidarians, seen particularly in the polyp and medusa forms, or in the zooids within colonial organisms like those in Hydrozoa. In hydrozoans, colonial individuals arising from individual zooids will take on separate tasks. 
For example, in Obelia there are feeding individuals, the gastrozooids; individuals capable of asexual reproduction only, the gonozooids and blastostyles; and free - living or sexually reproducing individuals, the medusae. Cnidocytes ("nettle cells '') function as harpoons, since their payloads remain connected to the bodies of the cells by threads. Three types of cnidocytes are known. It is difficult to study the firing mechanisms of cnidocytes, as these structures are small but very complex; at least four hypotheses have been proposed. Cnidocytes can only fire once, and about 25 % of a hydra 's nematocysts are lost from its tentacles when capturing a brine shrimp. Used cnidocytes have to be replaced, which takes about 48 hours. To minimise wasteful firing, two types of stimulus are generally required to trigger cnidocytes: nearby sensory cells detect chemicals in the water, and their cilia respond to contact. This combination prevents them from firing at distant or non-living objects. Groups of cnidocytes are usually connected by nerves and, if one fires, the rest of the group requires a weaker minimum stimulus than the cells that fire first. Medusae swim by a form of jet propulsion: muscles, especially inside the rim of the bell, squeeze water out of the cavity inside the bell, and the springiness of the mesoglea powers the recovery stroke. Since the tissue layers are very thin, they provide too little power to swim against currents and just enough to control movement within currents. Hydras and some sea anemones can move slowly over rocks and sea or stream beds by various means: creeping like snails, crawling like inchworms, or by somersaulting. A few can swim clumsily by waggling their bases. Cnidarians are generally thought to have no brains or even central nervous systems. However, they do have integrative areas of neural tissue that could be considered some form of centralization. 
Most of their bodies are innervated by decentralized nerve nets that control their swimming musculature and connect with sensory structures, though each clade has slightly different structures. These sensory structures, usually called rhopalia, can generate signals in response to various types of stimuli such as light, pressure, and much more. Medusae usually have several of them around the margin of the bell that work together to control the motor nerve net, which directly innervates the swimming muscles. Most cnidarians also have a parallel system. In scyphozoans, this takes the form of a diffuse nerve net, which has modulatory effects on the nervous system. As well as forming the "signal cables '' between sensory neurons and motoneurons, intermediate neurons in the nerve net can also form ganglia that act as local coordination centers. Communication between nerve cells can occur by chemical synapses or gap junctions in hydrozoans, though gap junctions are not present in all groups. Cnidarians have many of the same neurotransmitters as many animals, including chemicals such as glutamate, GABA, and acetylcholine. This structure ensures that the musculature is excited rapidly and simultaneously, can be directly stimulated from any point on the body, and is better able to recover after injury. Medusae and complex swimming colonies such as siphonophores and chondrophores sense tilt and acceleration by means of statocysts, chambers lined with hairs which detect the movements of internal mineral grains called statoliths. If the body tilts in the wrong direction, the animal rights itself by increasing the strength of the swimming movements on the side that is too low. Most species have ocelli ("simple eyes ''), which can detect sources of light. However, the agile box jellyfish are unique among medusae because they possess four kinds of true eyes that have retinas, corneas and lenses. 
Although the eyes probably do not form images, Cubozoa can clearly distinguish the direction from which light is coming as well as negotiate around solid - colored objects. Cnidarians feed in several ways: predation, absorbing dissolved organic chemicals, filtering food particles out of the water, obtaining nutrients from symbiotic algae within their cells, and parasitism. Most obtain the majority of their food from predation but some, including the corals Heteroxenia and Leptogorgia, depend almost completely on their endosymbionts and on absorbing dissolved nutrients. Cnidaria give their symbiotic algae carbon dioxide, some nutrients, a place in the sun and protection against predators. Predatory species use their cnidocytes to poison or entangle prey, and those with venomous nematocysts may start digestion by injecting digestive enzymes. The "smell '' of fluids from wounded prey makes the tentacles fold inwards and wipe the prey off into the mouth. In medusae the tentacles round the edge of the bell are often short and most of the prey capture is done by "oral arms '', which are extensions of the edge of the mouth and are often frilled and sometimes branched to increase their surface area. Medusae often trap prey or suspended food particles by swimming upwards, spreading their tentacles and oral arms and then sinking. In species for which suspended food particles are important, the tentacles and oral arms often have rows of cilia whose beating creates currents that flow towards the mouth, and some produce nets of mucus to trap particles. Their digestion is both intracellular and extracellular. Once the food is in the digestive cavity, gland cells in the gastroderm release enzymes that reduce the prey to slurry, usually within a few hours. This circulates through the digestive cavity and, in colonial cnidarians, through the connecting tunnels, so that gastroderm cells can absorb the nutrients. 
Absorption may take a few hours, and digestion within the cells may take a few days. The circulation of nutrients is driven by water currents produced by cilia in the gastroderm or by muscular movements or both, so that nutrients reach all parts of the digestive cavity. Nutrients reach the outer cell layer by diffusion or, for animals or zooids such as medusae which have thick mesogleas, are transported by mobile cells in the mesoglea. Indigestible remains of prey are expelled through the mouth. The main waste product of cells ' internal processes is ammonia, which is removed by the external and internal water currents. There are no respiratory organs, and both cell layers absorb oxygen from and expel carbon dioxide into the surrounding water. When the water in the digestive cavity becomes stale it must be replaced, and nutrients that have not been absorbed will be expelled with it. Some Anthozoa have ciliated grooves on their tentacles, allowing them to pump water out of and into the digestive cavity without opening the mouth. This improves respiration after feeding and allows these animals, which use the cavity as a hydrostatic skeleton, to control the water pressure in the cavity without expelling undigested food. Cnidaria that carry photosynthetic symbionts may have the opposite problem, an excess of oxygen, which may prove toxic. The animals produce large quantities of antioxidants to neutralize the excess oxygen. All cnidarians can regenerate, allowing them to recover from injury and to reproduce asexually. Medusae have limited ability to regenerate, but polyps can do so from small pieces or even collections of separated cells. This enables corals to recover even after apparently being destroyed by predators. Cnidarian sexual reproduction often involves a complex life cycle with both polyp and medusa stages. For example, in Scyphozoa (jellyfish) and Cubozoa (box jellies) a larva swims until it finds a good site, and then becomes a polyp. 
This grows normally but then absorbs its tentacles and splits horizontally into a series of disks that become juvenile medusae, a process called strobilation. The juveniles swim off and slowly grow to maturity, while the polyp re-grows and may continue strobilating periodically. The adults have gonads in the gastroderm, and these release ova and sperm into the water in the breeding season. This phenomenon of succession of differently organized generations (one asexually reproducing, sessile polyp, followed by a free - swimming medusa or a sessile polyp that reproduces sexually) is sometimes called "alternation of asexual and sexual phases '' or "metagenesis '', but should not be confused with the alternation of generations as found in plants. Shortened forms of this life cycle are common; for example, some oceanic scyphozoans omit the polyp stage completely, and cubozoan polyps produce only one medusa. Hydrozoa have a variety of life cycles. Some have no polyp stages and some (e.g. hydra) have no medusae. In some species, the medusae remain attached to the polyp and are responsible for sexual reproduction; in extreme cases these reproductive zooids may not look much like medusae. Meanwhile, life cycle reversal, in which polyps are formed directly from medusae without the involvement of the sexual reproduction process, has been observed in both Hydrozoa (Turritopsis dohrnii and Laodicea undulata) and Scyphozoa (Aurelia sp. 1). Anthozoa have no medusa stage at all, and the polyps are responsible for sexual reproduction. Spawning is generally driven by environmental factors such as changes in the water temperature, and their release is triggered by lighting conditions such as sunrise, sunset or the phase of the moon. 
Many species of Cnidaria may spawn simultaneously in the same location, so that there are too many ova and sperm for predators to eat more than a tiny percentage; one famous example is the Great Barrier Reef, where at least 110 corals and a few non-cnidarian invertebrates produce enough gametes to turn the water cloudy. These mass spawnings may produce hybrids, some of which can settle and form polyps, but it is not known how long these can survive. In some species the ova release chemicals that attract sperm of the same species. The fertilized eggs develop into larvae by dividing until there are enough cells to form a hollow sphere (blastula); then a depression forms at one end (gastrulation) and eventually becomes the digestive cavity. However, in cnidarians the depression forms at the end further from the yolk (the animal pole), while in bilaterians it forms at the other end (the vegetal pole). The larvae, called planulae, swim or crawl by means of cilia. They are cigar-shaped but slightly broader at the "front" end, which is the aboral, vegetal-pole end and eventually attaches to a substrate if the species has a polyp stage. Anthozoan larvae either have large yolks or are capable of feeding on plankton, and some already have endosymbiotic algae that help to feed them. Since the parents are immobile, these feeding capabilities extend the larvae's range and avoid overcrowding of sites. Scyphozoan and hydrozoan larvae have little yolk and most lack endosymbiotic algae, and therefore have to settle quickly and metamorphose into polyps; these species instead rely on their medusae to extend their ranges. All known cnidarians can reproduce asexually by various means, in addition to regenerating after being fragmented. Hydrozoan polyps only bud, while the medusae of some hydrozoans can divide down the middle. Scyphozoan polyps can both bud and split down the middle. In addition to both of these methods, Anthozoa can split horizontally just above the base.
Asexual reproduction makes the daughter cnidarian a clone of the adult. Cnidarians were for a long time grouped with ctenophores in the phylum Coelenterata, but increasing awareness of their differences caused them to be placed in separate phyla. Modern cnidarians are generally classified into four main classes: sessile Anthozoa (sea anemones, corals, sea pens); swimming Scyphozoa (jellyfish) and Cubozoa (box jellies); and Hydrozoa, a diverse group that includes all the freshwater cnidarians as well as many marine forms, and has both sessile members such as Hydra and colonial swimmers such as the Portuguese Man o' War. Staurozoa have recently been recognised as a class in their own right rather than a sub-group of Scyphozoa, and the parasitic Myxozoa and Polypodiozoa are now recognized as highly derived cnidarians rather than as more closely related to the bilaterians. Stauromedusae, small sessile cnidarians with stalks and no medusa stage, had traditionally been classified as members of the Scyphozoa, but recent research suggests they should be regarded as a separate class, Staurozoa. The Myxozoa, microscopic parasites, were first classified as protozoans. Research then suggested that Polypodium hydriforme, a parasite within the egg cells of sturgeon, is closely related to the Myxozoa, and that both Polypodium and the Myxozoa were intermediate between cnidarians and bilaterian animals. More recent research demonstrated that the previous identification of bilaterian genes reflected contamination of the myxozoan samples by material from their host organism; they are now firmly identified as heavily derived cnidarians, more closely related to Hydrozoa and Scyphozoa than to Anthozoa. Some researchers classify the extinct conulariids as cnidarians, while others propose that they form a completely separate phylum.
Current classification according to the World Register of Marine Species, with representative examples: Ceriantharia (the tube anemone Cerianthus filiformis); Hexacorallia, including the sea anemones (Actiniaria) and stony corals such as Acropora muricata (Scleractinia); Octocorallia, including the sea fan Gorgonia ventalina (Alcyonacea); Cubozoa (the box jellyfish Carybdea branchi); Hydrozoa (the siphonophore Physalia physalis); Myxozoa (Myxobolus cerebralis); Polypodiozoa (Polypodium hydriforme); Scyphozoa (the jellyfish Phyllorhiza punctata); and Staurozoa (the stalked jelly Haliclystus antarcticus). Many cnidarians are limited to shallow waters because they depend on endosymbiotic algae for much of their nutrients. The life cycles of most have polyp stages, which are limited to locations that offer stable substrates. Nevertheless, major cnidarian groups contain species that have escaped these limitations. Hydrozoans have a worldwide range: some, such as Hydra, live in freshwater; Obelia appears in the coastal waters of all the oceans; and Liriope can form large shoals near the surface in mid-ocean. Among anthozoans, a few scleractinian corals, sea pens and sea fans live in deep, cold waters, and some sea anemones inhabit polar seabeds while others live near hydrothermal vents over 10 km (6.2 mi) below sea level. Reef-building corals are limited to tropical seas between 30°N and 30°S, with a maximum depth of 46 m (151 ft), temperatures between 20 °C (68 °F) and 28 °C (82 °F), high salinity and low carbon dioxide levels. Stauromedusae, although usually classified as jellyfish, are stalked, sessile animals that live in cool to Arctic waters. Cnidarians range in size from a mere handful of cells in the myxozoans, through Hydra's 5–20 mm (0.20–0.79 in) length, to the lion's mane jellyfish, which may exceed 2 m (6.6 ft) in diameter and 75 m (246 ft) in length. Prey of cnidarians ranges from plankton to animals several times larger than themselves. Some cnidarians are parasites, mainly on jellyfish, but a few are major pests of fish.
Others obtain most of their nourishment from endosymbiotic algae or dissolved nutrients. Predators of cnidarians include: sea slugs, which can incorporate nematocysts into their own bodies for self-defense; starfish, notably the crown-of-thorns starfish, which can devastate corals; butterfly fish and parrot fish, which eat corals; and marine turtles, which eat jellyfish. Some sea anemones and jellyfish have a symbiotic relationship with certain fish; for example, clown fish live among the tentacles of sea anemones, and each partner protects the other against predators. Coral reefs form some of the world's most productive ecosystems. Common coral reef cnidarians include both anthozoans (hard corals, octocorals, anemones) and hydrozoans (fire corals, lace corals). The endosymbiotic algae of many cnidarian species are very effective primary producers – in other words, converters of inorganic chemicals into organic ones that other organisms can use – and their coral hosts use these organic chemicals very efficiently. In addition, reefs provide complex and varied habitats that support a wide range of other organisms. Fringing reefs just below low-tide level also have a mutually beneficial relationship with mangrove forests at high-tide level and seagrass meadows in between: the reefs protect the mangroves and seagrass from strong currents and waves that would damage them or erode the sediments in which they are rooted, while the mangroves and seagrass protect the coral from large influxes of silt, fresh water and pollutants. This additional level of variety in the environment is beneficial to many types of coral reef animals, which for example may feed in the sea grass and use the reefs for protection or breeding. The earliest widely accepted animal fossils are rather modern-looking cnidarians, possibly from around 580 million years ago, although fossils from the Doushantuo Formation can only be dated approximately.
The identification of some of these as embryos of animals has been contested, but other fossils from these rocks strongly resemble tubes and other mineralized structures made by corals. Their presence implies that the cnidarian and bilaterian lineages had already diverged. Although the Ediacaran fossil Charnia used to be classified as a jellyfish or sea pen, more recent study of growth patterns in Charnia and modern cnidarians has cast doubt on this hypothesis, leaving the Canadian polyp Haootia as the only bona fide cnidarian body fossil from the Ediacaran. Few fossils of cnidarians without mineralized skeletons are known from more recent rocks, except in lagerstätten that preserved soft-bodied animals. A few mineralized fossils that resemble corals have been found in rocks from the Cambrian period, and corals diversified in the Early Ordovician. These corals, which were wiped out in the Permian–Triassic extinction about 251 million years ago, did not dominate reef construction, since sponges and algae also played a major part. During the Mesozoic era rudist bivalves were the main reef-builders, but they were wiped out in the Cretaceous–Paleogene extinction event 66 million years ago, and since then the main reef-builders have been scleractinian corals. A simplified family tree of the animals compared here runs: glass sponges, demosponges and calcareous sponges; Ctenophora (comb jellies); the cnidarian groups Anthozoa, Myxozoa, Hydrozoa, Cubozoa, Staurozoa and "Scyphozoa" (jellyfish, excluding Staurozoa); Placozoa; and Bilateria. It is difficult to reconstruct the early stages in the evolutionary "family tree" of animals using only morphology (their shapes and structures), because the large differences between Porifera (sponges), Cnidaria plus Ctenophora (comb jellies), Placozoa and Bilateria (all the more complex animals) make comparisons difficult.
Hence reconstructions now rely largely or entirely on molecular phylogenetics, which groups organisms according to similarities and differences in their biochemistry, usually in their DNA or RNA. It is now generally thought that the Calcarea (sponges with calcium carbonate spicules) are more closely related to Cnidaria, Ctenophora (comb jellies) and Bilateria (all the more complex animals) than they are to the other groups of sponges. In 1866 it was proposed that Cnidaria and Ctenophora were more closely related to each other than to Bilateria and formed a group called Coelenterata ("hollow guts"), because Cnidaria and Ctenophora both rely on the flow of water in and out of a single cavity for feeding, excretion and respiration. In 1881, it was proposed that Ctenophora and Bilateria were more closely related to each other, since they shared features that Cnidaria lack, for example muscles in the middle layer (mesoglea in Ctenophora, mesoderm in Bilateria). However, more recent analyses indicate that these similarities are rather vague, and the current view, based on molecular phylogenetics, is that Cnidaria and Bilateria are more closely related to each other than either is to Ctenophora. This grouping of Cnidaria and Bilateria has been labelled "Planulozoa", because it suggests that the earliest Bilateria were similar to the planula larvae of Cnidaria. Within the Cnidaria, the Anthozoa (sea anemones and corals) are regarded as the sister group of the rest, which suggests that the earliest cnidarians were sessile polyps with no medusa stage. However, it is unclear how the other groups acquired the medusa stage, since Hydrozoa form medusae by budding from the side of the polyp while the other Medusozoa do so by splitting them off from the tip of the polyp. The traditional grouping of Scyphozoa included the Staurozoa, but morphology and molecular phylogenetics indicate that Staurozoa are more closely related to Cubozoa (box jellies) than to other "Scyphozoa".
Similarities in the double body walls of Staurozoa and the extinct Conulariida suggest that they are closely related. The position of Anthozoa nearest the beginning of the cnidarian family tree also implies that Anthozoa are the cnidarians most closely related to Bilateria, and this is supported by the fact that Anthozoa and Bilateria share some genes that determine the main axes of the body. However, in 2005 Katja Seipel and Volker Schmid suggested that cnidarians and ctenophores are simplified descendants of triploblastic animals, since ctenophores and the medusa stage of some cnidarians have striated muscle, which in bilaterians arises from the mesoderm. They did not commit themselves on whether bilaterians evolved from early cnidarians or from the hypothesized triploblastic ancestors of cnidarians. In molecular phylogenetic analyses from 2005 onwards, important groups of developmental genes show the same variety in cnidarians as in chordates. In fact cnidarians, and especially anthozoans (sea anemones and corals), retain some genes that are present in bacteria, protists, plants and fungi but not in bilaterians. The mitochondrial genome in the medusozoan cnidarians, unlike those of other animals, is linear with fragmented genes; the reason for this difference is unknown. Jellyfish stings killed about 1,500 people in the 20th century, and cubozoans are particularly dangerous. On the other hand, some large jellyfish are considered a delicacy in East and Southeast Asia. Coral reefs have long been economically important as providers of fishing grounds, protectors of shore buildings against currents and tides, and more recently as centers of tourism. However, they are vulnerable to over-fishing, mining for construction materials, pollution, and damage caused by tourism. Beaches protected from tides and storms by coral reefs are often the best places for housing in tropical countries.
Reefs are an important food source for low-technology fishing, both on the reefs themselves and in the adjacent seas. However, despite their great productivity, reefs are vulnerable to over-fishing, because much of the organic carbon they produce is exhaled as carbon dioxide by organisms at the middle levels of the food chain and never reaches the larger species that are of interest to fishermen. Tourism centered on reefs provides much of the income of some tropical islands, attracting photographers, divers and sports fishermen. However, human activities damage reefs in several ways: mining for construction materials; pollution, including large influxes of fresh water from storm drains; commercial fishing, including the use of dynamite to stun fish and the capture of young fish for aquariums; and tourist damage caused by boat anchors and the cumulative effect of walking on the reefs. Coral, mainly from the Pacific Ocean, has long been used in jewellery, and demand rose sharply in the 1980s. Some large jellyfish species of the order Rhizostomae are commonly consumed in Japan, Korea and Southeast Asia. In parts of the range, the fishing industry is restricted to daylight hours and calm conditions in two short seasons, from March to May and August to November. The commercial value of jellyfish food products depends on the skill with which they are prepared, and "jellyfish masters" guard their trade secrets carefully. Jellyfish is very low in cholesterol and sugars, but cheap preparation can introduce undesirable amounts of heavy metals. The "sea wasp" Chironex fleckeri has been described as the world's most venomous jellyfish and is held responsible for 67 deaths, although it is difficult to identify the animal as it is almost transparent. Most stings by C. fleckeri cause only mild symptoms. Seven other box jellies can cause a set of symptoms called Irukandji syndrome, which takes about 30 minutes to develop, and from a few hours to two weeks to disappear.
Hospital treatment is usually required, and there have been a few deaths. A number of myxozoans are commercially important pathogens in salmonid aquaculture.
who scored the last winning goal for england against germany
England–Germany football rivalry - wikipedia The England–Germany football rivalry is considered to be mainly an English phenomenon: in the run-up to any competition match between the two teams, many UK newspapers will print articles detailing the results of previous encounters, such as those in 1966 and 1990. Football fans in England often consider Germany to be their main sporting rivals and care more about this rivalry than those with other nations, such as Argentina or Scotland. Most German fans consider the Netherlands or Italy to be their traditional footballing rivals, and as such the rivalry is usually not taken quite as seriously there as it is in England. The English and German national football teams have played each other since the end of the 19th century, and officially since 1930. The teams met for the first time in November 1899, when England beat Germany in four straight matches. Notable matches between England and Germany (or West Germany) include the 1966 FIFA World Cup Final and the semi-finals of the 1990 FIFA World Cup and UEFA Euro 1996. As of 2016, Germany has won four World Cups and three European Championships, and has played in a total of fourteen finals in those two tournaments. England has won one World Cup, in the only final they ever reached in either tournament. The most recent encounter ended as a draw, the two sides drawing 0–0 in a friendly at Wembley Stadium. "Football is a simple game; 22 men chase a ball for 90 minutes and at the end, the Germans win." (Gary Lineker) In this article, references to the German football team include the former West Germany team before German reunification. The Football Association instigated a four-game tour of Germany and Austria by a representative England team in November 1899. The England team played a representative German team in Berlin on 23 November 1899. The German side lost 13–2. Two days later a slightly altered German side lost 10–2.
The third and fourth matches were played in Prague and Karlsruhe against a combined Austrian and German side, and England won 6–0 and 7–0. Those games cannot be considered "official" because the German federation (DFB) was not founded until 28 January 1900. The first full international between the two teams was a friendly match played on Saturday 10 May 1930, in Berlin. England were 1–0 and 2–1 up in the game, but after losing a player to injury went 3–2 behind, before a late goal from David Jack brought the score to 3–3, which was how the game finished. The next match between the two teams was played on 4 December 1935, at White Hart Lane in London, the first full international to take place between the teams in England and the first since the rise to power of Hitler and the Nazis in 1933. It was also the first match to stir up particular controversy, as The Observer newspaper reported protests by the British Trades Union Congress that the game could be used as a propaganda event by the Nazi regime: "No recent sporting event has been treated with such high seriousness in Germany as this match... Between 7,500 and 8,000 Germans will travel via Dover, and special trains will bring them to London. A description broadcast throughout Germany... Sir Walter Citrine, General Secretary of the TUC, in a further letter to Sir John Simon, the Home Secretary, said that 'such a large and carefully organised Nazi contingent coming to London might confirm the impression among people in this country that the event is being regarded as of some political importance by the visitors'." Of the match itself, however, which England won 3–0, the same newspaper reported the following week: "So chivalrous in heart and so fair in tackling were the English and German teams who played at Tottenham in mid-week that even the oldest of veterans failed to recall an international engagement played with such good manners by everybody.
" The next game between the two teams, and the last to be played before the Second World War, was again in Germany: a friendly at the Olympic Stadium in Berlin on 14 May 1938, played in front of a crowd of 110,000 people. It was the last time England played a unified German team until the 1990s. This was the most controversial of all the early encounters between the two teams, as before kick-off the English players were ordered by the Foreign Office to line up and perform a Nazi salute in respect to their hosts. How compliant the players were with this situation has been a matter of debate, with a feature in The Observer in 2001 speculating that they were "perhaps merely indifferent players (who had undoubtedly become more reluctant, to the point of mutiny, by the time the post-war memoirs were published)". A BBC News Online report published in 2003 reported that the salute was calculated to show "that Germany, which two months earlier had annexed Austria, was not a pariah state. The friendly game effectively helped clear the way for Chamberlain's 'Peace for our time' deal with Hitler, which, in turn, led to Germany's invasion of Czechoslovakia." England won the match 6–3, but according to German writer Ulrich Linder, author of the book Strikers for Hitler: "To lose to England at the time was nothing unusual because basically everybody lost to (them) at the time. For Hitler the propaganda effect of that game was more important than anything else." The two countries did not meet again on a football pitch for sixteen years. Two German states had been founded in 1949, with the Germany national football team continuing its tradition, based in the Federal Republic of Germany (West Germany) from 1949 to 1990. The German Democratic Republic (East Germany) fielded a separate national football team; although the English did play some matches against them, that rivalry never developed the same edge or high profile.
In a friendly at Wembley Stadium on 1 December 1954, England won 3–1 against an under-strength West German side, who were at the time the champions of the world, having won the 1954 FIFA World Cup. England won further friendlies against West Germany in 1956 (3–1 at the Olympic Stadium in Berlin) and 1965 (1–0 in Nuremberg). England and West Germany met at Wembley again on 23 February 1966, as part of their preparations for the 1966 FIFA World Cup, which was to be held in England. England again won 1–0, with a goal from Nobby Stiles, and the match also saw the first England appearance of West Ham United striker Geoff Hurst. Both countries had a successful World Cup in 1966, and met in the final, played at Wembley on Saturday, 30 July 1966. This was, and still is, regarded by many as the most important match ever played between the two teams; it was also the first time they had ever met in a competitive game, as opposed to the friendly matches they had played before. It was also a highly eventful and in some respects controversial game, which created the modern rivalry between the teams. England led 2–1 until the very end of normal time, when a German goal levelled the scores and took the match into extra time. In the first period of extra time, England striker Geoff Hurst had a shot on goal which bounced down from the crossbar and then out of the goal, before being cleared away by the German defenders. The England players celebrated a goal, but the referee was unsure as to whether or not the ball had crossed the line when it hit the ground. After consulting with a linesman, Tofiq Bahramov, the referee awarded a goal to England. Bahramov, from the USSR, became famous and celebrated in English popular culture as "the Russian linesman", although he was actually from Azerbaijan.
When England played the Azerbaijan national team in a World Cup qualifier in October 2004 – in a stadium named after Bahramov – many England fans travelling to the game asked to be shown the grave of the official, who had died in 1993, so that they could place flowers on it, and before the game a ceremony honouring him was attended by Hurst and other footballing celebrities. Germany, however, did not believe that the ball had crossed the line, with commentators such as Robert Becker of Kicker magazine accusing the linesman of bias because the German team had eliminated the USSR in the semi-final. Modern studies using film analysis and computer simulation have suggested the ball never crossed the line: both Duncan Gillies of the Visual Information Processing Group at Imperial College London and Ian Reid and Andrew Zisserman of the Department of Engineering Science at the University of Oxford agree that the ball would have needed to travel a further 2.5–6 cm to fully cross the line, and that therefore this was not a fair goal. In Germany the incident led to the creation of the expression "Wembley-Tor" ("Wembley goal"), a phrase used to describe any goal scored in a similar fashion to Hurst's. England, however, scored another controversial goal at the end of extra time, winning 4–2. This goal came after fans began to spill onto the field thinking the game was over, which should have stopped play. The goal, a third for Hurst (making him the only man ever to score a hat-trick in a World Cup final), was described by BBC Television commentator Kenneth Wolstenholme in a now-famous piece of commentary – "They think it's all over... it is now!" – referring to the English fans who had spilled onto the field. The expression has become a celebrated part of English popular culture, indelibly linked with the game in the minds of the English public. The 1966 final's influence on the culture surrounding the England team would not end there, however.
Despite playing on their home soil, England wore their away kit of red shirts, white shorts and red socks, and since then England fans have had a special affinity for their team's away kit, with retro 1966 shirts selling well in recent years. The game is often held up as the height of English sporting achievement, but it also created some less favourable legacies; a common chant among England supporters at Germany games is "Two World Wars and One World Cup", to the tune of "Camptown Races". Two years after the World Cup, on 1 June 1968, the two teams met again in another friendly match, this time in West Germany, in which the Germans won their first victory over an English team, 38 years after the sides had first played. The scoreline was 1–0, Franz Beckenbauer scoring for West Germany, but as Hugh McIlvanney wrote in his match report for The Observer: "Comparing this miserable hour and a half (in which fouls far outnumbered examples of creative football) with the last great meeting between the countries is entirely fatuous. But that will not prevent the Germans from doing it. Their celebrations will not be inhibited by the knowledge that today's losers were almost a reserve team, and even the agonies of boredom they shared with us will now seem worthwhile. They have beaten England, and that is enough." Far more noted and remembered, however, was the next competitive meeting between the two teams, in the quarter-finals of the 1970 FIFA World Cup in Mexico. England were 2–0 up, but Beckenbauer and Uwe Seeler equalised at 2–2 in the second half. In extra time, Geoff Hurst had a goal mysteriously ruled out, and then Gerd Müller scored to win the game 3–2 for West Germany. England had been weakened by losing their goalkeeper Gordon Banks to illness, and also substituted Bobby Charlton, one of their leading players, while the Germans were in the midst of their comeback.
As McIlvanney put it when reflecting on the loss five days later, "Sir Alf Ramsey's team are out because the best goalkeeper most people have ever seen turned sick, and one who is only slightly less gifted was overwhelmed by the suddenness of his promotion. In sport disaster often feeds upon itself but this was a sickeningly gluttonous example." The result was psychologically damaging for English morale; as The Guardian newspaper described in a 2006 feature: "Four days later Harold Wilson blamed Labour's loss in the general election on the defeat. This marked the start of two decades of German footballing dominance and England's decline." Two years later the teams met once more, in the quarter-finals of the European Championship, which were at the time held on a home-and-away basis. England lost 3–1 at Wembley on 29 April 1972 in the home leg, and on 13 May could only draw 0–0 in West Germany, being knocked out of the competition. Said The Observer in 2001: "England may have been robbed of the chance in Mexico... but there were no shortage of excuses – the heat, the hostile crowd, the food which had felled Banks, the errors of Bonetti... It was a conspiracy of fate more than a footballing defeat. In 1972, there were no excuses at all. West Germany did not just knock England out of the European Championships, they came to Wembley and comprehensively outclassed England." McIlvanney wrote in his match report for The Observer: "No Englishman can ever again warm himself with the old assumption that, on the football field if nowhere else, the Germans are an inferior race." There were several friendly games played in the 1970s and 1980s, with wins for both nations, but the next competitive match – a second-round group game at the 1982 FIFA World Cup – ended in a disappointing 0–0 draw. England were later eliminated from that competition after drawing 0–0 with Spain.
However, when the teams next met competitively, at the 1990 FIFA World Cup, it was a rather more dramatic and eventful clash, in the semi-finals, the first time England had reached that stage of the competition since their win in 1966. In the summer of 1990 the process of German reunification was far advanced, with the Deutsche Mark being introduced in the East two days before the semi-final on 3 July. Unlike in previous decades, East German fans could openly support the DFB's German team, which by then had a tradition of more than 80 years. The England team had started the tournament poorly and had not been expected to reach that stage of the competition, but in the game they were able to match the stronger German side, managed by Franz Beckenbauer. The Germans took the lead in the 59th minute when a free kick from Andreas Brehme deflected off Paul Parker and over goalkeeper Peter Shilton. Gary Lineker equalised in the 80th minute, and then David Platt had a goal ruled out in extra time. The result was thus decided by a penalty shoot-out – the England team's first – which West Germany won 4–3 after misses from Stuart Pearce and Chris Waddle. West Germany went on to beat Argentina in the final. The match stayed heavily in the English popular consciousness, not simply for the football and the dramatic manner of the defeat, but also for the reaction of star player Paul Gascoigne to receiving a yellow card. It was his second of the tournament, meaning he would be suspended for the final should England reach it, a realisation that prompted him to burst into tears on the pitch. Said The Observer in 2004: "There are half a dozen images that define this decade of change, which help to show why football widened its appeal. First, and most important, is the sight of Paul Gascoigne crying into his England shirt after being booked in the 1990 World Cup semi-final against West Germany.
Unaggressive and emotional, a billboard image that helped to start an apparently unstoppable surge in popularity for the national team." Despite this rehabilitation of the image of football, aided by the English national team's success in the 1990 tournament, the narrow defeat by Germany helped to increase the antipathy felt towards the German team and the German nation in general. Mark Perryman wrote in 2006: "How could we expect to beat mighty (West) Germany, who had only narrowly lost the final four years previously? To my mind it is the fact that we so nearly did, then lost in the penalty shoot-out, that explains the past 16 years of an increasingly bitter rivalry." Germany was reunified in October 1990. For the DFB team, few things changed apart from players previously capped for East Germany becoming eligible for the united German team. This made little difference to the tone and emotion of the rivalry. England's first match against the unified Germany since 1938 was a friendly in 1991 at Wembley, which the Germans won 1–0. Five years later, at the 1996 European Championship, England played a unified German team for the first time in a competitive fixture, when they met in the semi-finals. Like the 1966 World Cup, the tournament was being held in England, and the semi-final was played at Wembley Stadium. England's fans and team were confident, particularly after wins in the group stage over Scotland (2–0) and the Netherlands (4–1) and their first ever penalty shoot-out victory, over Spain, in the quarter-finals. So vivid were the memories of 1966 for England fans that a media clamour ensued for England to wear red jerseys instead of the unfamiliar-looking grey away kit that had been launched earlier that year (as England had not submitted details of any red kit to UEFA before the tournament, this was never going to be permitted, and England did wear grey).
The build - up to the game was soured, however, by headlines in English tabloid newspapers which were regarded by many as overly nationalistic, and even racist in tone, as they had also been before the previous match against Spain. Particularly controversial was the Daily Mirror 's headline "Achtung! Surrender! For You Fritz, ze Euro 96 Championship is over '', accompanied by a mock article aping a report of the declaration of war between the two nations in 1939. The editor of the paper, Piers Morgan, subsequently apologised for the headline, particularly as it was at least partially blamed for violence following England 's defeat, including a riot in Trafalgar Square. England took the lead in only the third minute, through tournament top scorer Alan Shearer, but in the 16th minute Stefan Kuntz equalised, and despite many close shots and a disallowed goal from the Germans, the score remained level at 1 -- 1 until the end of extra time. The match was settled by another penalty shoot - out, as in 1990, and although this time all five of England 's initial penalty - takers were successful, so were all five German players. The shoot - out carried on to "sudden death '' kicks, with Gareth Southgate missing for England and Andreas Möller scoring for Germany to put the hosts out. As in 1990, Germany went on to win the tournament. England and Germany were drawn to meet each other in the first round group stage of the 2000 European Championship, held jointly by Belgium and the Netherlands, with the England -- Germany game taking place in Charleroi in Belgium. Before the game, played on 17 June 2000, there were violent incidents involving England fans in the town centre, although these were mostly brief and there were no violent confrontations with German fans. Nonetheless, reporting of the violence did to a degree overshadow the match result in some media coverage. 
The match itself was a scrappy affair that lacked the drama of many of the previous encounters, with England sneaking a 1 -- 0 win thanks to a second - half header by striker Alan Shearer. There was enthusiastic celebration of this result in England, particularly as this was the first time that England had won a competitive match against Germany since the 1966 World Cup final. The German reaction was more pessimistic. Rounding up the German media coverage, The Guardian reported: "' 0 -- 1! Germany weeps. Is it all over? ' asked the mass circulation Bild newspaper in a front - page banner headline. ' Shearer tells us to pack our bags, ' wrote Berlin 's Der Tagesspiegel. '' In the event, both England and Germany lost their final group matches and both were knocked out in the first round, finishing third and fourth respectively in their group. Before the 2000 European Championship, England and Germany had already been drawn together in the same qualifying group for the 2002 FIFA World Cup. England 's home match against Germany was played on Saturday 7 October 2000, and was significant as it was the last international fixture ever to be played at the old Wembley Stadium, before it was demolished and rebuilt. England lost 1 -- 0 to a German free kick scored by Dietmar Hamann. "It was the last refuge of the inadequate. Half - time neared, England were a goal down and a sizeable section of the crowd sullied the ever - dampening occasion. ' Stand up if you won the War, ' they sang '', wrote journalist Ian Ridley in his match report for The Observer. The result prompted the immediate resignation of England manager Kevin Keegan, and by the time the return match was played at the Olympic Stadium in Munich on 1 September 2001, England were now managed by their first ever foreign coach, Sven - Göran Eriksson. 
Expectations on the English side were low, but they surprisingly won the game 5 -- 1 with a hat - trick from striker Michael Owen, and eventually qualified for the World Cup as the winners of their group. During the game the father of German coach Rudi Völler suffered a heart attack inside the stadium, but was successfully resuscitated. Some Germans were shocked by the scale of the defeat, with former striker Karl - Heinz Rummenigge stating that "I have never seen such a terrible defeat... This is a new Waterloo for us. '' At the 2002 World Cup finals in Japan and South Korea, it was Germany who enjoyed more success, finishing second. England only reached the quarter - finals. Both teams were defeated by the competition winners, Brazil. The two teams did not meet in the next major contests, UEFA Euro 2004 and the 2006 FIFA World Cup (England avoided a showdown with Germany in the last 16 by holding Sweden to a draw and finishing at the top of their group), and England did not qualify for Euro 2008. England and Germany next played on 22 August 2007, in a friendly at the newly rebuilt Wembley Stadium. England lost the match 2 -- 1, their first defeat at the new Wembley. Then in an international friendly held on 19 November 2008, England inflicted Germany 's first defeat in Berlin in thirty - five years with a 2 -- 1 victory. In the 2010 FIFA World Cup, the two teams met in the second round on Sunday, 27 June, after Germany won Group D and England finished second in Group C. Germany won the match 4 -- 1, knocking England out and advancing into the quarter - finals. This was the heaviest defeat England had ever suffered in their World Cup history. In the 38th minute, with Germany leading only 2 -- 1, a shot by Frank Lampard controversially bounced off the crossbar well over the goal line and back out again. However, neither the referee Jorge Larrionda nor the linesman saw it pass over the line. 
The decision drew immediate comparisons with Geoff Hurst 's goal during the 1966 World Cup Final. However, in the 2010 case there was no dispute about whether the ball had crossed the goal line, because the ball had clearly touched the grass well within the goal, and the television replay immediately showed this. The German women 's league is considered one of the strongest in the world; by the end of the 2013 -- 14 season, German clubs had won eight of a possible 13 Champions League titles. England 's Arsenal won the title in 2007 and is so far the only club to win a European club title with both its men 's and women 's teams. The German women 's team is more popular in Germany than the English women 's team is in England. Germany matches are televised on national television and attract millions of viewers. The World Cup 2011 quarterfinal between Germany and Japan attracted over 17 million viewers, while England women 's matches struggle to even make it into television schedules. England 's group games in the World Cup 2011 were watched by up to four million viewers on German television, but less than a million on the BBC, meaning that even with no German involvement, England games are at this point more popular in Germany than in the country the England team actually represents. Observers do not attribute this gap in popularity to a lack of gender equality in England, but to the simple fact that the German women 's team is far more successful, and therefore women 's football is the focus of media coverage in Germany. As of 2012, of nineteen matches between the two teams, Germany had won seventeen and two were drawn; England 's side had yet to win. Consequently, England has not won a major title, their best results being runner - up finishes at the women 's Euro 1984 and Euro 2009. 
Meanwhile, Germany 's women have won two World Cups, 2003 and 2007, and a total of eight European Championships, in 1989, 1991, 1995, 1997, 2001, 2005, 2009 and 2013. Germany is the only nation to have won the FIFA World Cup with both its men 's and women 's teams. Together with the three Euro wins and the four World Cup wins of the men 's team, Germany counts 17 major tournament titles, while England has one major tournament title so far. On 4 July 2015, England upset Germany 1 -- 0 in the third - place match at the 2015 FIFA Women 's World Cup; this was their first ever victory against Germany in 21 matches. Since World War II, Britain has considered itself a rival to Germany in many areas, such as automobile production, naval forces, trade and economy -- this rivalry has also permeated into football. English football fans often deem Germany to be their traditional football rival and care more about this rivalry than those with other countries, such as Scotland or Argentina. In the run - up to any football match against Germany, many English tabloids publish articles that contain references to the Second World War, referring to the opposition with derogatory terms such as "krauts '' or "hun ''. Two days before the UEFA Euro 1996 semifinal, The Daily Mirror published an article on its front page that ran with the headline "Achtung! Surrender! '': another reference to the war. After the 5 -- 1 victory over Germany in 2001, the English news media were ecstatic. The Sunday Mirror drew more comparisons to World War II by running an article about the game on the front page under the headline "BLITZED ''. Similarly, the News of the World also referenced World War II in its front - page headline, which read "Don't mention the score! '' In January 2010, the British tabloid The Daily Star compared Germany 's new away kit to "Nazi - style black shirts '', warning that they would "conjure up memories of the notorious SS ''. 
England 's defeat of Germany in the 1966 World Cup has often been voted by the English as their greatest ever sporting moment, and the 5 -- 1 victory in 2001 has also regularly placed highly. Manchester United 's defeat of Germany 's Bayern Munich in the 1999 UEFA Champions League Final is also regarded by English football fans as a high point in their perceived rivalry. The rivalry has also made its way into various aspects of English popular culture. For example, in the BBC television series Whatever Happened to the Likely Lads?, the character Terry remarks that 14 June 1970, the day that England lost 3 -- 2 to West Germany, should be "indelibly printed on every true Englishman 's mind ''. As far back as the 1960s, the footballing rivalry between England and Germany has been considered mainly an English phenomenon; this has been observed by several commentators of both English and German origin. In June 2009, British comedian Stephen Fry stated on the BBC show QI that, unlike the English, German football fans do not care about their team 's loss in the 1966 World Cup final and may not even remember that they had made it that far. Instead, German fans consider their rivalry with the Netherlands to be their traditional footballing rivalry and care more about the matches against them, such as the 1974 FIFA World Cup final. Following their 5 -- 1 loss in 2001, many German fans were not particularly concerned, instead revelling in the Netherlands ' defeat by the Republic of Ireland the same day. Some sang directly after the loss to England: "We 're going to the World Cup without Holland! '' In 2010, in the lead - up to the 2010 World Cup match, journalist Marina Hyde remarked in The Guardian that the rivalry between the England and Germany football teams was "quite obviously an illusion, existing only in the minds of those wishful to the point of insanity -- which is to say, the English ''. 
She added: "In a world that has changed bewilderingly in recent decades, England losing to Germany in major tournaments is one of the few certainties. '' Similarly, professor Peter J. Beck described Germany 's ambivalence to the rivalry, saying that "as far as the Germans are concerned, Sunday 's game is nothing more than another sporting contest ''. However, it would of course also be false to say that there is no rivalry at all between Germany and England; for one thing, the very fact that the English perceive it as such can not go unnoticed, and for another, there is the long - standing quarrel about the "Wembley goal '' (only somewhat silenced since a clear goal was not awarded to the English in 2010). Germany vs. England matches, even friendlies, are always considered highly important sporting events (though the tradition and usually the quality of both teams may account for most of that), to the extent that a popular radio play series mocks people in love as "looking deep into each other 's eyes even if a Germany vs. England match is on TV ''. However, it is clear that any feeling of rivalry towards England, if existent, is entirely dwarfed by the German - Dutch rivalry. Overview: Euro and World Cup matchups include qualifiers; PSO = penalty shoot - outs. Note: Since 1908, Germany has been represented by the German Football Association (DFB), which fields the Germany national football team. During the German division (1949 -- 1990), the team of the German Football Association, based in Frankfurt in the Federal Republic of Germany, was colloquially called West Germany, and different flags were used. England also played four friendly matches against the East Germany national football team, which was fielded by the DFV in the German Democratic Republic, which existed from 1949 to 1990. As well as the rivalry between the national sides, English and German club teams have also met on numerous occasions in the various European club competitions. 
19 May 2012 saw the most recent encounter between two top - level clubs from both nations, when Bayern Munich met Chelsea in the 2012 UEFA Champions League Final. Having recently missed out on the Bundesliga title to their rivals Borussia Dortmund, Bayern Munich suffered defeat at the Allianz Arena, in a game dubbed "Finale dahoam '' (Bavarian for "final at home '') as it marked the second time that any team had played the tournament 's final at their home ground. The game ended as a 1 -- 1 draw after extra time before being decided 4 -- 3 on penalties. Bayern Munich seemed the more dominant of the two sides throughout, but a much - criticised Chelsea defence "parked the bus '', preventing many chances, which eventually led to Chelsea 's first Champions League win. Perhaps the most noteworthy encounter was the 1999 UEFA Champions League Final between Manchester United and Bayern Munich, during which the English club were trailing 1 -- 0 until injury time, then scored two goals to win 2 -- 1. This result was celebrated by many in England who were not United fans as being another English victory over Germany. Another memorable match was the controversial 1975 European Cup Final, in which Bayern beat Leeds United after the latter had penalty claims turned down by a French referee, who also disallowed a goal scored by Peter Lorimer with a shot from outside the area. Leeds would eventually eliminate a German team (VfB Stuttgart) in unexpected and bizarre circumstances. In the first round of the 1992 -- 93 competition, after the Germans had qualified on the away goals rule, the return leg was awarded 3 -- 0 to Leeds United by UEFA because Stuttgart had fielded an extra foreigner, infringing the European competition rules that were in place at the time. A replay was ordered as the aggregate stood at 3 -- 3. Leeds won the replay at Barcelona 's Camp Nou 2 -- 1. 
In 2000, a young and depleted Leeds United side, managed by David O'Leary, eliminated 1860 Munich from the Champions League, beating them home and away in the preliminary round before reaching the semi-final. There were also famous wins by Liverpool, Nottingham Forest and Aston Villa in European Cup semifinals or finals, against the likes of Borussia Mönchengladbach, 1. FC Köln, Hamburg and FC Bayern Munich. Liverpool 's win against Borussia Mönchengladbach in Rome stands out for one special reason: it started a sequence of six consecutive English European Cup victories, each involving the elimination of a German club in the latter stages. The English hold the upper hand in club football encounters, although there were notable German wins, such as Bayern 's revenge over Manchester United F.C. in 2001, winning home and away, and Bayer 04 Leverkusen 's elimination of Liverpool (a rarity for German club sides) and Manchester United in 2002, after they had received a 4 -- 1 drubbing at Arsenal (the Gunners -- who boast the best English record against Italian sides in the three European competitions -- have an unimpressive record against German opposition) in the second group phase. Both English sides exacted revenge over Leverkusen in subsequent Champions League encounters. Borussia Dortmund beat Manchester United 1 -- 0 both home and away in the semifinal of the 1996 -- 97 UEFA Champions League, which they went on to win, United having squandered numerous chances in both legs, especially in the return leg at Old Trafford. English club victories were often celebrated in a manner which evoked memories of the War. The outspoken Brian Clough is on record boasting that he never lost to a German side and that he took satisfaction from this for what the Germans had done to his father during the war. 
Clough memorably led Nottingham Forest to a 1 -- 0 win in Cologne following a spectacular 3 -- 3 draw at the City Ground in the 1979 semifinal, en route to Forest winning their first European Cup. The following year, a Forest side minus star player Trevor Francis defeated Hamburg in the final by employing an Italian - style catenaccio based on dogged defence and brilliant goalkeeping by Peter Shilton. One other famous manager who never tasted defeat against the Germans was Bob Paisley, who led Liverpool to three of their five European Cup wins and one of their two UEFA Cup wins. Liverpool have a tremendous record against German opposition from both sides of the East - West divide, and once famously hit 1860 Munich 8 -- 0 in an old Fairs Cup game, a treatment meted out to Hamburg (6 -- 0) when winning the first of their three European Super Cups, the second of which was also won against German opposition in the form of FC Bayern Munich. Liverpool 's encounters with Bayern and Borussia Mönchengladbach (known in Germany as the Gladbacher), the latter a force to be reckoned with in the 70s, are memorable. Bayern and Liverpool first met in the Fairs Cup (the forerunner to the UEFA Cup) in 1970 -- 71. Bayern had hit Coventry City for six in a previous round. Liverpool won the first leg 3 -- 0 with an Alun Evans hat - trick and drew 1 -- 1 in Munich. This was the Bayern team of Franz Beckenbauer, Maier, Gerd Müller, Schwarzenbeck and Breitner, who turned the tables on Liverpool the following year in a UEFA Cup Winners ' Cup second round tie, drawing at Anfield and winning 3 -- 1 at home. The most important encounter between the two sides was in the European Cup semi-final of 1981, when a depleted Liverpool were held to a goalless draw at Anfield and then drew 1 -- 1 in Munich. 
They scored in the 83rd minute with a Ray Kennedy goal at the Olympia Stadion in Munich before Karl - Heinz Rummenigge equalised in the 88th minute to preserve Bayern 's then unbeaten home record against English opposition, even though Liverpool went through to win their third European Cup final. The two sides met again in the 2001 UEFA Super Cup when Liverpool, managed by Gérard Houllier, stormed to a three - goal lead before Bayern scored twice towards the end to make the score more respectable. Apart from the 1977 European Cup final, Liverpool beat Mönchengladbach, who had been eliminated on penalties by the other Mersey side, Everton, in the 1970 -- 1971 European Cup competition, in the 1973 UEFA Cup Final and the 1978 European Cup semi-final. The great Günter Netzer, now a pundit on German television, and midfield forager, Herbert Wimmer, played for Mönchengladbach in the encounters with Everton and the 1973 Cup final against Liverpool, then managed by Bill Shankly. That year Liverpool won the cup beating four German teams along the way, two from West Germany (Eintracht Frankfurt and the Gladbacher) and two from the DDR (Dynamo Dresden, who they also beat twice in later years, and SC Dynamo Berlin). Borussia Mönchengladbach 's two Champions League encounters with Liverpool involved Allan Simonsen, Berti Vogts, Herbert Wimmer, Rainer Bonhof and Jupp Heynckes. Borussia would eliminate an English club in 1979 en route to winning the UEFA Cup for the second time in their history. The English club was Manchester City whose manager, Malcolm Allison, had taken over a few months earlier from Tony Book and dismantled what seemed, in the earlier rounds, to be a star - studded side, to blood young wannabes. There were memorable encounters in the other European competitions. 
Borussia Dortmund 's wins over holders West Ham United and Liverpool (in the final in Glasgow) in the 1965 - 66 European Cup Winners ' Cup were memorable, as were West Ham 's win over TSV 1860 Munich at Wembley Stadium in the final of the same competition a year earlier, Everton 's semifinal elimination of Bayern in 1985 (they went on to win the Cup Winners ' Cup and the league) and Gianluca Vialli 's Chelsea 's win over VfB Stuttgart in the final of 1998. The UEFA Cup, which became a strong competition in the late seventies, eighties and 90s, before being devalued in recent years, threw up some wonderful Anglo - German encounters, among the most memorable of which were Ipswich Town 's victories both home and away over 1. FC Köln in the semifinal of the 1981 competition, which they won; Tottenham Hotspur 's 5 -- 1 aggregate mauling of Cologne in the 1974 competition and defeat of Bayern ten years later when winning the competition for the second time; debutant Watford 's comeback against Kaiserslautern in the first round of the 1983 -- 84 competition; Bayern 's thrashing of Nottingham Forest 7 -- 2 on aggregate -- after Forest had held Bayern to a 1 -- 1 draw in the first leg in Munich -- in 1996 en route to winning the cup; debutant Norwich City 's win at the Olympia Stadion in Munich before ousting Bayern at Carrow Road in 1993; and Kaiserslautern 's final - minutes turnaround against Tottenham Hotspur, managed by George Graham, in 1999. More recently, in 2009 Hamburg eliminated Manchester City, who had earlier in the campaign beaten Schalke in Germany, a team they had also beaten 5 -- 1 in the quarter finals of the 1969 - 70 European Cup Winners ' Cup, which City went on to win. Reinhard Libuda played for Schalke at that time (1969 -- 70), while City had the famous trio of Francis Lee, Colin Bell and Mike Summerbee. The English hold the upper hand even in these competitions. There were however some narrow escapes. 
Liverpool won their 1973 UEFA Cup Final first leg at Anfield 3 -- 0, only for Borussia Mönchengladbach to pull back to 3 -- 2 on aggregate by half - time in the second leg. The Reds hung on in the second half. In 1976, Queens Park Rangers, also making their debut, with Stan Bowles, Dave Thomas and Don Givens in their ranks, took a 3 -- 0 lead to the Müngersdorfer Stadion in Cologne and increased their lead there, only for the Germans to storm back with four goals and miss out on qualification on the away goals rule. There were also many encounters between English league sides and clubs from the DDR, which mostly ended in favour of the English sides, although these confrontations were less spectacular than those involving clubs from West Germany. Newport County, then from the English third division but representing Wales in the European Cup Winners ' Cup in 1981, went tantalisingly close to eliminating Carl Zeiss Jena after a 2 -- 2 draw in East Germany, but lost 0 -- 1 in the home leg after a blinding display by the East German keeper Hans - Ulrich Grapenthin. Jena made it to the final, where they lost to Dynamo Tbilisi of Georgia, then part of the Soviet Union. Liverpool had three confrontations with Dynamo Dresden, all of which they won, including a splendid 5 -- 1 performance at Anfield in the second round of the 1977 - 78 European Cup competition. Nottingham Forest played SC Dynamo Berlin in the quarter finals of the 1979 - 80 European Cup. Forest, the 1979 European Cup holders, had a mountain to climb to hold on to the trophy, having lost the first leg at the City Ground 0 -- 1 to a goal by Hans - Jürgen Riediger. A Trevor Francis and John Robertson inspired Forest ran riot in Berlin in the second leg as Forest triumphed 3 -- 1. Brian Clough 's Forest then went on to beat Ajax and Hamburg to retain the trophy, and a different Forest side, still managed by Clough, would eventually see off another East German side, Vorwärts Frankfurt / Oder, in the first round of the 1983 - 84 UEFA Cup. 
As for Dynamo Berlin, they suffered another home defeat (1 -- 2) in the 1981 - 82 European Cup, to another English side, Aston Villa. They managed to register a victory (1 -- 0) on English soil in the return leg, only to be ousted on the away goals rule by Villa, who went on to keep the European Cup in England for a sixth consecutive year, beating FC Bayern Munich in the Rotterdam final. Forest cast - off Peter Withe scored the only goal of the game against the run of play. For most of the second half, Bayern were camped inside the Villa half, hit the woodwork and went tantalisingly close on a number of occasions, but found substitute rookie goalkeeper Nigel Spink (who had replaced veteran Jimmy Rimmer after only a few minutes) in inspired form. The rivalry between the two nations has not prevented their respective nationals from playing in each other 's domestic leagues, in certain cases to high renown. Many German players have played in England, including Max Seeburg (who played for Chelsea, Tottenham Hotspur, Burnley, Grimsby Town and Reading), Bert Trautmann (Manchester City), Jürgen Klinsmann (Tottenham Hotspur), Christian Ziege (Liverpool, Middlesbrough and Tottenham Hotspur), Karlheinz Riedle (Liverpool and Fulham), Fredi Bobic (Bolton Wanderers), Dietmar Hamann (Newcastle United, Liverpool and Manchester City), Uwe Rösler, Eike Immel and Maurizio Gaudino (Manchester City), Markus Babbel (Liverpool), Jürgen Röber (Nottingham Forest), Robert Huth (Chelsea, Middlesbrough, Stoke City and Leicester City), Thomas Hitzlsperger and Stefan Beinlich (Aston Villa), Jens Lehmann (Arsenal), Moritz Volz (Arsenal, Fulham and Ipswich Town), Sascha Riether (Fulham), Michael Ballack (Chelsea), Mesut Özil (Arsenal), Per Mertesacker (Arsenal), Lukas Podolski (Arsenal), Jérôme Boateng (Manchester City) and Bastian Schweinsteiger (Manchester United). 
Trautmann was voted Football Writers ' Association Footballer of the Year in 1956 for continuing to play in goal for Manchester City in the 1956 FA Cup Final despite a neck injury. Klinsmann was voted the same accolade in 1995 while playing for Tottenham, where he pioneered the ' diving ' goal celebration. Far fewer Englishmen have played in Germany, the most famous being Kevin Keegan (Hamburger SV), David Watson (Werder Bremen), Tony Woodcock (1. FC Cologne and SC Fortuna Köln), and Michael Mancienne (Hamburg). Owen Hargreaves played for Bayern Munich for seven seasons before transferring to Manchester United in 2007. Keegan was twice European Footballer of the Year and a European Cup finalist during his time at Hamburg, where the German public nicknamed him "Mighty Mouse '', after a cartoon hero, because of his prolific scoring, his height (or lack thereof), his high level of mobility, and his ability to turn sharply and often while running at high speed. Woodcock was also a popular figure at Cologne.
Maxwell House - wikipedia Maxwell House is a brand of coffee manufactured by a like - named division of Kraft Heinz. Introduced in 1892 by wholesale grocer Joel Owsley Cheek (1852 - 1935), it was named in honor of the Maxwell House Hotel in Nashville, Tennessee. For many years, until the late 1980s, it was the largest - selling coffee in the United States. The company 's slogan is "Good to the last drop '', which is often incorporated into their logo and is printed on their labels. In 1884 Joel Cheek moved to Nashville and met Roger Nolley Smith, a British coffee broker who could reportedly tell the origin of a coffee simply by smelling the green beans. Over the next few years, the two worked on finding the perfect blend, and in 1892 Cheek approached the food buyer for the Maxwell House Hotel and gave him 20 pounds of his special blend for free. After a few days, the coffee was gone, and the hotel returned to its usual brand; after hearing complaints from patrons and others who liked Cheek 's coffee better, the hotel bought Cheek 's blend exclusively. Inspired by his success, Cheek resigned from his job as a coffee broker and formed a wholesale grocery distributor with partner John Neal, the Nashville Coffee and Manufacturing Company, specializing in coffee, with Maxwell House Coffee, as it came to be known, as the central brand. Later, the Nashville Coffee and Manufacturing Company was renamed the Cheek - Neal Coffee Company. Over the next several years, Maxwell House Coffee became a well - respected brand name that set the company apart from the competition. In 1915 Cheek - Neal began using the "Good to the last drop '' slogan to advertise Maxwell House Coffee. For several years, the ads made no mention of Theodore Roosevelt as the phrase 's originator. 
By the 1930s, however, the company was running advertisements that claimed that the former president, on a visit to Andrew Jackson 's estate, The Hermitage, near Nashville on October 21, 1907, had taken a sip of Maxwell House Coffee and proclaimed it to be "good to the last drop ''. During this time, Coca - Cola also used the slogan "Good to the last drop ''. Later, Maxwell House distanced itself from its original claim, admitting that the slogan was written by Clifford Spiller, former president of General Foods Corporation, and did not come from a Roosevelt remark overheard by Cheek - Neal. The phrase remains a registered trademark of the product and appears on its logo. The veracity of the Roosevelt connection to the phrase has never been historically established. In the local press coverage of Roosevelt 's October 21 visit, a story concerning Roosevelt and the cup of coffee he drank features a quote which does not resemble the slogan. The Maxwell House Company claimed in its own advertising that the Roosevelt story was true; in 2009, Maxwell House ran a commercial with Roosevelt impersonator Joe Wiegand, who tells the "Last Drop '' story. In 1942 General Foods Corporation, successor to the Postum Company that Charles William "C.W. '' Post had established, began supplying instant coffee to the U.S. armed forces. Beginning in the fall of 1945, this product, which by that time had come to be branded as Maxwell House Instant Coffee, entered test markets in the eastern U.S. and began national distribution the following year. In 1966 the company introduced "Maxwell House ElectraPerk, '' developed specifically for electric percolators. In 1969 General Foods in the UK launched granulated coffee, using a pantomime stage format in the London Hilton for a show called "Once Upon a Coffee Time. '' In this story, the weak "Prince of Powdah '' and his mentor "Reschem '' travel the world in search of blends. 
Meeting and falling in love with "Princess Purity, '' and fighting the dragon "Old Hat, '' the young man emerges as "Prince Granulo, '' heir to the Kingdom of Maxwell. This show was written by Michael Ingrams, produced by the Mitchell Monkhouse Agency, and designed by Malcolm Lewis and Chris Miles of Media. In 1976 the product was joined by "Maxwell House A.D.C. '' coffee, the name reflecting its intended use in automatic drip coffee makers such as Mr. Coffee, which were in the process of pushing aside traditional coffee - preparation methods. In 1972, the company had introduced "Max - Pax '' ground coffee filter rings, aimed at the then still - strong market for drip coffee preparation. Although this method, too, has been eclipsed, the Max - Pax concept was subsequently adapted as Maxwell House Filter Packs, first so called in 1989, for use in automatic coffee makers. By the 1990s, the company had quietly discontinued formulations for specific preparation methods. The brand is now marketed in ground and measured forms, as well as in whole - bean, flavored, and varietal blends. A higher - yield ground coffee, "Maxwell House Master Blend, '' was introduced in 1981 and "Rich French Roast, '' "Colombian Supreme, '' described as being 100 % Colombian coffee, and "1892, '' which was called a "slow - roasted '' formulation, in 1989. In 1992, the company added cappuccino products to its line with Cappio Iced Cappuccino in that year and Maxwell House Cappuccino in 1993. In recent years, the names of these products have been modified by the company to present a more "uniform '' Maxwell House brand image. 
Although General Foods had been marketing decaffeinated coffee under various brand names such as "Sanka '' since 1927, and "Brim '' and "Maxim '', the latter a freeze - dried instant coffee, since the 1950s, it refrained from actually selling Maxwell House - labeled decaffeinated coffee products until 1983, when it introduced ground "Maxwell House Decaffeinated '' into East Coast markets. (At the same time, a decaffeinated version of its long - established, lighter - tasting "Yuban '' brand was introduced on the West Coast.) "Maxwell House Instant Decaffeinated Coffee '' finally came to store shelves in 1985. A further modification of the decaf theme, "Maxwell House Lite '', a "reduced - caffeine '' blend, was introduced nationally in 1992 and its instant form the following year. During the 1920s, the Maxwell House brand began to be extensively advertised across the US. Total advertising expenditures rose from $19,955 in 1921 to $276,894 in 1924, and consequently the brand was cited as the most well - known coffee brand in a 1925 study of consumer goods. Maxwell House was the sponsor of Maxwell House Coffee Time, which ran from 1937 to 1949 and featured Baby Snooks, Charles Ruggles, Frank Morgan, Topper, and George Burns and Gracie Allen over the years in a predominately radio comedy and variety format. Maxwell House also sponsored Molly Goldberg on radio and later on television. Maxwell House was also the sponsor of the radio version of Father Knows Best. Each episode began with the youngest daughter Kitten Anderson asking, "Mother, is Maxwell House really the best coffee in the whole world? '' To that question, her mother, Betty Anderson, would reply, "Well, your father says so, and Father Knows Best! '' It was later replaced by Postum, a hot decaffeinated beverage that was touted by "Father '' James "Jim '' Anderson as being a calming beverage that would neither keep you up nor make you jittery. 
(In the 1980s, series lead Robert Young was an endorser of the Sanka brand that Maxwell House Decaffeinated later superseded in the United States.) Canadian advertising of Maxwell House has differed. In the early 1980s, actor Ricardo Montalban promoted Maxwell House in commercials with the theme "Morning and Maxwell House. '' In 1985, the company switched to more upbeat ' Me and Max ' campaigns with the common tagline "Hugga Mugga Max '' and "Good to the last drop. '' Maxwell House was the long - time sponsor of the early television series Mama, based on the play and film I Remember Mama. It starred Peggy Wood as the matriarch of a Norwegian - American family. It ran on the CBS network from 1949 to 1957 and was perhaps the first example of product placement on a TV show, as the family frequently gathered around the kitchen table for a cup of Maxwell House coffee, though these segments, aired towards the end of each episode, were usually kept separate from the main storyline. Early television programs were frequently packaged by the advertising agencies of individual sponsors. As this practice became less common in the late 1950s, Maxwell House, like most national brands, turned to "spot '' advertising, with the agencies creating sometimes long - running campaigns in support of their products. One such 1970s campaign for Maxwell House featured the actress Margaret Hamilton, the former wicked witch in The Wizard of Oz, as Cora, the general store owner who proudly announced that Maxwell House was the only brand she sold. Maxwell House was also a well - known sponsor of the Burns and Allen radio show, during which Maxwell House spots were incorporated into the plots of the actual radio scripts. Along with television advertising, Maxwell House used various print campaigns, always featuring the tagline "good to the last drop. 
'' The publication of its Passover Haggadah by the Joseph Jacobs Advertising Agency, beginning in 1932, made Maxwell House a household name with many American Jewish families. This was part of a marketing strategy by advertiser Jacobs, who also hired an Orthodox rabbi to certify that the coffee bean was technically not "kitniyot '' (because it was more like a berry than a bean) and, consequently, kosher for Passover. Maxwell House was the first coffee roaster to target a Jewish demographic. The Maxwell House Haggadah was also the Haggadah of choice for the annual White House Passover Seder which President Barack Obama conducted from 2009 to 2016. Maxwell House coffee is produced in Jacksonville, Florida. A second plant located in San Leandro, California, was closed in 2016. A third plant (the oldest of the group), located in Hoboken, New Jersey, was closed in the early 1990s. Its enormous rooftop sign, proclaiming the brand name and a dripping coffee cup, was a landmark visible in New York City across the Hudson River from Manhattan. The plant was later sold and demolished, and a condominium was subsequently built to occupy the site. A fourth facility, located in Houston, was divested by Kraft Foods to Maximus Coffee Group LP in late 2006. In March 2007, the neon coffee cup sign that glowed like a beacon over the city 's East End was removed from the side of the 16 - story coffee roaster building. The company also had a roasting plant in New Orleans for many years. That plant was sold to Folgers, whose gourmet roasting plant it remains as of June 2016.
when did the first home video game come out
Video game console - wikipedia A video game console is an electronic, digital or computer device that outputs a video signal or visual image to display a video game that one or more people can play. The term "video game console '' is primarily used to distinguish a console machine primarily designed for consumers to use for playing video games, in contrast to arcade machines or home computers. An arcade machine consists of a video game computer, display, game controller (joystick, buttons, etc.) and speakers housed in large chassis. A home computer is a personal computer designed for home use for a variety of purposes, such as bookkeeping, accessing the Internet and playing video games. Unlike similar consumer electronics such as music players and movie players, which use industry - wide standard formats, video game consoles use proprietary formats which compete with each other for market share. There are various types of video game consoles, including home video game consoles, handheld game consoles, microconsoles and dedicated consoles. Although Ralph Baer had built working game consoles by 1966, it was nearly a decade before the Pong game made them commonplace in regular people 's living rooms. Through evolution over the 1990s and 2000s, game consoles have expanded to offer additional functions such as CD players, DVD players, Blu - ray disc players, web browsers, set - top boxes and more. The first video games appeared in the 1960s. They were played on massive computers connected to vector displays, not analog televisions. Ralph H. Baer conceived the idea of a home video game in 1951. In the late 1960s, while working for Sanders Associates, Baer created a series of video game console designs. One of these designs, which gained the nickname of the 1966 "Brown Box '', featured changeable game modes and was demonstrated to several TV manufacturers, ultimately leading to an agreement between Sanders Associates and Magnavox. 
In 1972, Magnavox released the Magnavox Odyssey, the first home video game console which could be connected to a TV set. Ralph Baer 's initial design had called for a huge row of switches that would allow players to turn on and off certain components of the console (the Odyssey lacked a CPU) to create slightly different games like tennis, volleyball, hockey, and chase. Magnavox replaced the switch design with separate cartridges for each game. Although Baer had sketched up ideas for cartridges that could include new components for new games, the carts released by Magnavox all served the same function as the switches and allowed players to choose from the Odyssey 's built - in games. The Odyssey initially sold about 100,000 units, making it moderately successful, and it was not until Atari 's arcade game Pong popularized video games that the public began to take more notice of the emerging industry. By autumn 1975, Magnavox, bowing to the popularity of Pong, canceled the Odyssey and released a scaled - down version that played only Pong and hockey, the Odyssey 100. A second, "higher end '' console, the Odyssey 200, was released with the 100 and added on - screen scoring, up to four players, and a third game -- Smash. Almost simultaneously released with Atari 's own home Pong console through Sears, these consoles jump - started the consumer market. All three of the new consoles used simpler designs than the original Odyssey did with no board game pieces or extra cartridges. In the years that followed, the market saw many companies rushing similar consoles to market. After General Instrument released their inexpensive microchips, each containing a complete console on a single chip, many small developers began releasing consoles that looked different externally, but internally were playing exactly the same games. Most of the consoles from this era were dedicated consoles playing only the games that came with the console. 
These video game consoles were often just called video games because there was little reason to distinguish the two yet. While a few companies like Atari, Magnavox, and newcomer Coleco pushed the envelope, the market became flooded with simple, similar video games. Fairchild released the Fairchild Video Entertainment System (VES) in 1976. While there had been previous game consoles that used cartridges, either the cartridges had no information and served the same function as flipping switches (the Odyssey) or the console itself was empty (Coleco Telstar) and the cartridge contained all of the game components. The VES, however, contained a programmable microprocessor so its cartridges only needed a single ROM chip to store microprocessor instructions. RCA and Atari soon released their own cartridge - based consoles, the RCA Studio II and the Atari 2600 (originally branded as the Atari Video Computer System), respectively. The first handheld game console with interchangeable cartridges was the Microvision designed by Smith Engineering, and distributed and sold by Milton - Bradley in 1979. Crippled by a small, fragile LCD display and a very narrow selection of games, it was discontinued two years later. The Epoch Game Pocket Computer was released in Japan in 1984. The Game Pocket Computer featured an LCD screen with 75 × 64 resolution and could produce graphics at about the same level as early Atari 2600 games. The system sold very poorly, and as a result, only five games were made for it. Nintendo 's Game & Watch series of dedicated game systems proved more successful. It helped to establish handheld gaming as popular and lasted until 1991. Many Game & Watch games were later re-released on Nintendo 's subsequent handheld systems. The VES continued to be sold at a profit after 1977, and both Bally (with their Home Library Computer in 1977) and Magnavox (with the Odyssey2 in 1978) brought their own programmable cartridge - based consoles to the market. 
However, it was not until Atari released a conversion of the golden age arcade hit Space Invaders in 1980 for the Atari 2600 that the home console industry took off. Many consumers bought an Atari console so they could play Space Invaders at home. The unprecedented success of Space Invaders started the trend of console manufacturers trying to get exclusive rights to arcade titles, and the trend of advertisements for game consoles claiming to bring the arcade experience home. Throughout the early 1980s, other companies released video game consoles of their own. Many of the video game systems (e.g. ColecoVision) were technically superior to the Atari 2600, and marketed as improvements over the Atari 2600. However, Atari dominated the console market in the early 1980s. In 1983, the video game business suffered a severe crash. A flood of low - quality video games by smaller companies (especially for the 2600), industry leader Atari hyping games such as E.T. and a 2600 version of Pac - Man that were poorly received, and a growing number of home computer users caused consumers and retailers to lose faith in video game consoles. Most video game companies filed for bankruptcy, or moved into other industries, abandoning their game consoles. A group of employees from Mattel Electronics formed the INTV Corporation and bought the rights for the Intellivision. INTV alone continued to manufacture the Intellivision in small quantities and release new Intellivision games until 1991. All other North American game consoles were discontinued by 1984. Revenues generated by the video game industry fell by 97 % during the crash. In 1983, Nintendo released the Family Computer (or Famicom) in Japan. The Famicom supported high - resolution sprites, larger color palettes, and tiled backgrounds. This allowed Famicom games to be longer and have more detailed graphics. Nintendo began attempts to bring their Famicom to the U.S. after the video game market had crashed. 
In the U.S., video games were seen as a fad that had already passed. To distinguish its product from older game consoles, Nintendo released their Famicom as the Nintendo Entertainment System (NES) which used a front - loading cartridge port similar to a VCR, included a plastic "robot '' (R.O.B.), and was initially advertised as a toy. The NES was the highest selling console in the history of North America and revitalized the video game market. Mario of Super Mario Bros. became a global icon starting with his NES games. Nintendo took a somewhat unusual stance with third - party developers for its console. Nintendo contractually restricted third - party developers to three NES titles per year and forbade them from developing for other video game consoles. The practice ensured Nintendo 's market dominance and prevented the flood of trash titles that had helped kill the Atari, but was ruled illegal late in the console 's lifecycle. Sega 's Master System was intended to compete with the NES, but never gained any significant market share in the US or Japan and was barely profitable. It fared notably better in PAL territories. In Europe and South America, the Master System competed with the NES and saw new game releases even after Sega 's next - generation Mega Drive was released. In Brazil where strict importation laws and rampant piracy kept out competitors, the Master System outsold the NES by a massive margin and remained popular into the 1990s. Jack Tramiel, after buying Atari, downsizing its staff, and settling its legal disputes, attempted to bring Atari back into the home console market. Atari released a smaller, sleeker, cheaper version of their popular Atari 2600. They also released the Atari 7800, a console technologically comparable with the NES and backward compatible with the 2600. Finally, Atari repackaged its 8 - bit XE home computer as the XEGS game console. 
The new consoles helped Atari claw its way out of debt, but failed to gain much market share from Nintendo. Atari 's lack of funds meant that its consoles saw fewer releases, lower production values (both the manuals and the game labels were frequently black and white), and limited distribution. Additionally, two popular 8 - bit computers, the Commodore 64 and Amstrad CPC, were repackaged as the Commodore 64 Games System and Amstrad GX4000 respectively, for entry into the console market. In the latter part of the third generation, Nintendo introduced the Game Boy and Atari released the Atari Lynx portable game consoles, pioneering and solidifying the handheld video game industry. NEC brought the first fourth - generation console to market with their PC Engine (or TurboGrafx16) when Hudson Soft approached them with an advanced graphics chip. Hudson had previously approached Nintendo, only to be rebuffed by a company still raking in the profits of the NES. The TurboGrafx used the unusual HuCard format to store games. The small size of these proprietary cards allowed NEC to re-release the console as a handheld game console. The PC Engine enjoyed brisk sales in Japan, but its North American counterpart, the TurboGrafx, lagged behind the competition. The console never saw an official release in Europe, but clones and North American imports were available in some markets starting in 1990. NEC advertised their console as "16 - bit '' to highlight its advances over the NES. This started the trend of all subsequent fourth - generation consoles being advertised as 16 - bit. Many people still refer to this generation as the 16 - bit generation and often refer to the third generation as "8 - bit ''. Sega scaled down and adapted their Sega System 16 (used to power arcade hits like Altered Beast and Shinobi) into the Mega Drive (sold as the Genesis in North America) and released it with a near arcade - perfect port of Altered Beast. 
Sega 's console met lukewarm sales in Japan, but skyrocketed to first place in PAL markets, and made major inroads in North America. Propelled by its effective "Genesis does what Nintendon't '' marketing campaign, Sega capitalized on the Genesis 's technological superiority over the NES, faithful ports of popular arcade games, and competitive pricing. The arcade gaming company SNK developed the high - end Neo Geo MVS arcade system which used interchangeable cartridges similar to home consoles. Building on the success of the MVS, SNK repackaged the NeoGeo as the Neo Geo AES home console. Though technologically superior to the other fourth - generation consoles, the AES and its games were prohibitively expensive, which kept sales low and prevented it from expanding outside its niche market and into serious competition with Nintendo and Sega. The AES did, however, amass a dedicated cult following, allowing it to see new releases into the 2000s. Fourth generation graphics chips allowed these consoles to reproduce the art styles that were becoming popular in arcades and on home computers. These games often featured lavish background scenery, huge characters, broader color palettes, and increased emphasis on dithering and texture. Games written specifically for the NES, like Megaman, Shatterhand, and Super Mario Bros. 3 were able to work cleverly within its limitations. Ports of the increasingly detailed arcade and home computer games came up with various solutions. For example, when Capcom released Strider in the arcade they created an entirely separate Strider game for the NES that only incorporated themes and characters from the arcade. In 1990, Nintendo finally brought their Super Famicom to market and brought it to the United States as the Super NES (SNES) a year later. Its release marginalized the TurboGrafx and the Neo Geo, but came late enough for Sega to sell several million consoles in North America and gain a strong foothold. 
The same year the SNES was released, Sega released Sonic the Hedgehog, which spiked Genesis sales much as Space Invaders had for the Atari. Also, by 1992 the first fully licensed NFL Football game was released: NFL Sports Talk Football ' 93, which was available only on the Genesis. This impact on Genesis sales and the overall interest in realistic sports games would start the trend of licensed sports games being viewed as necessary for the success of a console in the US. While Nintendo enjoyed dominance in Japan and Sega in Europe, the competition between the two was particularly fierce and close in North America. Ultimately, the SNES outsold the Genesis, but only after Sega discontinued the Genesis to focus on the next generation of consoles. One trait that remains peculiar to the fourth generation is the huge number of exclusive games. Both Sega and Nintendo were very successful and their consoles developed massive libraries of games. Both consoles had to be programmed in assembly to get the most out of them. A game optimized for the Genesis could take advantage of its faster CPU and sound chip. A game optimized for the SNES could take advantage of its graphics and its flexible, clean sound chip. Some game series, like Castlevania, saw separate system exclusive releases rather than an attempt to port one game to disparate platforms. When compact disc (CD) technology became available midway through the fourth generation, each company attempted to integrate it into their existing consoles in different ways. NEC and Sega released CD add - ons to their consoles in the form of the TurboGrafx - CD and Sega CD, but both were only moderately successful. NEC also released the TurboDuo which combined the TurboGrafx - 16 and its TurboGrafx - CD add - on (along with the RAM and BIOS upgrade from the Super System Card) into one unit. 
SNK released a third version of the NeoGeo, the Neo Geo CD, allowing the company to release its games on a cheaper medium than the AES 's expensive cartridges, but it reached the market after Nintendo and Sega had already sold tens of millions of consoles each. Nintendo partnered with Sony to work on a CD add - on for the SNES, but the deal fell apart when they realized how much control Sony wanted. Sony would use their work with Nintendo as the basis for their PlayStation game console. While CDs became an increasingly visible part of the market, CD - reading technology was still expensive in the 1990s, limiting NEC 's and Sega 's add - ons ' sales. The first handheld game console released in the fourth generation was the Game Boy, on April 21, 1989. It went on to dominate handheld sales by an extremely large margin, despite featuring a low - contrast, unlit monochrome screen while all three of its leading competitors had color. Three major franchises made their debut on the Game Boy: Tetris, the Game Boy 's killer application; Pokémon; and Kirby. With some design (Game Boy Pocket, Game Boy Light) and hardware (Game Boy Color) changes, it continued in production in some form until 2008, enjoying a better than 18 - year run. The Atari Lynx included hardware - accelerated color graphics, a backlight, and the ability to link up to sixteen units together in an early example of network play when its competitors could only link 2 or 4 consoles (or none at all), but its comparatively short battery life (approximately 4.5 hours on a set of alkaline cells, versus 35 hours for the Game Boy), high price, and weak games library made it one of the worst - selling handheld game systems of all time, with less than 500,000 units sold. The third major handheld of the fourth generation was the Game Gear. 
It featured graphics capabilities roughly comparable to the Master System (better colours, but lower resolution), a ready - made games library by using the "Master - Gear '' adapter to play cartridges from the older console, and the opportunity to be converted into a portable TV using a cheap tuner adaptor, but it also suffered some of the same shortcomings as the Lynx. While it sold more than twenty times as many units as the Lynx, its bulky design - slightly larger than even the original Game Boy; relatively poor battery life - only a little better than the Lynx; and later arrival in the marketplace - competing for sales amongst the remaining buyers who didn't already have a Game Boy - hampered its overall popularity despite being more closely competitive with the Game Boy in terms of price and breadth of software library. Sega eventually retired the Game Gear in 1997, a year before Nintendo released the first examples of the Game Boy Color, to focus on the Nomad and non-portable console products. Other handheld consoles released during the fourth generation included the TurboExpress, a handheld version of the TurboGrafx - 16 released by NEC in 1990, and the Game Boy Pocket, an improved model of the Game Boy released about two years before the debut of the Game Boy Color. While the TurboExpress was another early pioneer of color handheld gaming technology and had the added benefit of using the same game cartridges or ' HuCards ' as the TurboGrafx16, it had even worse battery life than the Lynx and Game Gear - about three hours on six contemporary AA batteries - selling only 1.5 million units. During this time home computers gained greater prominence as a way of playing video games. 
The gaming console industry nonetheless continued to thrive alongside home computers, due to the advantages of much lower prices, easier portability, circuitry specifically dedicated towards gaming, the ability to be played on a television set (which PCs of the time could not do in most cases), and intensive first party software support from manufacturers who were essentially banking their entire future on their consoles. The first fifth - generation consoles were the Amiga CD32, 3DO and the Atari Jaguar. Although all three consoles were more powerful than the fourth generation systems, none of them would become serious threats to Sega or Nintendo. The 3DO initially generated a great deal of hype in part because of a licensing scheme where 3DO licensed the manufacturing of its console out to third parties, similar to VCR or DVD players. However, unlike its competitors who could sell their consoles at a loss, all 3DO manufacturers had to sell for profit. The Jaguar had three processors and no C libraries to help developers cope with it. Atari was ineffective at courting third parties and many of their first party games were poorly received. Many of the Jaguar 's games used mainly the slowest (but most familiar) of the console 's processors, resulting in titles that could easily have been released on the SNES or Genesis. To compete with emerging next gen consoles, Nintendo released Donkey Kong Country which could display a wide range of tones (something common in fifth - generation games) by limiting the number of hues onscreen, and Star Fox which used an extra chip inside of the cartridge to display polygon graphics. Sega followed suit, releasing Vectorman and Virtua Racing (the latter of which used the Sega Virtua Processor). Sega also released the 32X, an add - on for the Genesis, while their Sega Saturn was still in development. 
Despite public statements from Sega claiming that they would continue to support the Genesis / 32X throughout the next generation, Sega Enterprises forced Sega of America to abandon the 32X. The 32X 's brief and confusing existence damaged public perception of the coming Saturn and Sega as a whole. While the fourth generation had seen NEC 's TurboGrafx - CD and Sega 's Sega CD add - ons, it was not until the fifth generation that CD - based consoles and games began to seriously compete with cartridges. CD - ROMs were significantly cheaper to manufacture and distribute than cartridges were, and gave developers room to add cinematic cut - scenes, pre-recorded soundtracks, and voice acting that made more serious storytelling possible. NEC had been developing a successor to the TurboGrafx - 16 as early as 1990, and presented a prototype, dubbed the "Iron Man, '' to developers in 1992, but shelved the project as the CD - ROM2 System managed to extend the console 's market viability in Japan into the mid-90s. When sales started to dry up, NEC rushed its old project to the market. The PC - FX, a CD - based, 32 - bit console, had highly advanced, detailed 2D graphics capabilities, and better full - motion video than any other system on the market. It was, however, incapable of handling 3D graphics, forfeiting its chances at seriously competing with Sony and Sega. The console was limited to a niche market of dating sims and visual novels in Japan, and never saw release in Western markets. After the abortive 32X, Sega entered the fifth generation with the Saturn. Sega released several highly regarded titles for the Saturn, but a series of bad decisions alienated many developers and retailers. While the Saturn was technologically advanced, it was also complex, difficult, and unintuitive to write games for. In particular, programming 3D graphics that could compete with those on Nintendo and Sony 's consoles proved exceptionally difficult for third - party developers. 
Because the Saturn used quadrilaterals, rather than triangles, as its basic polygon, cross - platform games had to be completely rewritten to see a Saturn port. The Saturn was also a victim of internal politics at Sega. While the Saturn sold comparably well in Japan, Sega 's branches in North America and Europe refused to license localizations of many popular Japanese titles, holding they were ill - suited to Western markets. First - party hits like Sakura Taisen never saw Western releases, while several third - party titles released on both PlayStation and Saturn in Japan, like Grandia and Castlevania: Symphony of the Night, were released in North America and Europe as PlayStation exclusives. Born from a failed attempt to create a console with Nintendo, Sony 's PlayStation would not only dominate its generation but become the first console to sell over 100 million units by expanding the video game market. Sony actively courted third parties and provided them with convenient C libraries to write their games. Sony had built the console from the start as a 3D, disc - based system, and emphasized its 3D graphics that would come to be viewed as the future of gaming. The PlayStation 's CD technology won over several developers who had been releasing titles for Nintendo and Sega 's fourth generation consoles, such as Konami, Namco, Capcom, and Square. CDs were far cheaper to manufacture and distribute than cartridges were, meaning developers could release larger batches of games at higher profit margins; Nintendo 's console, on the other hand, used cartridges, unwittingly keeping third - party developers away. The PlayStation 's internal architecture was simpler and more intuitive to program for, giving the console an edge over Sega 's Saturn. Nintendo was the last to release a fifth generation console with their Nintendo 64, and when they finally released their console in North America, it came with only two launch titles. 
Partly to curb piracy and partly as a result of Nintendo 's failed disc projects with Sony (as SNES - CD) and Philips, Nintendo used cartridges for their console. The higher cost of cartridges drove many third party developers to the PlayStation. The Nintendo 64 could handle 3D polygons better than any console released before it, but its games often lacked the cut - scenes, soundtracks, and voice - overs that became standard on PlayStation discs. Nintendo released several highly acclaimed titles, such as Super Mario 64 and The Legend of Zelda: Ocarina of Time, and the Nintendo 64 was able to sell tens of millions of units on the strength of first - party titles alone, but its constant struggles against Sony would make the Nintendo 64 the last home console to use cartridges as a medium for game distribution until the Nintendo Switch in 2017. For handheld game consoles, the fifth generation began with the release of the Virtual Boy on July 21, 1995. Nintendo extensively advertised the Virtual Boy, and claimed to have spent US $ 25 million on early promotional activities. The Virtual Boy was discontinued in late 1995 in Japan and in early 1996 in North America. Nintendo discontinued the system without fanfare, avoiding an official press release. Taken as a whole, the marketing campaign was commonly thought of as a failure. The Virtual Boy was overwhelmingly panned by critics and was a commercial failure. The Virtual Boy failed for a number of reasons, among them "its high price, the discomfort caused by play (...) and what was widely judged to have been a poorly handled marketing campaign. '' The Nomad was released in October 1995 in North America only. The release was five years into the market span of the Genesis, with an existing library of more than 500 Genesis games. 
According to former Sega of America research and development head Joe Miller, the Nomad was not intended to be the Game Gear 's replacement and believes that there was little planning from Sega of Japan for the new handheld. Sega was supporting five different consoles: Saturn, Genesis, Game Gear, Pico, and the Master System, as well as the Sega CD and 32X add - ons. In Japan, the Mega Drive had never been successful and the Saturn was more successful than Sony 's PlayStation, so Sega Enterprises CEO Hayao Nakayama decided to focus on the Saturn. By 1999, the Nomad was being sold at less than a third of its original price. Meanwhile, the commercial failure of the Virtual Boy reportedly did little to alter Nintendo 's development approach and focus on innovation. According to Game Over, Nintendo laid blame for the machine 's faults directly on its creator, Gunpei Yokoi. The commercial failure of the Virtual Boy was said by members of the video game press to be a contributing factor to Yokoi 's withdrawal from Nintendo, although he had planned to retire years prior and finished another more successful project for the company, the Game Boy Pocket, which was released shortly before his departure. In 1996, Nintendo released the Game Boy Pocket: a smaller, lighter unit that required fewer batteries. It has space for two AAA batteries, which provide approximately 10 hours of game play. Although, like its predecessor, the Game Boy Pocket has no backlight to allow play in a darkened area, it did notably improve visibility and pixel response - time (mostly eliminating ghosting). The Game Boy Pocket was not a new software platform and played the same software as the original Game Boy model. First released in Japan on October 21, 1998, the Game Boy Color (abbreviated as GBC) added a (slightly smaller) color screen to a form factor similar in size to the Game Boy Pocket. It also has double the processor speed, three times as much memory, and an infrared communications port. 
Technologically, it was likened to the 8 - bit NES video game console from the 1980s, although the Game Boy Color has a much larger color palette (56 simultaneous colors out of 32,768 possible), and it received both classic NES ports and newer titles. It comes in seven different colors: clear purple, purple, red, blue, green, yellow, and silver for the Pokemon edition. Like the Game Boy Light, the Game Boy Color takes two AA batteries. It was the final handheld to have 8 - bit graphics. Despite Nintendo 's domination of the handheld console market, some competing consoles, such as the Neo Geo Pocket, WonderSwan, Neo Geo Pocket Color, and WonderSwan Color, appeared in the late 1990s and were discontinued within a few years of entering the market. The sixth generation witnessed a shift towards using DVDs for video game media. This brought games that were both longer and more visually appealing. The generation also added features such as online console gaming and both flash and hard drive storage for game data. During the sixth generation era, the handheld game console market expanded with the introduction of new devices from many different manufacturers. Nintendo maintained its dominant share of the handheld market with the release in 2001 of the Game Boy Advance, which featured many upgrades and new features over the Game Boy. Two redesigns of this system followed, the Game Boy Advance SP in 2003 and the Game Boy Micro in 2005. Also introduced were the Neo Geo Pocket Color in 1998 and Bandai 's WonderSwan Color, launched in Japan in 1999. South Korean company Game Park introduced its GP32 handheld in 2001, and with it came the dawn of open source handheld consoles. The Game Boy Advance line of handhelds has sold 81.51 million units worldwide as of September 30, 2010.
A major new addition to the market was the trend for corporations to include a large number of "non-gaming '' features into their handheld consoles, including cell phones, MP3 players, portable movie players, and PDA - like features. The handheld that started this trend was Nokia 's N - Gage, which was released in 2003 and doubled primarily as a mobile phone. It went through a redesign in 2004 and was renamed the N - Gage QD. A second handheld, the Zodiac from Tapwave, was released in 2004; based on the Palm OS, it offered specialized gaming - oriented video and sound capabilities, but it had an unwieldy development kit due to the underlying Palm OS foundation. With more and more PDAs arriving during the previous generation, the difference between consumer electronics and traditional computing began to blur and cheap console technology grew as a result. It was said of PDAs that they were "the computers of handheld gaming '' because of their multi-purpose capabilities and the increasingly powerful computer hardware that resided within them. This capability existed to move gaming beyond the last generation 's 16 - bit limitations; however, PDAs were still geared towards the typical businessman and lacked new, affordable software franchises to compete with dedicated handheld gaming consoles. Video game consoles had become an important part of the global IT infrastructure. It is estimated that video game consoles represented 25 % of the world 's general - purpose computational power in the year 2007. The features introduced in this generation include the support of new disc formats: Blu - ray Disc, utilized by the PlayStation 3, and HD DVD supported by the Xbox 360 via an optional $200 external accessory addition, that was later discontinued as the format war closed. Another new technology is the use of motion as input, and IR tracking (as implemented on the Wii). Also, all seventh generation consoles support wireless controllers. 
This generation also introduced the Nintendo DS, and the Nintendo DSi, which brought touchscreens into the mainstream and added cameras to portable gaming. For handheld game consoles, the seventh generation began with the release of the Nintendo DS on November 21, 2004. This handheld was based on a design fundamentally different from the Game Boy and other handheld video game systems. The Nintendo DS offered new modes of input over previous generations, such as a touch screen, the ability to connect wirelessly using IEEE 802.11 b, and a microphone to speak to in - game NPCs. On December 12, 2004, Sony released its first handheld, the PlayStation Portable (PSP). The PlayStation Portable was marketed at launch to an over - 25 or "core gamer '' market, while the Nintendo DS proved to be popular with both core gamers and new customers. Nokia revived its N - Gage platform in the form of a service for selected S60 devices. This new service launched on April 3, 2008. Other less - popular handheld systems released during this generation include the Gizmondo (launched on March 19, 2005 and discontinued in February 2006) and the GP2X (launched on November 10, 2005 and discontinued in August 2008). The GP2X Wiz, Pandora, and Gizmondo 2 were scheduled for release in 2009. Another aspect of the seventh generation was the beginning of direct competition between dedicated handheld gaming devices and increasingly powerful PDA / cell phone devices such as the iPhone and iPod Touch, with the latter aggressively marketed for gaming purposes. Simple games such as Tetris and Solitaire had existed for PDA devices since their introduction, but by 2009 PDAs and phones had grown powerful enough that complex graphical games could be implemented, with the advantage of distribution over wireless broadband. Aside from the usual hardware enhancements, consoles of the eighth generation focus on further integration with other media and increased connectivity.
The Wii U introduced a controller / tablet hybrid whose features include the possibility of augmented reality in gaming. The PlayStation 4 is Sony 's eighth generation console, featuring a "share '' button to stream video game content between devices, released on November 15, 2013. Microsoft released their next generation console, the Xbox One, on November 22, 2013. On March 3, 2017, following poor sales of the Wii U, Nintendo released the Nintendo Switch, a ' hybrid ' console consisting of a tablet with controller attachments that can be used as a mobile device or connected to a television via a dock. Game systems in the eighth generation also faced increasing competition from mobile device platforms such as Apple 's iOS and Google 's Android operating systems. Smartphone ownership was estimated to reach roughly a quarter of the world 's population by the end of 2014. The proliferation of low - cost games for these devices, such as Angry Birds with over 2 billion downloads worldwide, presents a new challenge to classic video game systems. Microconsoles, cheaper stand - alone devices designed to play games from previously established platforms, also increased options for consumers. Many of these projects were spurred on by the use of new crowdfunding techniques through sites such as Kickstarter. Notable competitors include the GamePop, OUYA, GameStick Android - based systems, the PlayStation TV, the NVIDIA SHIELD and Steam Machines. Despite the increased competition, the sales for major console manufacturers featured strong starts. The PlayStation 4 sold 1 million consoles within 24 hours in 2 countries, whilst the Xbox One sold 1 million consoles within 24 hours in 13 countries. As of December 6, 2016, over 50 million PlayStation 4 consoles have been sold worldwide, and 10 million Xbox One units have shipped to retailers (by the end of 2014), both outpacing sales of their seventh generation systems. 
In contrast, the Wii U was a commercial failure and ceased production in January 2017, having sold only 13.56 million units after four years on the market. The Nintendo Switch sold 2.74 million in its first month, making it the strongest hardware launch in the history of the company, and surpassed the Wii U by the end of 2017. Game cartridges consist of a printed circuit board housed inside a plastic casing, with a connector allowing the device to interface with the console. The circuit board can contain a wide variety of components. All cartridge games contain, at a minimum, read - only memory with the software written on it. Many cartridges also carry components that increase the original console 's power, such as extra RAM or a coprocessor. Components can also be added to extend the original hardware 's functionality (such as gyroscopes, rumble packs, tilt - sensors, light sensors, etc.); this is more common on handheld consoles where the user does not interact with the game through a separate video game controller. Cartridges were the first external media to be used with home consoles and remained the most common until continued improvements in optical media capacity in 1995 (the Nintendo 64, released in 1996, was the last mainstream game console to use cartridges). Nevertheless, the relatively high manufacturing costs and limited data capacity compared to optical media at the time saw them completely replaced by the latter for home consoles by the early 21st century, although they are still in use in some handheld video game consoles and in the Nintendo Switch. The extra memory and coprocessors that cartridges can carry also make cartridge - based consoles harder to reverse engineer for use in emulators. Several consoles such as the Master System and the TurboGrafx - 16 have used different types of smart cards as an external medium. These cards function similar to simple cartridges. Information is stored on a chip that is housed in plastic.
Cards are more compact and simpler than cartridges, though. This makes them cheaper to produce and smaller, but limits what can be done with them. Cards can not hold extra components, and common cartridge techniques like bank switching (a technique used to create very large games) were impossible to miniaturize into a card in the late 1980s. Compact Discs reduced much of the need for cards. Optical discs can hold more information than cards, and are cheaper to produce. The Nintendo GameCube and the PlayStation 2 use memory cards for storage, but the PlayStation Vita, Nintendo 3DS, and Nintendo Switch are currently the only modern systems to use cards for game distribution. Nintendo has long used cartridges with their Game Boy line of handheld consoles because of their durability, small size, stability (not shaking and vibrating the handheld when it is in use), and low battery consumption. Nintendo switched to cards starting with the DS, because advances in memory technology made putting extra memory on the cartridge unnecessary. The PlayStation Vita uses Sony 's own proprietary flash - memory Vita cards as one method of game distribution. Home computers have long used magnetic storage devices. Both tape drives and floppy disk drives were common on early microcomputers. Their popularity is in large part because a tape drive or disk drive can write to any material it can read. However, magnetic media is fragile and can be more easily damaged than game cartridges or optical discs. Among the first consoles to use magnetic media were the Bally Astrocade and APF - M1000, both of which could use cassette tapes through expansions. In Bally 's case, this allowed the console to see new game development even after Bally dropped support for it. While magnetic media remained limited in use as a primary form of distribution, three popular subsequent consoles also had expansions available to allow them to use this format.
The Starpath Supercharger can load Atari 2600 games from audio cassettes; Starpath used it to cheaply distribute their own games from 1982 to 1984, and today it is used by many programmers to test, distribute, and play homebrew software. The Disk System, a floppy disk - reading add - on to the Famicom (as the NES was known in Japan), was released by Nintendo in 1986 for the Japanese market. Nintendo sold the disks cheaply and set up vending machines where customers could have new games written to their disks up to 500 times. In 1999, Nintendo released another Japan - only floppy disk add - on, the Nintendo 64DD, for the Nintendo 64. In the mid-1990s, various manufacturers shifted to optical media, specifically CD - ROM, for games. Although they were slower at loading game data than the cartridges available at that time, they were significantly cheaper to manufacture and had a larger capacity than the existing cartridge technology. NEC released the first CD - based gaming system, the TurboGrafx - CD (an add - on for the TurboGrafx - 16), on December 4, 1988 in Japan and August 1, 1990 in the United States. Sega followed suit with the Sega CD (an add - on for the Sega Genesis) in Japan on December 12, 1991; Commodore stepped into the ring shortly after with the Amiga CD32, the first 32 - bit game console, on September 17, 1993. During the later half of the 1990s, optical media began to supplant cartridges due to their greater storage capacity and cheaper manufacturing costs, with the CD - based PlayStation significantly outpacing the cartridge - based Nintendo 64 in terms of sales. By the early 21st century, all of the major home consoles used optical media, usually DVD - ROM or similar discs, which widely replaced CD - ROM for data storage.
The PlayStation 3, PlayStation 4, and Xbox One systems use even higher - capacity Blu - ray optical discs for games and movies, while the Xbox 360 formerly used HD DVDs in the form of an external USB player add - on for video playback before it was discontinued. However, Microsoft still supports those who bought the accessory. Nintendo 's GameCube, Wii, and Wii U, meanwhile, use proprietary disc formats based on then - current industry standard discs -- the GameCube 's discs are based on mini-DVDs, the Wii 's on DVDs and the Wii U 's are believed to be based on Blu - rays. These discs offer somewhat smaller storage capacities compared to the formats they are based on, though the difference is significantly smaller compared to the gap between the N64 's cartridges and CDs. All seventh and eighth generation consoles offer some kind of Internet games distribution service, allowing users to download games for a fee onto some form of non-volatile storage, typically a hard disk or flash memory. Recently, the console manufacturers have been taking advantage of internet distribution with games, video streaming services like Netflix, Hulu Plus and film trailers being available. Each new generation of console hardware made use of the rapid development of processing technology. Newer machines could output a greater range of colors, more sprites, and introduced graphical technologies such as scaling, and vector graphics. One way console makers marketed these advances to consumers was through the measurement of "bits ''. The TurboGrafx - 16, Genesis, and Super NES were among the first consoles to advertise the fact that they contained 16 - bit processors. This fourth generation of console hardware was often referred to as the 16 - bit era and the previous generation as the 8 - bit. 
The bit - value of a console referred to the word length of a console 's processor (although the value was sometimes misused, for example, the TurboGrafx 16 had only an 8 - bit CPU, and the Genesis / Mega Drive had the 16 / 32 - bit Motorola 68000, but both had a 16 - bit dedicated graphics processor). As the graphical performance of console hardware is dependent on many factors, using bits was a crude way to gauge a console 's overall ability. For example, the NES, Commodore 64, Apple II, and Atari 2600 all used a very similar 8 - bit CPU. The difference in their processing power is due to other causes. For example, the Commodore 64 contains 64 kilobytes of RAM and the Atari 2600 has much less at 128 bytes of RAM. The jump from 8 - bit machines to 16 - bit machines to 32 - bit machines made a noticeable difference in performance, so consoles from certain generations are frequently referred to as 8 - bit or 16 - bit consoles. However, the "bits '' in a console are no longer a major factor in their performance. The Nintendo 64, for example, has been outpaced by several 32 - bit machines. Aside from some "128 Bit '' advertising slogans at the beginning of the sixth generation, marketing with bits largely stopped after the fifth generation.
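The RAM comparison above can be checked with a few lines of arithmetic; a minimal sketch using the figures quoted in the text:

```python
# RAM figures from the text: two machines with similar 8-bit CPUs
# but very different memory budgets.
c64_ram = 64 * 1024   # Commodore 64: 64 kilobytes
atari_ram = 128       # Atari 2600: 128 bytes

# The C64 has 512 times the RAM of the Atari 2600 despite a comparable
# CPU word length -- which is why "bits" alone are a crude measure.
print(c64_ram // atari_ram)  # -> 512
```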
when did illinois raise the drinking age to 21
U.S. history of alcohol minimum purchase age by State - wikipedia The alcohol laws of the United States regarding minimum age for purchase have changed over time. The history is given in the table below. Unless otherwise noted, if different alcohol categories have different minimum purchase ages, the age listed below is set at the lowest age given (e.g. if the purchase age is 18 for beer and 21 for wine or spirits, as was the case in several states, the age in the table will read as "18 '', not "21 ''). In addition, the purchase age is not necessarily the same as the minimum age for consumption of alcoholic beverages, although they have often been the same. As one can see in the table below, there has been much volatility in the states ' drinking ages since the repeal of Prohibition in 1933. Shortly after the ratification of the 21st amendment in December 1933, most states set their purchase ages at 21, since that was the voting age at the time. Most of these limits remained constant until the early 1970s. From 1969 to 1976, some 30 states lowered their purchase ages, generally to 18. This was primarily because the voting age was lowered from 21 to 18 in 1971 with the 26th amendment. Many states began to lower their minimum drinking age in response, most of this occurring in 1972 or 1973. Twelve states kept their purchase ages at 21 since the repeal of Prohibition and never changed them. From 1976 to 1983, several states voluntarily raised their purchase ages to 19 (or, less commonly, 20 or 21), in part to combat drunk driving fatalities. In 1984, Congress passed the National Minimum Drinking Age Act, which required states to raise their ages for purchase and public possession to 21 by October 1986 or lose 10 % of their federal highway funds. By mid-1988, all 50 states and the District of Columbia had raised their purchase ages to 21 (but not Puerto Rico, Guam, or the Virgin Islands, see Additional Notes below).
South Dakota and Wyoming were the final two states to comply with the age 21 mandate. The current drinking age of 21 remains a point of contention among many Americans, because it is higher than the age of majority (18 in most states) and higher than the drinking ages of most other countries. The National Minimum Drinking Age Act is also seen as a congressional sidestep of the tenth amendment. Although debates have not been highly publicized, a few states have proposed legislation to lower their drinking age, while Guam raised its drinking age to 21 in July 2010. 94. Citation for Wisconsin drinking law: https://www.revenue.wi.gov/Pages/FAQS/ise-atundrg.aspx
where are the maximum security prisons in canada
List of prisons in Canada - wikipedia This is a list of prisons in Canada. (Note: Some provincial facilities, though predominantly male, may house a small number of remanded female inmates.)
where did cream of tartar get its name
Potassium bitartrate - wikipedia Potassium bitartrate, also known as potassium hydrogen tartrate, with formula KC4H5O6, is a byproduct of winemaking. In cooking it is known as cream of tartar. It is the potassium acid salt of tartaric acid (a carboxylic acid). It can be used in baking or as a cleaning solution (when mixed with an acidic solution such as lemon juice or white vinegar). Potassium bitartrate crystallizes in wine casks during the fermentation of grape juice, and can precipitate out of wine in bottles. The crystals (wine diamonds) will often form on the underside of a cork in wine - filled bottles that have been stored at temperatures below 10 ° C (50 ° F), and will seldom, if ever, dissolve naturally into the wine. These crystals also precipitate out of fresh grape juice that has been chilled or allowed to stand for some time. To prevent crystals forming in homemade grape jam or jelly, the prerequisite fresh grape juice should be chilled overnight to promote crystallization. The potassium bitartrate crystals are removed by filtering through two layers of cheesecloth. The filtered juice may then be made into jam or jelly. In some cases they adhere to the side of the chilled container, making filtering unnecessary. The crude form (known as beeswing) is collected and purified to produce the white, odorless, acidic powder used for many culinary and other household purposes. In food, potassium bitartrate is used for: Additionally it is used as a component of: A similar acid salt, sodium acid pyrophosphate, can be confused with cream of tartar because of their common function as a component of baking powder. Potassium bitartrate can be mixed with an acidic liquid such as lemon juice or white vinegar to make a paste - like cleaning agent for metals such as brass, aluminum or copper, or with water for other cleaning applications such as removing light stains from porcelain.
This mixture is sometimes mistakenly made with vinegar and sodium bicarbonate (baking soda), which actually react to neutralize each other, creating carbon dioxide and a sodium acetate solution. Cream of tartar was often used in traditional dyeing, where the complexing action of the tartrate ions was used to adjust the solubility and hydrolysis of mordant salts such as tin chloride and alum. Cream of tartar, when mixed into a paste with hydrogen peroxide, can be used to clean rust from some hand tools, notably hand files. The paste is applied and allowed to set for a few hours and then washed off with a baking soda / water solution. After another rinse with water and thorough drying, a thin application of oil will protect the file from further rusting. Slowing the set time of plaster of Paris products (most widely used in gypsum plaster wall work and artwork casting) is typically achieved by the simple introduction of almost any acid diluted into the mixing water. A commercial retardant premix additive sold by USG to trade interior plasterers includes at least 40 % potassium bitartrate. The remaining ingredients are the same plaster of Paris and quartz - silica aggregate already prominent in the main product. This means that the only active ingredient is the cream of tartar. In many households, one of the most common uses for cream of tartar is for homemade play dough. Cream of tartar has been used internally as a purgative. Use as a purgative is dangerous because it may cause an excess of potassium, or hyperkalemia. Potassium bitartrate is the National Institute of Standards and Technology 's primary reference standard for a pH buffer. Using an excess of the salt in water, a saturated solution is created with a pH of 3.557 at 25 ° C (77 ° F). Upon dissolution in acid, potassium bitartrate will dissociate into acid tartrate, tartrate, and potassium ions. Thus, a saturated solution creates a buffer with standard pH.
Before use as a standard, it is recommended that the solution be filtered or decanted between 22 ° C (72 ° F) and 28 ° C (82 ° F). Potassium carbonate can be made by igniting cream of tartar, producing "pearl ash ''. This process is now obsolete, but it produced a higher - quality product (of reasonable purity) than "potash '' extracted from wood or other plant ashes.
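The NIST buffer value quoted above (pH 3.557 at 25 °C) can be converted to a hydrogen-ion concentration directly from the definition of pH; a minimal sketch:

```python
# Saturated potassium bitartrate (KC4H5O6) solution at 25 °C,
# using the NIST reference value given in the text.
pH = 3.557

# By definition, pH = -log10[H+], so [H+] = 10^(-pH).
h_conc = 10 ** (-pH)
print(f"[H+] = {h_conc:.2e} mol/L")  # roughly 2.8e-4 mol/L
```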
how old is kevin in the movie 1 mile to you
1 Mile to You - Wikipedia 1 Mile to You is a 2017 American sports romantic drama film directed by Leif Tilden and starring Billy Crudup, Graham Rogers, Liana Liberato, Stefanie Scott and Tim Roth. It is based on Jeremy Jackson 's 2002 novel Life at These Speeds. When a teenager loses his girlfriend in a horrible and devastating accident, he finds that his running keeps him connected to her during his "runner 's high '' moments in which his heart elates and becomes ecstatic. Chasing her memory drives him to run faster and win races for his new coach. Before long, his newfound notoriety attracts the attention of a whip - smart new girl who is determined to find out what 's really going on inside him. S. Jhoanna Robledo of Common Sense Media gave the film two stars out of five.
robin hood and evil queen once upon a time
Robin Hood (once Upon a Time) - Wikipedia Robin of Locksley, later known as Robin Hood, is a fictional character in ABC 's television series Once Upon a Time. He is portrayed by British actor / singer Sean Maguire, who became a series regular in the fifth season after making recurring appearances in the third and fourth seasons. He is the second actor to play the role in the series, as it was first played by Tom Ellis in the second season, but scheduling conflicts prevented Ellis from reprising the role, resulting in Maguire taking the role afterwards. In the Enchanted Forest, Robin of Locksley is a skilled archer who robs from the rich to give to the poor with his group of Merry Men. He is also the father of Roland, his son with Maid Marian. Prior to his life as a thief, Robin was a tavern owner who was faced with having his establishment taken over by the Sheriff of Nottingham. Robin later becomes the Prince of Thieves and gives himself the name Robin Hood, giving stolen money to the townspeople and setting up his band of Merry Men in Sherwood Forest with his close group of friends. One of his allies, Will Scarlet, betrays Robin, stealing for himself alone and exiting the group. After his pregnant wife Marian falls ill, Robin uses a necklace which contains a glamour spell from the Land of Oz to change his face when he breaks into Rumplestiltskin 's castle to steal a magic wand. Rumplestiltskin catches and tortures Robin with the intention of killing him, though Belle frees him and he heals Marian. After Regina loses her love Daniel, Tinker Bell leads her to Robin Hood, claiming he is her true love. However, Regina is scared to love again and refuses to meet him. Marian is later killed by Regina (later known as the Evil Queen) for supporting Snow White, leaving Robin to raise Roland alone. Whilst looking for a globe belonging to Rumplestiltskin, Neal and Mulan encounter Robin Hood in Rumplestiltskin 's castle.
He reluctantly uses Roland as bait to call Peter Pan 's shadow to allow Neal to travel to Neverland. When Peter Pan 's curse takes everyone from Storybrooke back to their original worlds, he rescues Regina and Mary Margaret in the Enchanted Forest from a flying monkey. He gradually forms a bond with Regina, and after Snow and Charming reluctantly enact a curse to save their baby, Robin is sent to Storybrooke with no memories of the past year. He again grows close to Regina, and the two later begin a romantic relationship. However, this relationship is shattered after Emma, who traveled to the past with Captain Hook, unintentionally saved Marian before she could be executed by Regina and brought her back to the present. Robin Hood and Roland are reunited with Marian, and Robin decides he must remain loyal to Marian, leaving Regina. However, after Marian is cursed with a freezing spell, Robin asks for Regina 's help to save her, with his love for Regina overpowering him. Robin then assists Regina in finding the Author of Henry 's storybook in order to find Regina 's happy ending. After Marian is saved by Regina, Robin chooses to be with the latter, though after Marian again appears to suffer from remnants of the spell, she must be taken over the town line to cure herself. Regina insists that Robin accompany Marian into the real world, reluctantly giving up her own happiness to save Robin 's family. In New York City, Robin learns that Marian is in fact Zelena, the Wicked Witch of the West and Regina 's half - sister, who killed Marian and posed as her. Before this revelation, however, Zelena announces she is pregnant with Robin 's child. Regina brings Robin, Zelena and Roland back to Storybrooke, and places Zelena in the psychiatric ward of Storybrooke General Hospital. Robin joins everyone in the quest to find Merlin and vanquish the Darkness forever. However, he is nearly killed by a rogue member of the Knights of the Round Table.
He is saved by Emma, but being the Dark One, she accidentally summons a Fury onto him. Back in Storybrooke, he is saved from being sent to the Underworld by Regina, David, Leroy, King Arthur and Mary Margaret, who offer portions of their lives to pay off the price of saving his life. When Hook, also a Dark One, is tricked into resurrecting the previous Dark Ones, Robin is one of those marked with Charon 's emblem to be sent back in their place; however, Hook has a change of heart and kills himself along with the Dark Ones. Robin later joins everyone on their rescue mission to the Underworld. When his daughter is dragged there by accident, he remains in hiding to keep her safe, refusing to even name her while in the Underworld, since Hades could use that to have power over her. Though his life is threatened by the deranged Prince James, he is saved by Emma and David. Upon returning to Storybrooke, Robin gives his life to save Regina from Hades and is, surprisingly, avenged by Zelena. He is given a funeral, leaving Regina broken - hearted once again. At his funeral, Zelena and Regina agree to name his daughter after him. In the season 6 episode "Wish You Were Here '', the Evil Queen used one of her genie wishes to wish that Emma Swan had never become the savior. The wish created a whole new duplicate realm, known as the Wish Realm, which consists of alternate versions of the Enchanted Forest characters with different backstories covering the 28 years after the point at which Regina would have cast her curse. This includes an alternate version of Robin, who never married Marian and never became an honorable thief. This wish version of Robin is officially known as Robin of Locksley, which was also the original Robin 's title before he made himself known as "Robin Hood. '' In the Wish Realm, Regina and Emma encounter an alternate version of Robin.
This new Robin tries to rob Regina and Emma, distracting the former long enough for them to miss the portal to Storybrooke. After robbing them he leaves, and is later found by Regina drinking at a tavern; she asks him whether he is happy, wondering whether it would have been best if they had never met back in the real world. After the Sheriff of Nottingham captures both of them, this Robin reveals that he steals from the rich to make himself rich, that Marian died before they were married, and that his name remained "Robin of Locksley", so Regina realizes that in the Wish Realm his life is unhappy. After Rumplestiltskin rescues them, he throws them into his dungeon, the Evil Queen of the Wish Realm having left Belle for dead in the tower; Robin is then told about the other realm, learns that his counterpart there is dead, and asks whether the "other" Robin had a good life. Robin and Regina both manage to escape and head back to Pinocchio's house, where he has finished the wardrobe to send them back to Storybrooke. Robin gives Regina a lucky feather from one of his arrows, which is what the "other" Robin had wanted her to have, and after talking with Emma, Regina asks Robin if he wants to come along, and he agrees. Once they return, Regina starts to lose hope as Robin takes long to appear, but he finally arrives, is hugged by Regina, and is officially welcomed to Storybrooke. Regina tries to help Robin adjust to his new life in Storybrooke, and after a heated argument with Zelena about Robin not being allowed any custody of his daughter, Robin leaves to kill Keith, the Sheriff of Nottingham. After Regina stops him, she tells him about the "other" Robin's children and the complications he faced; this causes Robin to argue with Regina that this new life is not what he wanted. After talking things through they kiss, but Regina does not feel the love, and Robin later breaks into her vault and steals a box of potions.
Not wanting to live in Storybrooke anymore, Robin teams up with Zelena to break the barrier spell and escape to New York, but the attempt fails as the potions do not work; Regina, saddened, promises to break the barrier so he can leave. Afterwards, when Robin is alone in the woods, the Evil Queen, in snake form, escapes from her cage and bites Robin's hand, which carries a spell from an earlier potion; this turns her back to normal, and she tells him she has big plans for Storybrooke and offers to give him a tour from her point of view. Later, he is sent back to the Wish Realm, where he meets the Evil Queen, who has now repented and changed, and they soon start a new life with each other. However, in the Wish Realm they are chased by angry villagers and Prince Henry, who want the Evil Queen dead, so they escape to the Enchanted Forest and begin to steal from the rich and give to the poor. He is later mentioned to be with Friar Tuck when the heroes return to the Enchanted Forest, and he later shoots an arrow to the Evil Queen with a ring attached to it, asking for her hand in marriage and a new beginning. At first, the producers wanted Ellis to continue in the role, but scheduling conflicts prevented him from reprising it, resulting in Maguire taking over the role.
who sang do you feel like i do
Do You Feel Like We Do - wikipedia "Do You Feel Like We Do" is a song by Peter Frampton, originally appearing on the Frampton's Camel album he released in 1973. The song became one of the highlights of his live performances in the following years, and it became one of the three hit singles released from his Frampton Comes Alive! album, released in 1976. The live version was recorded at the State University of New York at Plattsburgh's Memorial Hall. This live version is featured in Guitar Hero 5 and as downloadable content for Rock Band 3. The studio version of the song is available as downloadable content for Rocksmith 2014. The song was written and composed in the early 1970s with members of Frampton's band, then called "Frampton's Camel". It was released on the 1973 Frampton's Camel album. This version was much shorter than the live version (approximately 14 minutes), with the studio recording totaling 6 minutes and 44 seconds, and it was not released as a single. The closing notes of the studio version feature a guitar riff that bears a strong resemblance to the Beatles' "Baby's in Black". After the lack of success of his "Camel", Frampton performed under his own name and began touring the United States extensively for the next two years, supporting acts such as The J. Geils Band and ZZ Top, as well as performing his own shows at smaller venues. As a result, he developed a strong live following while his albums sold moderately and his singles failed to chart. "Do You Feel Like We Do" became the closing number of his set and one of the highlights of his show. His concert version was considerably longer, with the version recorded on Frampton Comes Alive! alone exceeding 14 minutes: 4 of them spent in the rock intro, 4 in the loud subito fortissimo outro, and 6 in the long, quiet bridge, featuring several instrumental solos utilizing Bob Mayo's keyboard and Frampton's guitar and talk box skills.
The most famous of these were the talk box solos, which were performed using an effects pedal that redirects a guitar's sound through a tube into the performer's mouth, allowing the guitar to mimic human speech, similarly to a vocoder. Inspiration for the talk box came from Frampton listening to the call letters of Radio Luxembourg. Following the success of the talk box solos, Frampton subsequently marketed such talk boxes under his own "Framptone" brand. To this day, Frampton is considered an exemplary talk box performer, with his solos arguably being the selling point of some of his albums and songs. As a result of the strength of Frampton's live show, A&M Records decided to release a live album taped when Frampton performed at Winterland in San Francisco. Frampton Comes Alive! was originally going to be a single album until Jerry Moss asked, "Where's the rest?" "Do You Feel Like We Do" was one of the tracks added to the album as a result of the decision to expand it into a double album. The selection had been recorded live on November 22, 1975, on the college campus of SUNY Plattsburgh in Plattsburgh, New York. "Do You Feel Like We Do" was released as the third single from Frampton Comes Alive! in September 1976. On September 8, U.S. President Gerald Ford invited him to stay at the White House as a result of the success of Frampton Comes Alive! It was edited down extensively for the 45 RPM single and promo single for pop radio stations, but the single version was still 7 minutes long. Many radio stations were known to edit the song down even further, to make it fit into the then tightly programmed AM radio formats. It reached number 10 on the US pop charts and number 39 in the UK, making it one of the longest songs to reach the US top 10. Many album-oriented rock stations played the full 14-minute version, most notably WBCN in Boston, Massachusetts (now WWBX).
WBCN is credited with being the first album rock station to play the full length of the album on air. Frampton continues to play this song live to close his concerts; he played it in his solo spot while touring with Ringo Starr & His All-Starr Band in 1997 and 1998, in a version running about 19 minutes that featured a piano solo by Gary Brooker and a bass solo by Jack Bruce. The Simpsons episode "Homerpalooza" featured the song with the London Symphony Orchestra supporting Frampton. Frampton appeared in a 2009 Geico commercial (part of their series coupling an actual Geico customer with a celebrity), playing talk box guitar commentary while a woman described her auto accident and Geico's help in getting a settlement. Frampton closed the commercial with an improvised riff from the song. Frampton plays the opening notes of the song in a 2012 TV commercial for the 2012 Buick Verano. The title of the song is "Do You Feel Like We Do", although the lyrics read, "Do you feel like I do?" Only after Bob Mayo's keyboard solo in the Frampton Comes Alive! version does Frampton sing, "Do you feel like we do?" He then sings "Do you feel like we do?" through the talk box in the midst of his extended guitar solo. This song was covered by Tesla, whose version can be found on Disc Two of their 2007 album Real to Reel, and by Night Ranger, whose version can be found on their album Feeding off the Mojo, where it is coupled with their cover of the Beatles' "Tomorrow Never Knows." Local H recorded a very faithful cover of the Frampton Comes Alive! version of the song, complete with lengthy talk box solo and fake crowd noise. It can be found on the "Eddie Vedder" CD single. On the Strong Bad main page of the Homestar Runner website (accessible through the game Homestar Talker), this song is referenced when the viewer scrolls over the Characters button. A clip from the song plays, and Strong Bad says, "Oh, check it out, this guy's guitar is totally talking!" A cover by Warren Haynes is included on the iTunes deluxe edition of Frampton Comes Alive!
what is one of the differences between jainism and hinduism
Jainism and Hinduism - Wikipedia Jainism and Hinduism are two ancient Indian religions. There are some similarities and differences between the two religions. Temples, gods, rituals, fasts and other religious components of Jainism are different from those of Hinduism. "Jain" is derived from the word Jina, referring to a human being who has conquered all inner passions (like anger, attachment, greed and pride) and possesses Kevala Jnana (pure infinite knowledge). Followers of the path shown by the Jinas are called Jains. Followers of Hinduism are called Hindus. Jainism and Hinduism have many similar characteristic features, including the concepts of samsara, karma and moksha. However, they differ over the precise nature and meaning of these concepts. The doctrine of Jainism has minor similarities with the Nyaya-Vaisheshika and Samkhya schools. Jain doctrine teaches an atomism which is also found in the Vaisheshika system, and an atheism which is found in Samkhya. Within the doctrine of Jainism there exist many metaphysical concepts which are not known in Hinduism, some of which are dharma and adharma tattva (which are seen as substances within the Jain metaphysical system), gunasthanas and lesyas. The epistemological concepts of anekantavada and syadvada are not found in the Hindu system. In the past, attempts were made to merge the concepts of the Hindu gods and the Tirthankaras of Jainism. The cosmography of the Hindus resembles that of the Jains, and there are similar names of heavenly gods within these systems. In the Upanishads there also occur the first statements of the view, dominant in Jain teachings and elsewhere, that rebirth is undesirable and that it is possible, by controlling or stopping one's actions, to put an end to it and attain a state of deliverance (moksha) which lies beyond action.
In Hinduism, moksha means the merging of the soul with the universal soul or eternal being and escaping the cycle of births and deaths; in Jainism, it is an actionless and peaceful existence. In Vedic philosophy or Sanatana Dharma, salvation is giving up the sense of being a doer and realizing the Self to be the same as the Universe and God. In Jainism, salvation can be achieved only through self-effort and is considered to be the right of human beings. In Jainism, one definite path to attain liberation (moksha) is prescribed: the threefold path consisting of the three jewels of Jainism (right belief, right knowledge, right conduct). In Hinduism, no single definite path to salvation is prescribed. According to Jain cosmology, the universe is eternal. In Hinduism, it is believed to be infinite but cyclical: it was made by a creator, and is destroyed by God, to be created again. Karma is an invisible force in Hinduism, whereas in Jainism it is a form of matter which can stick to the soul. In Hinduism, gods are worshiped in several ways and for several reasons, such as knowledge, peace, wisdom and health; it is also believed to be one's duty to pray to God, as God is considered one's maker (as we originated from them, are sustained in them, and will at last merge with them), and for moksha (by attaining moksha one can join him and escape this Maya, or dualistic nature, which makes us think that we are different from the universal soul); gods are also offered food as a mark of respect. In Jainism, the Tirthankaras represent the true goal of all human beings, and their qualities are worshiped by the Jains. The religion of the Jains included women in their fourfold sangha: the religious order of Jain laymen, laywomen, monks and nuns. There was a disagreement between early Hinduism and ascetic movements such as Jainism over scriptural access for women. However, the early Svetambara scriptures prevented pregnant women, young women, and those who had a small child from entering the ranks of nuns.
Regardless, the number of nuns given in those texts was always double the number of monks. Parshvanatha and Mahavira, the two historical teachers of Jainism, had large numbers of female devotees and ascetics. Tirthankara Mahavira and the Jain monks are credited with raising the status of women. Hindus do not accept any Jain text and Jains do not recognize any Hindu scripture. The scriptures known as the Vedas are regarded by Hindus as one of the foundations of Hinduism. Those who rejected the Vedas as the prime source of religious knowledge were labeled "nāstika". As a consequence, Jainism and Buddhism were categorized as nāstika darśana. The orthodox schools of Hinduism, such as Vedanta, Mimamsa and Samkhya, claim the Sruti do not have any author and hence are supreme to other religious scriptures. This position was countered by Jains, who said that calling the Vedas authorless was equivalent to saying that anonymous poems are written by nobody. Jain scriptures, on the contrary, were believed by them to be of human origin, brought through omniscient teachers, and hence were claimed to have greater worth. According to Jains, the origin of the Vedas lies with Marichi, the son of Bharata Chakravarti, who was the son of the first Tirthankara, Rishabha. Jains maintain that these scriptures were later modified. Jains pointed out that Hindus do not know their own scriptures, since they were unaware of the names of the tirthankaras present in the Vedas. Jains had a long-standing debate with the Mimamsa school of Hinduism. Kumarila Bhatta, a proponent of the Mimamsa school, argued that the Vedas are the source of all knowledge and that it is through them that humans can differentiate between right and wrong. Jain monks, such as Haribhadra, held that humans are already in possession of all knowledge, which only needs to be illuminated or uncovered in order to attain the status of omniscience. The practice of Vedic animal sacrifice was opposed by Jains.
Hemachandra, a Jain monk, cites passages from the Manusmriti, one of the law books of the Hindus, to demonstrate how, in light of false scriptures, Hindus have resorted to violence. Akalanka, another Jain monk, sarcastically said that if killing could result in enlightenment, one should become a hunter or fisherman. The rejection of Hindu epics and scriptures was dominant in Jainism from very early times. The central Hindu scriptures and epics, like the Vedas, Mahabharata and Ramayana, are categorized as false scriptures in the Nandi-sutra, one of the Svetambara canonical texts. Later, Jains adapted various Hindu epics in accordance with their own system, and there were disputes between Jains and Hindus in the form of these epics. Within the doctrine of Jainism, the tirthankara holds the highest status. Hemachandra says that a deva (roughly, a god) is one who has conquered his internal desires and passions; this requirement, according to him, was fulfilled only by the tirthankara. The gods of the Hindus were regarded as ones in whom desires and internal passions are absent, since they are the ones who created them, but who act as if affected by the three gunas (sattva, rajas and tamas) in order to teach dharma in the form of epics. Some personages mentioned in the Vedas and in Jain scriptures are the same. There is mention of the first tirthankara, Rishabhanatha, in the Rig Veda (X.12.166) and the Vishnu Purana. In the Skanda Purana (chapter 37) it is stated that "Rishabha was the son of Nabhiraja, and Rishabha had a son named Bharata, and after the name of this Bharata, this country is known as Bharata-varsha." In the "Brahmottara-candam" section of the Brahma Purana, the narrator Suta describes many matters relating to Shaivism, and in the 16th portion there is a story about Bhadrabahu receiving instructions in a mantra from Rishabha yogi.
The Linga Purana mentions that in every kali yuga, Lord Shiva has incarnated, and that in one kali yuga he was a Yogeshwara (one of his 28 incarnations) named Rishabha. Jainism is considered by some to be distinct from the Vedic religion and to descend from a pre-Aryan tradition, viz. the Sramana or Aarahata tradition. Jains and Hindus have coexisted in the Tamil country since at least the second century BCE. Competition between Jains and Brahmans, and between Jains and Shaivas, is a frequent motif of medieval western Indian narratives, but the two communities for the most part coexisted and co-prospered. Shaiva kings patronised Jain mendicants, and Jain officials patronised Brahmana poets. Around the 8th century CE, the Hindu philosopher Ādi Śaṅkarācārya tried to restore the Vedic religion, bringing forward the doctrine of Advaita. Vaishnavism and Shaivism also began to rise, particularly in the southern Indian states. According to a Saivite legend, the Pandya king Koon Pandiyan ordered a massacre of 8,000 Jain monks. This event is depicted graphically on the walls of Tivatur in North Arcot. However, this legend is not found in any Jain text, and is believed to be a fabrication made up by the Saivites to prove their dominance. Over time, some Jain temples, such as the Trikkur Mahadeva Temple and the Padmakshi Temple, were converted into Hindu temples, particularly in South India. Jain scholars and some monks generally allowed a cautious integration with Hindu society. Today there are many common aspects in the social and cultural life of Hindus and Jains, and it is quite difficult to differentiate a lay Jain from a lay Hindu. The Jain code of conduct is quite similar to that found in the Hindu Dharmasastra, the Manusmriti and other law books of the Brahmans. Many Jains now worship Hindu gods and celebrate Hindu festivals.
The difference in the rituals of practitioners of the two religions would be that Jains do not give any importance to bathing in holy water or to cremating or burying ascetics. According to the religious scholar M. Whitney Kelting, some of the "names and narratives" in the Hindus' list of satis are also found in the Jain tradition. In the Hindu context, a sati is a virtuous wife who protects her husband and his family and has the "intention to die before, or with," her husband. Kelting notes that those satis who die on the funeral pyre of their husband, or who "intended to die" but were prevented from death, may attain a status called satimata. Kelting says that the Jain tradition, owing to its principles of non-violence and equanimity, does not allow self-immolation; Jains instead see renunciation, rather than self-sacrifice, as the highest ideal for a Jain sati. Some Hindus regard Jainism as simply another branch of Hinduism. Jain apologists, like Champat Rai Jain, held that Hindus are Jaina allegorists who have allegorised the Jain teachings. However, such claims are not supported by historical facts. With the onset of British colonialism, select groups of Indians developed responses to British dominance and to the British critique of Hinduism. In this context, various responses toward Jainism developed. The Arya Samaj was founded by Dayanand Saraswati (1824-1883), who "was the solitary champion of Vedic authority and infallibility". Swami Dayanand Saraswati authored Satyarth Prakash, a book containing the basic teachings of Saraswati and the Arya Samaj. It contains "Dayananda's bitter criticisms of the major non-Vedic religions of Indian origins." In the Satyarth Prakash, he writes that he regarded Jainism as "the most dreadful religion", and that Jains are "possessed of defective and childish understanding." A recent strategy, exemplified by Rajiv Malhotra, is the use of the term dharma as a common denominator, which also includes Jainism and Buddhism.
Dharmasthala Temple shows the communal harmony between Jains and Hindus, as the priests of the temple are Shivalli Brahmins, who are Vaishnava, and the administration is run by a Jain Bunt family.
where did the atomic bomb hit in hiroshima
Atomic bombings of Hiroshima and Nagasaki - wikipedia During the final stage of World War II, the United States detonated two nuclear weapons over the Japanese cities of Hiroshima and Nagasaki on August 6 and 9, 1945, respectively. The United States dropped the bombs after obtaining the consent of the United Kingdom, as required by the Quebec Agreement. The two bombings killed 129,000 -- 226,000 people, most of whom were civilians. They remain the only use of nuclear weapons in the history of warfare. In the final year of the war, the Allies prepared for what was anticipated to be a very costly invasion of the Japanese mainland. This undertaking was preceded by a conventional and firebombing campaign that destroyed 67 Japanese cities. The war in Europe had concluded when Germany signed its instrument of surrender on May 8, 1945. As the Allies turned their full attention to the Pacific War, the Japanese faced the same fate. The Allies called for the unconditional surrender of the Imperial Japanese armed forces in the Potsdam Declaration on July 26, 1945 -- the alternative being "prompt and utter destruction ''. The Japanese rejected the ultimatum and the war continued. By August 1945, the Allies ' Manhattan Project had produced two types of atomic bombs, and the 509th Composite Group of the United States Army Air Forces (USAAF) was equipped with the specialized Silverplate version of the Boeing B - 29 Superfortress that could deliver them from Tinian in the Mariana Islands. Orders for atomic bombs to be used on four Japanese cities were issued on July 25. On August 6, one of its B - 29s dropped a Little Boy uranium gun - type bomb on Hiroshima. Three days later, on August 9, a Fat Man plutonium implosion - type bomb was dropped by another B - 29 on Nagasaki. The bombs immediately devastated their targets.
Over the next two to four months, the acute effects of the atomic bombings killed 90,000 -- 146,000 people in Hiroshima and 39,000 -- 80,000 people in Nagasaki; roughly half of the deaths in each city occurred on the first day. Large numbers of people continued to die from the effects of burns, radiation sickness, and other injuries, compounded by illness and malnutrition, for many months afterward. In both cities, most of the dead were civilians, although Hiroshima had a sizable military garrison. Japan announced its surrender to the Allies on August 15, six days after the bombing of Nagasaki and the Soviet Union 's declaration of war. On September 2, the Japanese government signed the instrument of surrender, effectively ending World War II. The ethical and legal justification for the bombings is still debated to this day. In 1945, the Pacific War between the Empire of Japan and the Allies entered its fourth year. Most Japanese military units fought fiercely, ensuring that the Allied victory would come at an enormous cost. The 1.25 million battle casualties incurred in total by the United States in World War II included both military personnel killed in action and wounded in action. Nearly one million of the casualties occurred during the last year of the war, from June 1944 to June 1945. In December 1944, American battle casualties hit an all - time monthly high of 88,000 as a result of the German Ardennes Offensive. America 's reserves of manpower were running out. Deferments for groups such as agricultural workers were tightened, and there was consideration of drafting women. At the same time, the public was becoming war - weary, and demanding that long - serving servicemen be sent home. In the Pacific, the Allies returned to the Philippines, recaptured Burma, and invaded Borneo. Offensives were undertaken to reduce the Japanese forces remaining in Bougainville, New Guinea and the Philippines. 
In April 1945, American forces landed on Okinawa, where heavy fighting continued until June. Along the way, the ratio of Japanese to American casualties dropped from 5:1 in the Philippines to 2:1 on Okinawa. Although some Japanese soldiers were taken prisoner, most fought until they were killed or committed suicide. Nearly 99% of the 21,000 defenders of Iwo Jima were killed. Of the 117,000 Okinawan and Japanese troops defending Okinawa in April-June 1945, 94% were killed; 7,401 Japanese soldiers surrendered, an unprecedentedly large number. As the Allies advanced towards Japan, conditions became steadily worse for the Japanese people. Japan's merchant fleet declined from 5,250,000 gross tons in 1941 to 1,560,000 tons in March 1945, and 557,000 tons in August 1945. Lack of raw materials forced the Japanese war economy into a steep decline after the middle of 1944. The civilian economy, which had slowly deteriorated throughout the war, reached disastrous levels by the middle of 1945. The loss of shipping also affected the fishing fleet, and the 1945 catch was only 22% of that in 1941. The 1945 rice harvest was the worst since 1909, and hunger and malnutrition became widespread. U.S. industrial production was overwhelmingly superior to Japan's. By 1943, the U.S. produced almost 100,000 aircraft a year, compared to Japan's production of 70,000 for the entire war. By the middle of 1944, the U.S. had almost a hundred aircraft carriers in the Pacific, far more than Japan's twenty-five for the entire war. In February 1945, Prince Fumimaro Konoe advised Emperor Hirohito that defeat was inevitable, and urged him to abdicate. Even before the surrender of Nazi Germany on May 8, 1945, plans were underway for the largest operation of the Pacific War, Operation Downfall, the Allied invasion of Japan. The operation had two parts: Operation Olympic and Operation Coronet. Set to begin in October 1945, Olympic involved a series of landings by the U.S.
Sixth Army intended to capture the southern third of the southernmost main Japanese island, Kyūshū. Operation Olympic was to be followed in March 1946 by Operation Coronet, the capture of the Kantō Plain, near Tokyo on the main Japanese island of Honshū, by the U.S. First, Eighth and Tenth Armies, as well as a Commonwealth Corps made up of Australian, British and Canadian divisions. The target date was chosen to allow Olympic to complete its objectives, for troops to be redeployed from Europe, and for the Japanese winter to pass. Japan's geography made this invasion plan obvious to the Japanese; they were able to predict the Allied invasion plans accurately and thus adjust their defensive plan, Operation Ketsugō, accordingly. The Japanese planned an all-out defense of Kyūshū, with little left in reserve for any subsequent defense operations. Four veteran divisions were withdrawn from the Kwantung Army in Manchuria in March 1945 to strengthen the forces in Japan, and 45 new divisions were activated between February and May 1945. Most were immobile formations for coastal defense, but 16 were high-quality mobile divisions. In all, there were 2.3 million Japanese Army troops prepared to defend the home islands, backed by a civilian militia of 28 million men and women. Casualty predictions varied widely, but were extremely high. The Vice Chief of the Imperial Japanese Navy General Staff, Vice Admiral Takijirō Ōnishi, predicted up to 20 million Japanese deaths. A study from June 15, 1945, by the Joint War Plans Committee, who provided planning information to the Joint Chiefs of Staff, estimated that Olympic would result in between 130,000 and 220,000 U.S. casualties, of which U.S. dead would be in the range from 25,000 to 46,000. Delivered on June 15, 1945, after insight gained from the Battle of Okinawa, the study noted Japan's inadequate defenses due to the very effective sea blockade and the American firebombing campaign.
The Chief of Staff of the United States Army, General of the Army George Marshall, and the Army Commander in Chief in the Pacific, General of the Army Douglas MacArthur, signed documents agreeing with the Joint War Plans Committee estimate. The Americans were alarmed by the Japanese buildup, which was accurately tracked through Ultra intelligence. Secretary of War Henry L. Stimson was sufficiently concerned about high American estimates of probable casualties to commission his own study by Quincy Wright and William Shockley. Wright and Shockley spoke with Colonels James McCormack and Dean Rusk, and examined casualty forecasts by Michael E. DeBakey and Gilbert Beebe. Wright and Shockley estimated the invading Allies would suffer between 1.7 and 4 million casualties in such a scenario, of whom between 400,000 and 800,000 would be dead, while Japanese fatalities would have been around 5 to 10 million. Marshall began contemplating the use of a weapon that was "readily available and which assuredly can decrease the cost in American lives '': poison gas. Quantities of phosgene, mustard gas, tear gas and cyanogen chloride were moved to Luzon from stockpiles in Australia and New Guinea in preparation for Operation Olympic, and MacArthur ensured that Chemical Warfare Service units were trained in their use. Consideration was also given to using biological weapons against Japan. While the United States had developed plans for an air campaign against Japan prior to the Pacific War, the capture of Allied bases in the western Pacific in the first weeks of the conflict meant that this offensive did not begin until mid-1944 when the long - ranged Boeing B - 29 Superfortress became ready for use in combat. Operation Matterhorn involved India - based B - 29s staging through bases around Chengdu in China to make a series of raids on strategic targets in Japan. 
This effort failed to achieve the strategic objectives that its planners had intended, largely because of logistical problems, the bomber 's mechanical difficulties, the vulnerability of Chinese staging bases, and the extreme range required to reach key Japanese cities. Brigadier General Haywood S. Hansell determined that Guam, Tinian, and Saipan in the Mariana Islands would better serve as B - 29 bases, but they were in Japanese hands. Strategies were shifted to accommodate the air war, and the islands were captured between June and August 1944. Air bases were developed, and B - 29 operations commenced from the Marianas in October 1944. These bases were easily resupplied by cargo ships. The XXI Bomber Command began missions against Japan on November 18, 1944. The early attempts to bomb Japan from the Marianas proved just as ineffective as the China - based B - 29s had been. Hansell continued the practice of conducting so - called high - altitude precision bombing, aimed at key industries and transportation networks, even after these tactics had not produced acceptable results. These efforts proved unsuccessful due to logistical difficulties with the remote location, technical problems with the new and advanced aircraft, unfavorable weather conditions, and enemy action. Hansell 's successor, Major General Curtis LeMay, assumed command in January 1945 and initially continued to use the same precision bombing tactics, with equally unsatisfactory results. The attacks initially targeted key industrial facilities but much of the Japanese manufacturing process was carried out in small workshops and private homes. Under pressure from United States Army Air Forces (USAAF) headquarters in Washington, LeMay changed tactics and decided that low - level incendiary raids against Japanese cities were the only way to destroy their production capabilities, shifting from precision bombing to area bombardment with incendiaries. 
Like most strategic bombing during World War II, the aim of the air offensive against Japan was to destroy the enemy 's war industries, kill or disable civilian employees of these industries, and undermine civilian morale. Over the next six months, the XXI Bomber Command under LeMay firebombed 67 Japanese cities. The firebombing of Tokyo, codenamed Operation Meetinghouse, on March 9 -- 10 killed an estimated 100,000 people and destroyed 16 square miles (41 km²) of the city and 267,000 buildings in a single night. It was the deadliest bombing raid of the war, at a cost of 20 B - 29s shot down by flak and fighters. By May, 75 % of bombs dropped were incendiaries designed to burn down Japan 's "paper cities ''. By mid-June, Japan 's six largest cities had been devastated. The end of the fighting on Okinawa that month provided airfields even closer to the Japanese mainland, allowing the bombing campaign to be further escalated. Aircraft flying from Allied aircraft carriers and the Ryukyu Islands also regularly struck targets in Japan during 1945 in preparation for Operation Downfall. Firebombing switched to smaller cities, with populations ranging from 60,000 to 350,000. According to Yuki Tanaka, the U.S. fire - bombed over a hundred Japanese towns and cities. These raids were devastating. The Japanese military was unable to stop the Allied attacks and the country 's civil defense preparations proved inadequate. Japanese fighters and antiaircraft guns had difficulty engaging bombers flying at high altitude. From April 1945, the Japanese interceptors also had to face American fighter escorts based on Iwo Jima and Okinawa. That month, the Imperial Japanese Army Air Service and Imperial Japanese Navy Air Service stopped attempting to intercept the air raids in order to preserve fighter aircraft to counter the expected invasion.
By mid-1945 the Japanese only occasionally scrambled aircraft to intercept individual B - 29s conducting reconnaissance sorties over the country, in order to conserve supplies of fuel. In July 1945, the Japanese had 1,156,000 US barrels (137,800,000 l) of avgas stockpiled for the invasion of Japan. About 604,000 US barrels (72,000,000 l) had been consumed in the home islands area in April, May and June 1945. While the Japanese military decided to resume attacks on Allied bombers from late June, by this time there were too few operational fighters available for this change of tactics to hinder the Allied air raids. The discovery of nuclear fission by German chemists Otto Hahn and Fritz Strassmann in 1938, and its theoretical explanation by Lise Meitner and Otto Frisch, made the development of an atomic bomb a theoretical possibility. Fears that a German atomic bomb project would develop atomic weapons first, especially among scientists who were refugees from Nazi Germany and other fascist countries, were expressed in the Einstein - Szilard letter. This prompted preliminary research in the United States in late 1939. Progress was slow until the arrival of the British MAUD Committee report in late 1941, which indicated that only 5 to 10 kilograms of isotopically enriched uranium - 235 were needed for a bomb instead of tons of natural uranium and a neutron moderator like heavy water. The 1943 Quebec Agreement merged the nuclear weapons projects of the United Kingdom and Canada, Tube Alloys and the Montreal Laboratory, with the Manhattan Project, under the direction of Major General Leslie R. Groves, Jr., of the U.S. Army Corps of Engineers. Groves appointed J. Robert Oppenheimer to organize and head the project 's Los Alamos Laboratory in New Mexico, where bomb design work was carried out. Two types of bombs were eventually developed, both named by Robert Serber. 
Little Boy was a gun - type fission weapon that used uranium - 235, a rare isotope of uranium separated at the Clinton Engineer Works at Oak Ridge, Tennessee. The other, known as a Fat Man device, was a more powerful and efficient, but more complicated, implosion - type nuclear weapon that used plutonium created in nuclear reactors at Hanford, Washington. There was a Japanese nuclear weapon program, but it lacked the human, mineral and financial resources of the Manhattan Project, and never made much progress towards developing an atomic bomb. The 509th Composite Group was constituted on December 9, 1944, and activated on December 17, 1944, at Wendover Army Air Field, Utah, commanded by Colonel Paul Tibbets. Tibbets was assigned to organize and command a combat group to develop the means of delivering an atomic weapon against targets in Germany and Japan. Because the flying squadrons of the group consisted of both bomber and transport aircraft, the group was designated as a "composite '' rather than a "bombardment '' unit. Working with the Manhattan Project at Los Alamos, Tibbets selected Wendover for his training base over Great Bend, Kansas, and Mountain Home, Idaho, because of its remoteness. Each bombardier completed at least 50 practice drops of inert or conventional explosive pumpkin bombs and Tibbets declared his group combat - ready. The 509th Composite Group had an authorized strength of 225 officers and 1,542 enlisted men, almost all of whom eventually deployed to Tinian. In addition to its authorized strength, the 509th had attached to it on Tinian 51 civilian and military personnel from Project Alberta, known as the 1st Technical Detachment. The 509th Composite Group 's 393d Bombardment Squadron was equipped with 15 Silverplate B - 29s. 
These aircraft were specially adapted to carry nuclear weapons, and were equipped with fuel - injected engines, Curtiss Electric reversible - pitch propellers, pneumatic actuators for rapid opening and closing of bomb bay doors and other improvements. The ground support echelon of the 509th Composite Group moved by rail on April 26, 1945, to its port of embarkation at Seattle, Washington. On May 6 the support elements sailed on the SS Cape Victory for the Marianas, while group materiel was shipped on the SS Emile Berliner. The Cape Victory made brief port calls at Honolulu and Eniwetok but the passengers were not permitted to leave the dock area. An advance party of the air echelon, consisting of 29 officers and 61 enlisted men, flew by C - 54 to North Field on Tinian between May 15 and May 22. There were also two representatives from Washington, D.C., Brigadier General Thomas Farrell, the deputy commander of the Manhattan Project, and Rear Admiral William R. Purnell of the Military Policy Committee, who were on hand to decide higher policy matters on the spot. Along with Captain William S. Parsons, the commander of Project Alberta, they became known as the "Tinian Joint Chiefs ''. In April 1945, Marshall asked Groves to nominate specific targets for bombing for final approval by himself and Stimson. Groves formed a Target Committee, chaired by himself, that included Farrell, Major John A. Derry, Colonel William P. Fisher, Joyce C. Stearns and David M. Dennison from the USAAF; and scientists John von Neumann, Robert R. Wilson and William Penney from the Manhattan Project. The Target Committee met in Washington on April 27; at Los Alamos on May 10, where it was able to talk to the scientists and technicians there; and finally in Washington on May 28, where it was briefed by Tibbets and Commander Frederick Ashworth from Project Alberta, and the Manhattan Project 's scientific advisor, Richard C. Tolman.
The Target Committee nominated five targets: Kokura, the site of one of Japan 's largest munitions plants; Hiroshima, an embarkation port and industrial center that was the site of a major military headquarters; Yokohama, an urban center for aircraft manufacture, machine tools, docks, electrical equipment and oil refineries; Niigata, a port with industrial facilities including steel and aluminum plants and an oil refinery; and Kyoto, a major industrial center. The target selection was subject to several criteria. These cities were largely untouched during the nightly bombing raids, and the Army Air Forces agreed to leave them off the target list so that an accurate assessment of the damage caused by the atomic bombs could be made. Hiroshima was described as "an important army depot and port of embarkation in the middle of an urban industrial area. It is a good radar target and it is such a size that a large part of the city could be extensively damaged. There are adjacent hills which are likely to produce a focusing effect which would considerably increase the blast damage. Due to rivers it is not a good incendiary target. '' The Target Committee stated that "It was agreed that psychological factors in the target selection were of great importance. Two aspects of this are (1) obtaining the greatest psychological effect against Japan and (2) making the initial use sufficiently spectacular for the importance of the weapon to be internationally recognized when publicity on it is released... Kyoto has the advantage of the people being more highly intelligent and hence better able to appreciate the significance of the weapon. Hiroshima has the advantage of being such a size and with possible focussing from nearby mountains that a large fraction of the city may be destroyed. The Emperor 's palace in Tokyo has a greater fame than any other target but is of least strategic value. '' Edwin O. Reischauer, a Japan expert for the U.S.
Army Intelligence Service, was incorrectly said to have prevented the bombing of Kyoto. In his autobiography, Reischauer specifically refuted this claim: ... the only person deserving credit for saving Kyoto from destruction is Henry L. Stimson, the Secretary of War at the time, who had known and admired Kyoto ever since his honeymoon there several decades earlier. On May 30, Stimson asked Groves to remove Kyoto from the target list due to its historical, religious and cultural significance, but Groves pointed to its military and industrial significance. Stimson then approached President Harry S. Truman about the matter. Truman agreed with Stimson, and Kyoto was temporarily removed from the target list. Groves attempted to restore Kyoto to the target list in July, but Stimson remained adamant. On July 25, Nagasaki was put on the target list in place of Kyoto. It was a major military port, one of Japan 's largest shipbuilding and repair centers, and an important producer of naval ordnance. In early May 1945, the Interim Committee was created by Stimson at the urging of leaders of the Manhattan Project and with the approval of Truman to advise on matters pertaining to nuclear energy. During the meetings on May 31 and June 1, scientist Ernest Lawrence had suggested giving the Japanese a non-combat demonstration. Arthur Compton later recalled that: It was evident that everyone would suspect trickery. If a bomb were exploded in Japan with previous notice, the Japanese air power was still adequate to give serious interference. An atomic bomb was an intricate device, still in the developmental stage. Its operation would be far from routine. If during the final adjustments of the bomb the Japanese defenders should attack, a faulty move might easily result in some kind of failure. Such an end to an advertised demonstration of power would be much worse than if the attempt had not been made. 
It was now evident that when the time came for the bombs to be used we should have only one of them available, followed afterwards by others at all - too - long intervals. We could not afford the chance that one of them might be a dud. If the test were made on some neutral territory, it was hard to believe that Japan 's determined and fanatical military men would be impressed. If such an open test were made first and failed to bring surrender, the chance would be gone to give the shock of surprise that proved so effective. On the contrary, it would make the Japanese ready to interfere with an atomic attack if they could. Though the possibility of a demonstration that would not destroy human lives was attractive, no one could suggest a way in which it could be made so convincing that it would be likely to stop the war. The possibility of a demonstration was raised again in the Franck Report issued by physicist James Franck on June 11 and the Scientific Advisory Panel rejected his report on June 16, saying that "we can propose no technical demonstration likely to bring an end to the war; we see no acceptable alternative to direct military use. '' Franck then took the report to Washington, D.C., where the Interim Committee met on June 21 to re-examine its earlier conclusions; but it reaffirmed that there was no alternative to the use of the bomb on a military target. Like Compton, many U.S. officials and scientists argued that a demonstration would sacrifice the shock value of the atomic attack, and the Japanese could deny the atomic bomb was lethal, making the mission less likely to produce surrender. Allied prisoners of war might be moved to the demonstration site and be killed by the bomb. They also worried that the bomb might be a dud since the Trinity test was of a stationary device, not an air - dropped bomb. 
In addition, although more bombs were in production, only two would be available at the start of August, and they cost billions of dollars, so using one for a demonstration would be expensive. For several months, the U.S. had warned civilians of potential air raids by dropping more than 63 million leaflets across Japan. Many Japanese cities suffered terrible damage from aerial bombings; some were as much as 97 % destroyed. LeMay thought that leaflets would increase the psychological impact of bombing, and reduce the international stigma of area - bombing cities. Even with the warnings, Japanese opposition to the war remained ineffective. In general, the Japanese regarded the leaflet messages as truthful, with many Japanese choosing to leave major cities. The leaflets caused such concern that the government ordered the arrest of anyone caught in possession of a leaflet. Leaflet texts were prepared by recent Japanese prisoners of war because they were thought to be the best choice "to appeal to their compatriots ''. In preparation for dropping an atomic bomb on Hiroshima, the Oppenheimer - led Scientific Panel of the Interim Committee decided against a demonstration bomb and against a special leaflet warning. Those decisions were implemented because of the uncertainty of a successful detonation and also because of the wish to maximize shock in the leadership. No warning was given to Hiroshima that a new and much more destructive bomb was going to be dropped. Various sources gave conflicting information about when the last leaflets were dropped on Hiroshima prior to the atomic bomb. Robert Jay Lifton wrote that it was July 27, and Theodore H. McNelly wrote that it was July 30. The USAAF history noted that eleven cities were targeted with leaflets on July 27, but Hiroshima was not one of them, and there were no leaflet sorties on July 30. Leaflet sorties were undertaken on August 1 and August 4. 
Hiroshima may have been leafleted in late July or early August, as survivor accounts talk about a delivery of leaflets a few days before the atomic bomb was dropped. Three versions of a leaflet were printed, each listing 11 or 12 cities targeted for firebombing; 33 cities were listed in total. The leaflet 's Japanese text warned "... we can not promise that only these cities will be among those attacked... '', but Hiroshima was not listed. Under the Quebec Agreement with the United Kingdom, nuclear weapons would not be used against another country without mutual consent. Stimson therefore had to obtain British permission. A meeting of the Combined Policy Committee was held at the Pentagon on July 4, 1945. Field Marshal Sir Henry Maitland Wilson announced that the British government concurred with the use of nuclear weapons against Japan, which would be officially recorded as a decision of the Combined Policy Committee. As the release of information to third parties was also controlled by the Quebec Agreement, discussion then turned to what scientific details would be revealed in the press announcement of the bombing. The meeting also considered what Truman could reveal to Joseph Stalin, the leader of the Soviet Union, at the upcoming Potsdam Conference, as this also required British concurrence. Orders for the attack were issued to General Carl Spaatz on July 25 under the signature of General Thomas T. Handy, the acting Chief of Staff, since Marshall was at the Potsdam Conference with Truman. That day, Truman noted in his diary: This weapon is to be used against Japan between now and August 10th. I have told the Sec. of War, Mr. Stimson, to use it so that military objectives and soldiers and sailors are the target and not women and children. Even if the Japs are savages, ruthless, merciless and fanatic, we as the leader of the world for the common welfare can not drop that terrible bomb on the old capital (Kyoto) or the new (Tokyo).
He and I are in accord. The target will be a purely military one. The July 16 success of the Trinity Test in the New Mexico desert exceeded expectations. On July 26, Allied leaders issued the Potsdam Declaration, which outlined the terms of surrender for Japan. The declaration was presented as an ultimatum and stated that without a surrender, the Allies would attack Japan, resulting in "the inevitable and complete destruction of the Japanese armed forces and just as inevitably the utter devastation of the Japanese homeland ''. The atomic bomb was not mentioned in the communiqué. On July 28, Japanese papers reported that the declaration had been rejected by the Japanese government. That afternoon, Prime Minister Suzuki Kantarō declared at a press conference that the Potsdam Declaration was no more than a rehash (yakinaoshi) of the Cairo Declaration and that the government intended to ignore it (mokusatsu, "kill by silence ''). The statement was taken by both Japanese and foreign papers as a clear rejection of the declaration. Emperor Hirohito, who was waiting for a Soviet reply to non-committal Japanese peace feelers, made no move to change the government position. Japan 's willingness to surrender remained conditional on the preservation of the kokutai (Imperial institution and national polity), assumption by the Imperial Headquarters of responsibility for disarmament and demobilization, no occupation of the Japanese Home Islands, Korea or Formosa, and delegation of the punishment of war criminals to the Japanese government. At Potsdam, Truman agreed to a request from Winston Churchill that Britain be represented when the atomic bomb was dropped. William Penney and Group Captain Leonard Cheshire were sent to Tinian, but found that LeMay would not let them accompany the mission. All they could do was send a strongly worded signal to Wilson. The Little Boy bomb, except for the uranium payload, was ready at the beginning of May 1945. 
There were two uranium - 235 components, a hollow cylindrical projectile and a cylindrical target insert. The projectile was completed on June 15, and the target insert on July 24. The projectile and eight bomb pre-assemblies (partly assembled bombs without the powder charge and fissile components) left Hunters Point Naval Shipyard, California, on July 16 aboard the cruiser USS Indianapolis, and arrived on Tinian on July 26. The target insert followed by air on July 30, accompanied by Commander Francis Birch from Project Alberta. Responding to concerns expressed by the 509th Composite Group about the possibility of a B - 29 crashing on takeoff, Birch had modified the Little Boy design to incorporate a removable breech plug that would permit the bomb to be armed in flight. The first plutonium core, along with its polonium - beryllium urchin initiator, was transported in the custody of Project Alberta courier Raemer Schreiber in a magnesium field carrying case designed for the purpose by Philip Morrison. Magnesium was chosen because it does not act as a tamper. The core departed from Kirtland Army Air Field on a C - 54 transport aircraft of the 509th Composite Group 's 320th Troop Carrier Squadron on July 26, and arrived at North Field July 28. Three Fat Man high - explosive pre-assemblies, designated F31, F32, and F33, were picked up at Kirtland on July 28 by three B - 29s, two from the 393d Bombardment Squadron plus one from the 216th Army Air Force Base Unit, and transported to North Field, arriving on August 2. At the time of its bombing, Hiroshima was a city of both industrial and military significance. A number of military units were located nearby, the most important of which was the headquarters of Field Marshal Shunroku Hata 's Second General Army, which commanded the defense of all of southern Japan, and was located in Hiroshima Castle. 
Hata 's command consisted of some 400,000 men, most of whom were on Kyushu where an Allied invasion was correctly anticipated. Also present in Hiroshima were the headquarters of the 59th Army, the 5th Division and the 224th Division, a recently formed mobile unit. The city was defended by five batteries of 7 - cm and 8 - cm (2.8 and 3.1 inch) anti-aircraft guns of the 3rd Anti-Aircraft Division, including units from the 121st and 122nd Anti-Aircraft Regiments and the 22nd and 45th Separate Anti-Aircraft Battalions. In total, an estimated 40,000 Japanese military personnel were stationed in the city. Hiroshima was a supply and logistics base for the Japanese military. The city was a communications center, a key port for shipping, and an assembly area for troops. It was a beehive of war industry, manufacturing parts for planes and boats, for bombs, rifles, and handguns. The center of the city contained several reinforced concrete buildings and lighter structures. Outside the center, the area was congested by a dense collection of small timber workshops set among Japanese houses. A few larger industrial plants lay near the outskirts of the city. The houses were constructed of timber with tile roofs, and many of the industrial buildings were also built around timber frames. The city as a whole was highly susceptible to fire damage. It was the second largest city in Japan after Kyoto that was still undamaged by air raids, primarily because it lacked the aircraft manufacturing industry that was the XXI Bomber Command 's priority target. On July 3, the Joint Chiefs of Staff placed it off limits to bombers, along with Kokura, Niigata and Kyoto. The population of Hiroshima had reached a peak of over 381,000 earlier in the war but prior to the atomic bombing, the population had steadily decreased because of a systematic evacuation ordered by the Japanese government. At the time of the attack, the population was approximately 340,000 -- 350,000. 
Residents wondered why Hiroshima had been spared destruction by firebombing. Some speculated that the city was to be saved for U.S. occupation headquarters; others thought perhaps their relatives in Hawaii and California had petitioned the U.S. government to avoid bombing Hiroshima. More realistic city officials had ordered buildings torn down to create long, straight firebreaks. These continued to be expanded and extended up to the morning of August 6, 1945. Hiroshima was the primary target of the first nuclear bombing mission on August 6, with Kokura and Nagasaki as alternative targets. The 393d Bombardment Squadron B - 29 Enola Gay, named after Tibbets ' mother and piloted by Tibbets, took off from North Field, Tinian, about six hours ' flight time from Japan. Enola Gay was accompanied by two other B - 29s: The Great Artiste, commanded by Major Charles Sweeney, which carried instrumentation, and a then - nameless aircraft later called Necessary Evil, commanded by Captain George Marquardt, which served as the photography aircraft. After leaving Tinian, the aircraft made their way separately to Iwo Jima, where they rendezvoused at 05: 55 at 9,200 feet (2,800 m) and set course for Japan. The aircraft arrived over the target in clear visibility at 31,060 feet (9,470 m). Parsons, who was in command of the mission, armed the bomb in flight to minimize the risks during takeoff. He had witnessed four B - 29s crash and burn at takeoff, and feared that a nuclear explosion would occur if a B - 29 crashed with an armed Little Boy on board. His assistant, Second Lieutenant Morris R. Jeppson, removed the safety devices 30 minutes before reaching the target area. During the night of August 5 -- 6, Japanese early warning radar detected the approach of numerous American aircraft headed for the southern part of Japan. Radar detected 65 bombers headed for Saga, 102 bound for Maebashi, 261 en route to Nishinomiya, 111 headed for Ube and 66 bound for Imabari.
An alert was given and radio broadcasting stopped in many cities, among them Hiroshima. The all - clear was sounded in Hiroshima at 00: 05. About an hour before the bombing, the air raid alert was sounded again, as Straight Flush flew over the city. It broadcast a short message which was picked up by Enola Gay. It read: "Cloud cover less than 3 / 10th at all altitudes. Advice: bomb primary. '' The all - clear was sounded over Hiroshima again at 07: 09. At 08: 09, Tibbets started his bomb run and handed control over to his bombardier, Major Thomas Ferebee. The release at 08: 15 (Hiroshima time) went as planned, and the Little Boy containing about 64 kg (141 lb) of uranium - 235 took 44.4 seconds to fall from the aircraft flying at about 31,000 feet (9,400 m) to a detonation height of about 1,900 feet (580 m) above the city. Enola Gay traveled 11.5 mi (18.5 km) before it felt the shock waves from the blast. Due to crosswind, the bomb missed the aiming point, the Aioi Bridge, by approximately 800 ft (240 m) and detonated directly over Shima Surgical Clinic. It released the energy equivalent of 16 ± 2 kilotons of TNT (67 TJ). The weapon was considered very inefficient, with only 1.7 % of its material fissioning. The radius of total destruction was about 1 mile (1.6 km), with resulting fires across 4.4 square miles (11 km²). Enola Gay stayed over the target area for two minutes and was ten miles away when the bomb detonated. Only Tibbets, Parsons, and Ferebee knew of the nature of the weapon; the others on the bomber were only told to expect a blinding flash and given black goggles. "It was hard to believe what we saw '', Tibbets told reporters, while Parsons said "the whole thing was tremendous and awe - inspiring... the men aboard with me gasped ' My God ' ''. He and Tibbets compared the shockwave to "a close burst of ack - ack fire ''. People on the ground reported a pika (ピカ) -- a brilliant flash of light -- followed by a don (ドン) -- a loud booming sound.
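The quoted fall time is consistent with elementary ballistics. The Python sketch below is an illustration, not from the source: it drops the bomb from the stated 31,000 ft release altitude to the stated 1,900 ft burst height while ignoring air resistance, so the idealized figure comes out slightly shorter than the reported 44.4 seconds, as expected, since drag lengthens the real fall.

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
FT_TO_M = 0.3048  # feet to meters

release_alt_ft = 31_000  # approximate release altitude (from the text)
burst_alt_ft = 1_900     # approximate detonation height (from the text)

# Height fallen between release and detonation, in meters
drop_m = (release_alt_ft - burst_alt_ft) * FT_TO_M

# Idealized vacuum free-fall time: t = sqrt(2h / g)
t_vacuum = math.sqrt(2 * drop_m / G)

print(f"drop height: {drop_m:.0f} m")
print(f"idealized fall time: {t_vacuum:.1f} s (reported: 44.4 s)")
```

The vacuum estimate lands a couple of seconds under the reported figure, which is the right direction of error for a calculation that neglects drag.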
Some 70,000 -- 80,000 people, or around 30 % of the population of Hiroshima, were killed by the blast and resultant firestorm, and another 70,000 were injured. Perhaps as many as 20,000 Japanese military personnel were killed. U.S. surveys estimated that 4.7 square miles (12 km²) of the city were destroyed. Japanese officials determined that 69 % of Hiroshima 's buildings were destroyed and another 6 -- 7 % damaged. Some of the reinforced concrete buildings in Hiroshima had been very strongly constructed because of the earthquake danger in Japan, and their framework did not collapse even though they were fairly close to the blast center. Since the bomb detonated in the air, the blast was directed more downward than sideways, which was largely responsible for the survival of the Prefectural Industrial Promotional Hall, now commonly known as the Genbaku (A-bomb) dome. This building was designed and built by the Czech architect Jan Letzel, and was only 150 m (490 ft) from ground zero. The ruin was named Hiroshima Peace Memorial and was made a UNESCO World Heritage Site in 1996 over the objections of the United States and China, which expressed reservations on the grounds that other Asian nations were the ones who suffered the greatest loss of life and property, and a focus on Japan lacked historical perspective. The bombing started fires that spread rapidly through timber and paper homes. As in other Japanese cities, the firebreaks proved ineffective. The fires gutted everything within a radius of 2 kilometers (1.2 mi). The air raid warning had been cleared at 07: 31, and many people were outside, going about their activities. Eizō Nomura was the closest known survivor, being in the basement of a reinforced concrete building (it remained as the Rest House after the war) only 170 meters (560 ft) from ground zero (the hypocenter) at the time of the attack. He died in 1982, aged 84. Akiko Takakura was among the closest survivors to the hypocenter of the blast.
She was in the solidly built Bank of Hiroshima only 300 meters (980 ft) from ground - zero at the time of the attack. Over 90 % of the doctors and 93 % of the nurses in Hiroshima were killed or injured -- most had been in the downtown area which received the greatest damage. The hospitals were destroyed or heavily damaged. Only one doctor, Terufumi Sasaki, remained on duty at the Red Cross Hospital. Nonetheless, by early afternoon, the police and volunteers had established evacuation centres at hospitals, schools and tram stations, and a morgue was established in the Asano library. Most elements of the Japanese Second General Army headquarters were at physical training on the grounds of Hiroshima Castle, barely 900 yards (820 m) from the hypocenter. The attack killed 3,243 troops on the parade ground. The communications room of Chugoku Military District Headquarters that was responsible for issuing and lifting air raid warnings was in a semi-basement in the castle. Yoshie Oka, a Hijiyama Girls High School student who had been mobilized to serve as a communications officer had just sent a message that the alarm had been issued for Hiroshima and neighboring Yamaguchi, when the bomb exploded. She used a special phone to inform Fukuyama Headquarters (some 100 kilometers (62 mi) away) that "Hiroshima has been attacked by a new type of bomb. The city is in a state of near - total destruction. '' Since Mayor Senkichi Awaya had been killed while eating breakfast with his son and granddaughter at the mayoral residence, Field Marshal Hata, who was only slightly wounded, took over the administration of the city, and coordinated relief efforts. Many of his staff had been killed or fatally wounded, including a Korean prince of the Joseon Dynasty, Yi Wu, who was serving as a lieutenant colonel in the Japanese Army. Hata 's senior surviving staff officer was the wounded Colonel Kumao Imoto, who acted as his chief of staff. 
Soldiers from the undamaged Hiroshima Ujina Harbor used suicide boats, intended to repel the American invasion, to collect the wounded and take them down the rivers to the military hospital at Ujina. Trucks and trains brought in relief supplies and evacuated survivors from the city. Twelve American airmen were imprisoned at the Chugoku Military Police Headquarters, about 1,300 feet (400 m) from the hypocenter of the blast. Most died instantly, although two were reported to have been executed by their captors, and two prisoners badly injured by the bombing were left next to the Aioi Bridge by the Kempei Tai, where they were stoned to death. Eight U.S. prisoners of war killed as part of the medical experiments program at Kyushu University were falsely reported by Japanese authorities as having been killed in the atomic blast as part of an attempted cover up. The Tokyo control operator of the Japan Broadcasting Corporation noticed that the Hiroshima station had gone off the air. He tried to re-establish his program by using another telephone line, but it too had failed. About 20 minutes later the Tokyo railroad telegraph center realized that the main line telegraph had stopped working just north of Hiroshima. From some small railway stops within 16 km (10 mi) of the city came unofficial and confused reports of a terrible explosion in Hiroshima. All these reports were transmitted to the headquarters of the Imperial Japanese Army General Staff. Military bases repeatedly tried to call the Army Control Station in Hiroshima. The complete silence from that city puzzled the General Staff; they knew that no large enemy raid had occurred and that no sizable store of explosives was in Hiroshima at that time. A young officer was instructed to fly immediately to Hiroshima, to land, survey the damage, and return to Tokyo with reliable information for the staff. It was felt that nothing serious had taken place and that the explosion was just a rumor. 
The staff officer went to the airport and took off for the southwest. After flying for about three hours, while still nearly 160 km (100 mi) from Hiroshima, he and his pilot saw a great cloud of smoke from the bomb. After circling the city in order to survey the damage, they landed south of the city, where the staff officer, after reporting to Tokyo, began to organize relief measures. Tokyo 's first indication that the city had been destroyed by a new type of bomb came from President Truman 's announcement of the strike, sixteen hours later. After the Hiroshima bombing, Truman issued a statement announcing the use of the new weapon. He stated, "We may be grateful to Providence '' that the German atomic bomb project had failed, and that the United States and its allies had "spent two billion dollars on the greatest scientific gamble in history -- and won ''. Truman then warned Japan: "If they do not now accept our terms, they may expect a rain of ruin from the air, the like of which has never been seen on this earth. Behind this air attack will follow sea and land forces in such numbers and power as they have not yet seen and with the fighting skill of which they are already well aware. '' This was a widely broadcast speech picked up by Japanese news agencies. The 50,000 - watt standard wave station on Saipan, the OWI radio station, broadcast a similar message to Japan every 15 minutes about Hiroshima, stating that more Japanese cities would face a similar fate in the absence of immediate acceptance of the terms of the Potsdam Declaration and emphatically urged civilians to evacuate major cities. Radio Japan, which continued to extol victory for Japan by never surrendering, informed the Japanese of the destruction of Hiroshima by a single bomb. Prime Minister Suzuki felt compelled to meet the Japanese press, to whom he reiterated his government 's commitment to ignore the Allies ' demands and fight on.
Soviet Foreign Minister Vyacheslav Molotov informed Tokyo of the Soviet Union 's unilateral abrogation of the Soviet -- Japanese Neutrality Pact on August 5. At two minutes past midnight on August 9, Tokyo time, Soviet infantry, armor, and air forces had launched the Manchurian Strategic Offensive Operation. Four hours later, word reached Tokyo of the Soviet Union 's official declaration of war. The senior leadership of the Japanese Army began preparations to impose martial law on the nation, with the support of Minister of War Korechika Anami, in order to stop anyone attempting to make peace. On August 7, a day after Hiroshima was destroyed, Dr. Yoshio Nishina and other atomic physicists arrived at the city, and carefully examined the damage. They then went back to Tokyo and told the cabinet that Hiroshima was indeed destroyed by a nuclear weapon. Admiral Soemu Toyoda, the Chief of the Naval General Staff, estimated that no more than one or two additional bombs could be readied, so they decided to endure the remaining attacks, acknowledging "there would be more destruction but the war would go on ''. American Magic codebreakers intercepted the cabinet 's messages. Purnell, Parsons, Tibbets, Spaatz, and LeMay met on Guam that same day to discuss what should be done next. Since there was no indication of Japan surrendering, they decided to proceed with dropping another bomb. Parsons said that Project Alberta would have it ready by August 11, but Tibbets pointed to weather reports indicating poor flying conditions on that day due to a storm, and asked if the bomb could be readied by August 9. Parsons agreed to try to do so. The city of Nagasaki had been one of the largest seaports in southern Japan, and was of great wartime importance because of its wide - ranging industrial activity, including the production of ordnance, ships, military equipment, and other war materials. 
The four largest companies in the city were Mitsubishi Shipyards, Electrical Shipyards, Arms Plant, and Steel and Arms Works, which employed about 90 % of the city 's labor force, and accounted for 90 % of the city 's industry. Although an important industrial city, Nagasaki had been spared from firebombing because its geography made it difficult to locate at night with AN / APQ - 13 radar. Unlike the other target cities, Nagasaki had not been placed off limits to bombers by the Joint Chiefs of Staff 's July 3 directive, and was bombed on a small scale five times. During one of these raids on August 1, a number of conventional high - explosive bombs were dropped on the city. A few hit the shipyards and dock areas in the southwest portion of the city, and several hit the Mitsubishi Steel and Arms Works. By early August, the city was defended by the 134th Anti-Aircraft Regiment of the 4th Anti-Aircraft Division with four batteries of 7 cm (2.8 in) anti-aircraft guns and two searchlight batteries. In contrast to Hiroshima, almost all of the buildings were of old - fashioned Japanese construction, consisting of timber or timber - framed buildings with timber walls (with or without plaster) and tile roofs. Many of the smaller industries and business establishments were also situated in buildings of timber or other materials not designed to withstand explosions. Nagasaki had been permitted to grow for many years without conforming to any definite city zoning plan; residences were erected adjacent to factory buildings and to each other almost as closely as possible throughout the entire industrial valley. On the day of the bombing, an estimated 263,000 people were in Nagasaki, including 240,000 Japanese residents, 10,000 Korean residents, 2,500 conscripted Korean workers, 9,000 Japanese soldiers, 600 conscripted Chinese workers, and 400 Allied prisoners of war in a camp to the north of Nagasaki. Responsibility for the timing of the second bombing was delegated to Tibbets. 
Scheduled for August 11 against Kokura, the raid was moved earlier by two days to avoid a five - day period of bad weather forecast to begin on August 10. Three bomb pre-assemblies had been transported to Tinian, labeled F - 31, F - 32, and F - 33 on their exteriors. On August 8, a dress rehearsal was conducted off Tinian by Sweeney using Bockscar as the drop airplane. Assembly F - 33 was expended testing the components and F - 31 was designated for the August 9 mission. At 03: 49 on the morning of August 9, 1945, Bockscar, flown by Sweeney 's crew, carried Fat Man, with Kokura as the primary target and Nagasaki the secondary target. The mission plan for the second attack was nearly identical to that of the Hiroshima mission, with two B - 29s flying an hour ahead as weather scouts and two additional B - 29s in Sweeney 's flight for instrumentation and photographic support of the mission. Sweeney took off with his weapon already armed but with the electrical safety plugs still engaged. During pre-flight inspection of Bockscar, the flight engineer notified Sweeney that an inoperative fuel transfer pump made it impossible to use 640 US gallons (2,400 l; 530 imp gal) of fuel carried in a reserve tank. This fuel would still have to be carried all the way to Japan and back, consuming still more fuel. Replacing the pump would take hours; moving the Fat Man to another aircraft might take just as long and was dangerous as well, as the bomb was live. Tibbets and Sweeney therefore elected to have Bockscar continue the mission. This time Penney and Cheshire were allowed to accompany the mission, flying as observers on the third plane, Big Stink, flown by the group 's operations officer, Major James I. Hopkins, Jr. Observers aboard the weather planes reported both targets clear. When Sweeney 's aircraft arrived at the assembly point for his flight off the coast of Japan, Big Stink failed to make the rendezvous. 
According to Cheshire, Hopkins was at varying heights including 9,000 feet (2,700 m) higher than he should have been, and was not flying tight circles over Yakushima as previously agreed with Sweeney and Captain Frederick C. Bock, who was piloting the support B - 29 The Great Artiste. Instead, Hopkins was flying 40 - mile (64 km) dogleg patterns. Though ordered not to circle longer than fifteen minutes, Sweeney continued to wait for Big Stink for forty minutes. Before leaving the rendezvous point, Sweeney consulted Ashworth, who was in charge of the bomb. As commander of the aircraft, Sweeney made the decision to proceed to the primary, the city of Kokura. After exceeding the original departure time limit by nearly a half - hour, Bockscar, accompanied by The Great Artiste, proceeded to Kokura, thirty minutes away. The delay at the rendezvous had resulted in clouds and drifting smoke over Kokura from fires started by a major firebombing raid by 224 B - 29s on nearby Yahata the previous day. Additionally, the Yahata Steel Works intentionally burned coal tar, to produce black smoke. The clouds and smoke resulted in 70 % of the area over Kokura being covered, obscuring the aiming point. Three bomb runs were made over the next 50 minutes, burning fuel and exposing the aircraft repeatedly to the heavy defenses around Kokura, but the bombardier was unable to drop visually. By the time of the third bomb run, Japanese antiaircraft fire was getting close, and Second Lieutenant Jacob Beser, who was monitoring Japanese communications, reported activity on the Japanese fighter direction radio bands. After three runs over the city, and with fuel running low because of the failed fuel pump, Bockscar and The Great Artiste headed for their secondary target, Nagasaki. 
Fuel consumption calculations made en route indicated that Bockscar had insufficient fuel to reach Iwo Jima and would be forced to divert to Okinawa, which had become entirely Allied - occupied territory only six weeks earlier. After initially deciding that if Nagasaki were obscured on their arrival the crew would carry the bomb to Okinawa and dispose of it in the ocean if necessary, Ashworth agreed with Sweeney 's suggestion that a radar approach would be used if the target was obscured. At about 07: 50 Japanese time, an air raid alert was sounded in Nagasaki, but the "all clear '' signal was given at 08: 30. When only two B - 29 Superfortresses were sighted at 10: 53, the Japanese apparently assumed that the planes were only on reconnaissance and no further alarm was given. A few minutes later at 11: 00, The Great Artiste dropped instruments attached to three parachutes. These instruments also contained an unsigned letter to Professor Ryokichi Sagane, a physicist at the University of Tokyo who studied with three of the scientists responsible for the atomic bomb at the University of California, Berkeley, urging him to tell the public about the danger involved with these weapons of mass destruction. The messages were found by military authorities but not turned over to Sagane until a month later. In 1949, one of the authors of the letter, Luis Alvarez, met with Sagane and signed the letter. At 11: 01, a last - minute break in the clouds over Nagasaki allowed Bockscar 's bombardier, Captain Kermit Beahan, to visually sight the target as ordered. The Fat Man weapon, containing a core of about 5 kg (11 lb) of plutonium, was dropped over the city 's industrial valley. It exploded 47 seconds later at 1,650 ± 33 ft (503 ± 10 m), above a tennis court, halfway between the Mitsubishi Steel and Arms Works in the south and the Nagasaki Arsenal in the north. 
This was nearly 3 km (1.9 mi) northwest of the planned hypocenter; the blast was confined to the Urakami Valley and a major portion of the city was protected by the intervening hills. The resulting explosion released the equivalent energy of 21 ± 2 kt (87.9 ± 8.4 TJ). Big Stink spotted the explosion from a hundred miles away, and flew over to observe. Bockscar flew on to Okinawa, arriving with only sufficient fuel for a single approach. Sweeney tried repeatedly to contact the control tower for landing clearance, but received no answer. He could see heavy air traffic landing and taking off from Yontan Airfield. Firing off every flare on board to alert the field to his emergency landing, the Bockscar came in fast, landing at 140 miles per hour (230 km / h) instead of the normal 120 miles per hour (190 km / h). The number two engine died from fuel starvation as he began the final approach. Touching down on only three engines midway down the landing strip, Bockscar bounced up into the air again for about 25 feet (7.6 m) before slamming back down hard. The heavy B - 29 slewed left and towards a row of parked B - 24 bombers before the pilots managed to regain control. Its reversible propellers were insufficient to slow the aircraft adequately, and with both pilots standing on the brakes, Bockscar made a swerving 90 - degree turn at the end of the runway to avoid running off it. A second engine died from fuel exhaustion before the plane came to a stop. Following the mission, there was confusion over the identification of the plane. The first eyewitness account by war correspondent William L. Laurence of The New York Times, who accompanied the mission aboard the aircraft piloted by Bock, reported that Sweeney was leading the mission in The Great Artiste. He also noted its "Victor '' number as 77, which was that of Bockscar. Laurence had interviewed Sweeney and his crew, and was aware that they referred to their airplane as The Great Artiste. 
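The stated energy release can be cross-checked against the yield figure using the standard TNT-equivalence convention of 1 kt of TNT = 4.184 TJ. A minimal sketch, assuming nothing beyond that conversion factor:

```python
# Convert the stated Fat Man yield to energy using the standard
# TNT-equivalence convention: 1 kiloton of TNT = 4.184 terajoules.
KT_TO_TJ = 4.184

yield_kt = 21.0        # estimated yield, kilotons
uncertainty_kt = 2.0   # stated uncertainty, kilotons

energy_tj = yield_kt * KT_TO_TJ            # 87.864, rounds to 87.9 TJ
uncertainty_tj = uncertainty_kt * KT_TO_TJ  # 8.368, rounds to 8.4 TJ

print(f"{energy_tj:.1f} ± {uncertainty_tj:.1f} TJ")  # 87.9 ± 8.4 TJ
```

This reproduces the figure of 21 ± 2 kt (87.9 ± 8.4 TJ) given above.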
Except for Enola Gay, none of the 393d 's B - 29s had yet had names painted on the noses, a fact which Laurence himself noted in his account. Unaware of the switch in aircraft, Laurence assumed Victor 77 was The Great Artiste, which was, in fact, Victor 89. Although the bomb was more powerful than the one used on Hiroshima, the effect was confined by hillsides to the narrow Urakami Valley. Of 7,500 Japanese employees who worked inside the Mitsubishi Munitions plant, including "mobilized '' students and regular workers, 6,200 were killed. Some 17,000 -- 22,000 others who worked in other war plants and factories in the city died as well. Casualty estimates for immediate deaths vary widely, ranging from 22,000 to 75,000. At least 35,000 -- 40,000 people were killed and 60,000 others injured. In the days and months following the explosion, more people died from their injuries. Because of the presence of undocumented foreign workers, and a number of military personnel in transit, there are great discrepancies in the estimates of total deaths by the end of 1945; a range of 39,000 to 80,000 can be found in various studies. In contrast to Hiroshima 's military death toll, only 150 Japanese soldiers were killed instantly, including thirty - six from the 134th AAA Regiment of the 4th AAA Division. At least eight known POWs died from the bombing and as many as 13 may have died, including a British prisoner of war, Royal Air Force Corporal Ronald Shaw, and seven Dutch POWs. One American POW, Joe Kieyoomia, was in Nagasaki at the time of the bombing but survived, reportedly having been shielded from the effects of the bomb by the concrete walls of his cell. There were 24 Australian POWs in Nagasaki, all of whom survived. The radius of total destruction was about 1 mi (1.6 km), followed by fires across the northern portion of the city to 2 mi (3.2 km) south of the bomb. About 58 % of the Mitsubishi Arms Plant was damaged, and about 78 % of the Mitsubishi Steel Works.
The Mitsubishi Electric Works suffered only 10 % structural damage as it was on the border of the main destruction zone. The Nagasaki Arsenal was destroyed in the blast. Many fires likewise burned following the bombing but, in contrast to Hiroshima, where sufficient fuel density was available, no firestorm developed in Nagasaki, as the damaged areas did not furnish enough fuel to generate the phenomenon. Instead, the ambient wind at the time pushed the fire spread along the valley. As in Hiroshima, the bombing badly dislocated the city 's medical facilities. A makeshift hospital was established at the Shinkozen Primary School, which served as the main medical centre. The trains were still running, and evacuated many victims to hospitals in nearby towns. A medical team from a naval hospital reached the city in the evening, and fire - fighting brigades from the neighboring towns assisted in fighting the fires. Takashi Nagai was a doctor working in the radiology department of Nagasaki Medical College Hospital. He received a serious injury that severed his right temporal artery, but joined the rest of the surviving medical staff in treating bombing victims. Groves expected to have another atomic bomb ready for use on August 19, with three more in September and a further three in October. On August 10, he sent a memorandum to Marshall in which he wrote that "the next bomb... should be ready for delivery on the first suitable weather after 17 or 18 August. '' Marshall endorsed the memo with the hand - written comment, "It is not to be released over Japan without express authority from the President '', something Truman had requested that day. This modified the previous order that the target cities were to be attacked with atomic bombs "as made ready ''.
There was already discussion in the War Department about conserving the bombs then in production for Operation Downfall, and Marshall suggested to Stimson that the remaining cities on the target list be spared attack with atomic bombs. Two more Fat Man assemblies were readied, and scheduled to leave Kirtland Field for Tinian on August 11 and 14, and Tibbets was ordered by LeMay to return to Albuquerque, New Mexico, to collect them. At Los Alamos, technicians worked 24 hours straight to cast another plutonium core. Although cast, it still needed to be pressed and coated, which would take until August 16. Therefore, it could have been ready for use on August 19. Unable to reach Marshall, Groves ordered on his own authority on August 13 that the core should not be shipped. Until August 9, Japan 's war council still insisted on its four conditions for surrender. The full cabinet met at 14: 30 on August 9, and spent most of the day debating surrender. Anami conceded that victory was unlikely, but argued in favour of continuing the war nonetheless. The meeting ended at 17: 30, with no decision having been reached. Suzuki went to the palace to report on the outcome of the meeting, where he met with Kōichi Kido, the Lord Keeper of the Privy Seal of Japan. Kido informed him that the emperor had agreed to hold an imperial conference, and gave a strong indication that the emperor would consent to surrender on condition that kokutai be preserved. A second cabinet meeting was held at 18: 00. Only four ministers supported Anami 's position of adhering to the four conditions, but since cabinet decisions had to be unanimous, no decision was reached before it ended at 22: 00. Calling an imperial conference required the signatures of the prime minister and the two service chiefs, but the Chief Cabinet Secretary Hisatsune Sakomizu had already obtained signatures from Toyoda and General Yoshijirō Umezu in advance, and he reneged on his promise to inform them if a meeting was to be held.
The meeting commenced at 23: 50. No consensus had emerged by 02: 00, but the emperor gave his "sacred decision '', authorizing the Foreign Minister, Shigenori Tōgō, to notify the Allies that Japan would accept their terms on one condition, that the declaration "does not comprise any demand which prejudices the prerogatives of His Majesty as a Sovereign ruler. '' On August 12, the Emperor informed the imperial family of his decision to surrender. One of his uncles, Prince Asaka, then asked whether the war would be continued if the kokutai could not be preserved. Hirohito simply replied, "Of course. '' As the Allied terms seemed to leave intact the principle of the preservation of the Throne, Hirohito recorded on August 14 his capitulation announcement which was broadcast to the Japanese nation the next day despite a short rebellion by militarists opposed to the surrender. In his declaration, Hirohito referred to the atomic bombings, and did not explicitly mention the Soviets as a factor for surrender: Despite the best that has been done by every one -- the gallant fighting of military and naval forces, the diligence and assiduity of Our servants of the State and the devoted service of Our one hundred million people, the war situation has developed not necessarily to Japan 's advantage, while the general trends of the world have all turned against her interest. Moreover, the enemy now possesses a new and terrible weapon with the power to destroy many innocent lives and do incalculable damage. Should we continue to fight, not only would it result in an ultimate collapse and obliteration of the Japanese nation, but also it would lead to the total extinction of human civilization. Such being the case, how are we to save the millions of our subjects, or to atone ourselves before the hallowed spirits of our imperial ancestors? This is the reason why we have ordered the acceptance of the provisions of the joint declaration of the powers. 
In his "Rescript to the Soldiers and Sailors '' delivered on August 17, however, he stressed the impact of the Soviet invasion on his decision to surrender. On August 10, 1945, the day after the Nagasaki bombing, Yōsuke Yamahata, correspondent Higashi, and artist Yamada arrived in the city with orders to record the destruction for maximum propaganda purposes, Yamahata took scores of photographs, and on August 21, they appeared in Mainichi Shimbun, a popular Japanese newspaper. Leslie Nakashima filed the first personal account of the scene to appear in American newspapers. A version of his August 27 UPI article appeared in The New York Times on August 31. Wilfred Burchett was the first western journalist to visit Hiroshima after the bombing, arriving alone by train from Tokyo on September 2. His Morse code dispatch, "The Atomic Plague '', was printed by the Daily Express newspaper in London on September 5, 1945. Nakashima 's and Burchett 's reports were the first public reports to mention the effects of radiation and nuclear fallout -- radiation burns and radiation poisoning. Burchett 's reporting was unpopular with the U.S. military, who accused Burchett of being under the sway of Japanese propaganda, and suppressed a supporting story submitted by George Weller of the Chicago Daily News. Laurence dismissed the reports on radiation sickness as Japanese efforts to undermine American morale, ignoring his own account published one week earlier. A member of the U.S. Strategic Bombing Survey, Lieutenant Daniel McGovern, used a film crew to document the effects of the bombings in early 1946. The film crew shot 90,000 ft (27,000 m) of film, resulting in a three - hour documentary titled The Effects of the Atomic Bombs Against Hiroshima and Nagasaki. The documentary included images from hospitals showing the human effects of the bomb; it showed burned - out buildings and cars, and rows of skulls and bones on the ground. It was classified "secret '' for the next 22 years. 
Motion picture company Nippon Eigasha started sending cameramen to Nagasaki and Hiroshima in September 1945. On October 24, 1945, a U.S. military policeman stopped a Nippon Eigasha cameraman from continuing to film in Nagasaki. All Nippon Eigasha 's reels were confiscated by the American authorities, but they were requested by the Japanese government, and declassified. The public release of film footage of the city post-attack, and some research about the effects of the attack, was restricted during the occupation of Japan, but the Hiroshima - based magazine, Chugoku Bunka, in its first issue published on March 10, 1946, devoted itself to detailing the damage from the bombing. The book Hiroshima, written by Pulitzer Prize winner John Hersey, which was originally published in article form in the popular magazine The New Yorker, on August 31, 1946, is reported to have reached Tokyo in English by January 1947, and the translated version was released in Japan in 1949. It narrated the stories of the lives of six bomb survivors from immediately prior to, and months after, the dropping of the Little Boy bomb. Beginning in 1974, a compilation of drawings and artwork made by the survivors of the bombings began to be compiled, with completion in 1977, and under both book and exhibition format, it was titled The Unforgettable Fire. The bombing amazed Otto Hahn and other German atomic scientists, whom the British held at Farm Hall in Operation Epsilon. Hahn stated that he had not believed an atomic weapon "would be possible for another twenty years ''; Werner Heisenberg did not believe the news at first. Carl Friedrich von Weizsäcker said "I think it 's dreadful of the Americans to have done it. I think it is madness on their part '', but Heisenberg replied, "One could equally well say ' That 's the quickest way of ending the war ' ''. 
Hahn was grateful that the German project had not succeeded in developing "such an inhumane weapon ''; Karl Wirtz observed that even if it had, "we would have obliterated London but would still not have conquered the world, and then they would have dropped them on us ''. Hahn told the others, "Once I wanted to suggest that all uranium should be sunk to the bottom of the ocean ''. The Vatican agreed; L'Osservatore Romano expressed regret that the bomb 's inventors did not destroy the weapon for the benefit of humanity. Rev. Cuthbert Thicknesse, the Dean of St Albans, prohibited using St Albans Abbey for a thanksgiving service for the war 's end, calling the use of atomic weapons "an act of wholesale, indiscriminate massacre ''. Nonetheless, news of the atomic bombing was greeted enthusiastically in the U.S.; a poll in Fortune magazine in late 1945 showed a significant minority of Americans (23 %) wishing that more atomic bombs could have been dropped on Japan. The initial positive response was supported by the imagery presented to the public (mainly the powerful images of the mushroom cloud). During this time in America, it was a common practice for editors to keep graphic images of death out of films, magazines, and newspapers. Frequent estimates are that 140,000 people in Hiroshima (39 % of the population) and 70,000 people in Nagasaki (28 % of the population) died in 1945, though the number which died immediately as a result of exposure to the blast, heat, or due to radiation, is unknown. One Atomic Bomb Casualty Commission report discusses 6,882 people examined in Hiroshima, and 6,621 people examined in Nagasaki, who were largely within 2000 meters from the hypocenter, who suffered injuries from the blast and heat but died from complications frequently compounded by acute radiation syndrome (ARS), all within about 20 -- 30 days. 
The best known of these cases was Midori Naka, who was some 650 meters from the hypocenter at Hiroshima; she traveled to Tokyo, and her death on August 24, 1945 was the first to be officially certified as a result of radiation poisoning, or "Atomic bomb disease '' as it was referred to by many. It was unappreciated at the time, but the average radiation dose that will kill approximately 50 % of adults, the LD50, was approximately halved (that is, smaller doses were made more lethal) when the individual experienced concurrent blast or burn polytraumatic injuries. Conventional skin injuries that cover a large area frequently result in bacterial infection; the risk of sepsis and death is increased when a usually non-lethal radiation dose moderately suppresses the white blood cell count. In the spring of 1948, the Atomic Bomb Casualty Commission (ABCC) was established in accordance with a presidential directive from Truman to the National Academy of Sciences -- National Research Council to conduct investigations of the late effects of radiation among the survivors in Hiroshima and Nagasaki. In 1956, the ABCC published The Effect of Exposure to the Atomic Bombs on Pregnancy Termination in Hiroshima and Nagasaki. The ABCC became the Radiation Effects Research Foundation (RERF) on April 1, 1975. A binational organization run by both the United States and Japan, the RERF is still in operation today. Cancers do not emerge immediately after exposure to radiation; instead, radiation - induced cancer has a minimum latency period of some 5 years, and leukemia some 2 years, with incidence peaking around 6 -- 8 years later. Dr Jarrett Foley published the first major reports on the significantly increased incidence of the latter among survivors; almost all cases of leukemia over the following 50 years were in people exposed to more than 1 Gy.
In the 1987 Life Span Study, conducted by the Radiation Effects Research Foundation, a statistical excess of 507 cancers of undefined lethality was observed, in a manner strictly dependent on distance from the hypocenter, in 79,972 hibakusha who were still living between 1958 and 1987 and who took part in the study. As the epidemiological study continues with time, the RERF estimates that, from 1950 to 2000, 46 % of leukemia deaths (which may include that of Sadako Sasaki) and 11 % of solid cancers of unspecified lethality were likely due to radiation from the bombs or some other post-attack city effects, the statistical excess being 200 leukemia deaths and 1,700 solid cancers of undeclared lethality. Both of these statistics are derived from the observation of approximately half of the total survivors, strictly those who took part in the study. During the preimplantation period, that is, 1 -- 10 days following conception, intrauterine radiation exposure of "at least 0.2 Gy '' can cause complications of implantation and death of the human embryo. The number of miscarriages caused by the radiation from the bombings, during this radiosensitive period, is not known. One of the early studies conducted by the ABCC was on the outcome of pregnancies occurring in Hiroshima and Nagasaki, and in a control city, Kure, located 18 mi (29 km) south of Hiroshima, in order to discern the conditions and outcomes related to radiation exposure. James V. Neel led the study, which found that the overall number of birth defects was not significantly higher among the children of survivors who were pregnant at the time of the bombings. He also studied the longevity of the children who survived the bombings of Hiroshima and Nagasaki, reporting that between 90 and 95 percent were still living 50 years later.
The National Academy of Sciences raised the possibility, however, that Neel 's procedure did not filter the Kure population for possible radiation exposure, which could bias the results. Overall, a statistically insignificant increase in birth defects occurred directly after the bombings of Nagasaki and Hiroshima when the cities were taken as wholes. In terms of distance from the hypocenters, however, Neel and others noted that among approximately 50 individuals who were of an early gestational age at the time of the bombing and who were all within about 1 kilometre (0.62 mi) of the hypocenter, an increase in microencephaly and anencephaly was observed upon birth, with the incidence of these two particular malformations being nearly 3 times what was to be expected when compared with the control group in Kure, where approximately 20 cases were observed in a similar sample size. In 1985, Johns Hopkins University geneticist James F. Crow examined Neel 's research and confirmed that the number of birth defects was not significantly higher in Hiroshima and Nagasaki. Many members of the ABCC and its successor Radiation Effects Research Foundation (RERF) were still looking for possible birth defects among the survivors decades later, but found no evidence that they were significantly common among the survivors, or inherited in the children of survivors. Despite the small sample size of 1,600 to 1,800 persons who came forth as prenatally exposed at the time of the bombings, who were in close proximity to the two hypocenters and survived both the in utero absorption of a substantial dose of radiation and the malnourished post-attack environment, data from this cohort do support an increased risk of severe mental retardation (SMR), which was observed in some 30 individuals, SMR being a common outcome of the aforementioned microencephaly.
While a lack of statistical power, with just 30 individuals out of 1,800, prevents a definitive determination of a threshold point, the data collected suggest that the threshold intrauterine or fetal dose for SMR, during the most radiosensitive period of cognitive development, when there is the largest number of undifferentiated neural cells (8 to 15 weeks post-conception), begins at approximately "0.09 '' to "0.15 '' Gy, with the risk then increasing linearly to a 43 % rate of SMR when exposed to a fetal dose of 1 Gy at any point during these weeks of rapid neurogenesis. On either side of this radiosensitive window, however, none of those prenatally exposed to the bombings at a gestational age of less than 8 weeks, that is, prior to synaptogenesis, or of more than 26 weeks, "were observed to be mentally retarded '', the condition therefore being isolated to those of 8 -- 26 weeks of gestational age who absorbed more than approximately "0.09 '' to "0.15 '' Gy of prompt radiation energy. Examination of the prenatally exposed in terms of IQ performance and school records determined the beginning of a statistically significant reduction in both when exposed to greater than 0.1 to 0.5 Gy during the same gestational period of 8 -- 25 weeks. Outside this period, however, at less than 8 weeks and greater than 26 weeks after conception, "there is no evidence of a radiation - related effect on scholastic performance. '' The reporting of doses in terms of absorbed energy, in units of gray (Gy) and rad, rather than the biologically weighted sievert, is typical in both the SMR and cognitive performance data. 
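The piecewise dose - response described above (no observed effect outside the 8 -- 26 week window, a threshold of roughly 0.1 Gy, then a linear rise to a 43 % SMR rate at a fetal dose of 1 Gy) can be sketched as a small function. This is purely an illustrative reading of the figures quoted in the text, not an RERF model; the single 0.1 Gy threshold and the linear interpolation from threshold to 1 Gy are assumptions.

```python
def smr_risk(dose_gy, weeks_post_conception, threshold=0.1, risk_at_1gy=0.43):
    """Illustrative piecewise-linear sketch of the SMR dose-response
    figures quoted in the text (not an official RERF model).

    Risk is taken as zero outside the radiosensitive 8-26 week window
    or below the ~0.09-0.15 Gy threshold (0.1 Gy assumed here), then
    rising linearly to a 43% rate at a fetal dose of 1 Gy.
    """
    if not (8 <= weeks_post_conception <= 26):
        return 0.0  # SMR was isolated to the 8-26 week window
    if dose_gy <= threshold:
        return 0.0  # below the suggested threshold dose
    # linear increase from the threshold, reaching 43% at 1 Gy, capped there
    slope = risk_at_1gy / (1.0 - threshold)
    return min(risk_at_1gy, slope * (dose_gy - threshold))
```

Under these assumptions, a 1 Gy exposure in week 12 yields the quoted 43 % rate, while the same dose before week 8 or after week 26 yields no excess risk.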
The reported threshold dose variance between the two cities is suggested to be a manifestation of the difference between X-ray and neutron absorption: Little Boy emitted substantially more neutron flux, whereas the Baratol that surrounded the core of Fat Man filtered or shifted the absorbed neutron - radiation profile, so that the dose of radiation energy received in Nagasaki was mostly from exposure to X-rays / gamma rays, in contrast to the environment within 1,500 meters of the hypocenter at Hiroshima, where the in utero dose instead depended more on the absorption of neutrons, which have a higher biological effect per unit of energy absorbed. From the radiation dose reconstruction work, which was also informed by the 1962 BREN Tower Japanese - city analog, the estimated dosimetry at Hiroshima still has the largest uncertainty, as the Little Boy bomb design was never tested before deployment or afterward; the estimated radiation profile absorbed by individuals at Hiroshima therefore required greater reliance on calculations than on the Japanese soil, concrete and roof - tile measurements, which began to reach accurate levels, and thereby inform researchers, in the 1990s. Many other investigations into cognitive outcomes, such as schizophrenia as a result of prenatal exposure, have been conducted, with "no statistically significant linear relationship seen ''; there is a suggestion that in the most extremely exposed, those who survived within a kilometer or so of the hypocenters, a trend emerges akin to that seen in SMR, though the sample size is too small to determine it with any significance. The survivors of the bombings are called hibakusha (被爆 者, Japanese pronunciation: (çibakɯ̥ɕa)), a Japanese word that literally translates to "explosion - affected people ''. The Japanese government has recognized about 650,000 people as hibakusha. As of March 31, 2018, 154,859 were still alive, mostly in Japan. 
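The point that neutrons "have a higher biological effect per unit of energy absorbed" is what the sievert 's radiation weighting factor encodes: equivalent dose is the weighted sum H = Σ w_R · D_R over radiation types. A minimal sketch, using an illustrative neutron weight of 10 (actual ICRP weighting factors for neutrons vary with neutron energy, roughly 2.5 to 20; photons are weighted 1); the example doses are made up for illustration:

```python
def equivalent_dose_sv(absorbed_doses_gy, weights):
    """Equivalent dose H = sum over radiation types of w_R * D_R.

    absorbed_doses_gy: dict mapping radiation type -> absorbed dose (Gy)
    weights: dict mapping radiation type -> radiation weighting factor w_R
    """
    return sum(weights[r] * d for r, d in absorbed_doses_gy.items())

# Same 0.1 Gy of absorbed energy, split differently between radiation
# types: a photon-dominated profile (Nagasaki-like) versus a profile
# with a large neutron component (Hiroshima-like, within ~1,500 m).
photon_dominated = equivalent_dose_sv({"gamma": 0.10}, {"gamma": 1})
neutron_heavy = equivalent_dose_sv({"gamma": 0.05, "neutron": 0.05},
                                   {"gamma": 1, "neutron": 10})
```

Equal absorbed energy in Gy thus maps to a much larger equivalent dose in Sv when a substantial fraction is delivered by neutrons, which is why a dose threshold stated in Gy can differ between the two cities.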
The government of Japan recognizes about 1 % of these as having illnesses caused by radiation. The memorials in Hiroshima and Nagasaki contain lists of the names of the hibakusha who are known to have died since the bombings. Updated annually on the anniversaries of the bombings, as of August 2018, the memorials record the names of almost 495,000 hibakusha; 314,118 in Hiroshima and 179,226 in Nagasaki. If they discuss their background, hibakusha and their children were (and still are) victims of fear - based discrimination and exclusion when it comes to prospects of marriage or work, owing to public ignorance about the consequences of radiation sickness and of the fact that the low doses the majority received were less than those of a routine diagnostic x-ray; much of the public, however, persists in the belief that the hibakusha carry some hereditary or even contagious disease. This is despite the fact that no statistically demonstrable increase of birth defects / congenital malformations was found among the later conceived children born to survivors of the nuclear weapons used at Hiroshima and Nagasaki, or indeed has been found in the later conceived children of cancer survivors who had previously received radiotherapy. The surviving women of Hiroshima and Nagasaki who could conceive and who were exposed to substantial amounts of radiation went on to have children with no higher incidence of abnormalities / birth defects than the rate observed in the Japanese average. A study of the long - term psychological effects of the bombings on the survivors found that even 17 -- 20 years after the bombings had occurred survivors showed a higher prevalence of anxiety and somatization symptoms. Perhaps as many as 200 people from Hiroshima sought refuge in Nagasaki. The 2006 documentary Twice Survived: The Doubly Atomic Bombed of Hiroshima and Nagasaki documented 165 nijū hibakusha (lit. double explosion - affected people), nine of whom claimed to be in the blast zone in both cities. 
On March 24, 2009, the Japanese government officially recognized Tsutomu Yamaguchi as a double hibakusha. He was confirmed to be 3 km (1.9 mi) from ground zero in Hiroshima on a business trip when the bomb was detonated. He was seriously burnt on his left side and spent the night in Hiroshima. He arrived at his home city of Nagasaki on August 8, the day before the bombing, and he was exposed to residual radiation while searching for his relatives. He was the first officially recognized survivor of both bombings. He died on January 4, 2010, at the age of 93, after a battle with stomach cancer. During the war, Japan brought as many as 670,000 Korean conscripts to Japan to work as forced labor. About 5,000 -- 8,000 Koreans were killed in Hiroshima and another 1,500 -- 2,000 died in Nagasaki. For many years, Korean survivors had a difficult time fighting for the same recognition as hibakusha that was afforded to Japanese survivors, a situation which resulted in the denial of free health benefits to them in Japan. Most issues were eventually addressed in 2008 through lawsuits. Hiroshima was subsequently struck by Typhoon Ida on September 17, 1945. More than half the bridges were destroyed, and the roads and railroads were damaged, further devastating the city. The population increased from 83,000 soon after the bombing to 146,000 in February 1946. The city was rebuilt after the war, with help from the national government through the Hiroshima Peace Memorial City Construction Law passed in 1949. It provided financial assistance for reconstruction, along with donated land that was previously owned by the national government and used for military purposes. In 1949, a design was selected for the Hiroshima Peace Memorial Park. Hiroshima Prefectural Industrial Promotion Hall, the closest surviving building to the location of the bomb 's detonation, was designated the Hiroshima Peace Memorial. The Hiroshima Peace Memorial Museum was opened in 1955 in the Peace Park. 
Hiroshima also contains a Peace Pagoda, built in 1966 by Nipponzan - Myōhōji. Nagasaki was also rebuilt after the war, but was dramatically changed in the process. The pace of reconstruction was initially slow, and the first simple emergency dwellings were not provided until 1946. The focus of redevelopment was the replacement of war industries with foreign trade, shipbuilding and fishing. This was formally declared when the Nagasaki International Culture City Reconstruction Law was passed in May 1949. New temples were built, as well as new churches owing to an increase in the presence of Christianity. Some of the rubble was left as a memorial, such as a torii at Sannō Shrine, and an arch near ground zero. New structures were also raised as memorials, such as the Nagasaki Atomic Bomb Museum, which was opened in the mid-1990s. The role of the bombings in Japan 's surrender, and the ethical, legal, and military controversies surrounding the United States ' justification for them, have been the subject of scholarly and popular debate. On one hand, it has been argued that the bombings caused the Japanese surrender, thereby preventing casualties that an invasion of Japan would have involved. Stimson talked of saving one million casualties. The naval blockade might have starved the Japanese into submission without an invasion, but this would also have resulted in many more Japanese deaths. It has also been pointed out that the conventional bombing of Japan caused just as much destruction as the atomic bombs, if not more so. Indeed, Operation Meetinghouse, known as the Great Tokyo Air Raid in Japan, was the single most devastating air raid of the war, with a higher death toll than either of the two atomic bombings. 
Japanese historian Tsuyoshi Hasegawa argued that the entry of the Soviet Union into the war against Japan "played a much greater role than the atomic bombs in inducing Japan to surrender because it dashed any hope that Japan could terminate the war through Moscow 's mediation ''. A view among critics of the bombings, popularized by American historian Gar Alperovitz in 1965, is the idea of atomic diplomacy: that the United States used nuclear weapons in order to intimidate the Soviet Union in the early stages of the Cold War. Although not accepted by mainstream historians, this became the position in Japanese school history textbooks. Those who oppose the bombings give other reasons for their view, among them: a belief that atomic bombing is fundamentally immoral, that the bombings counted as war crimes, that they constituted state terrorism, and that they involved racism against and the dehumanization of the Japanese people. Like the way it began, the manner in which World War II ended cast a long shadow over international relations for decades to come. By June 30, 1946, there were components for only nine atomic bombs in the US arsenal, all Fat Man devices identical to the one used in the bombing of Nagasaki. The nuclear weapons were handmade devices, and a great deal of work remained to improve their ease of assembly, safety, reliability and storage before they were ready for production. There were also many improvements to their performance that had been suggested or recommended, but that had not been possible under the pressure of wartime development. The Chairman of the Joint Chiefs of Staff, Fleet Admiral William D. Leahy, had decried the use of the atomic bombs as adopting "an ethical standard common to the barbarians of the Dark Ages '', but in October 1947, he reported a military requirement for 400 bombs. The American monopoly on nuclear weapons lasted only four years before the Soviet Union detonated an atomic bomb in September 1949. 
The United States responded with the development of the hydrogen bomb, a nuclear weapon a thousand times as powerful as the bombs that devastated Hiroshima and Nagasaki. Such ordinary fission bombs would henceforth be regarded as small tactical nuclear weapons. By 1986, the United States would have 23,317 nuclear weapons, while the Soviet Union had 40,159. By 2017, nine nations had nuclear weapons, but Japan was not one of them. Japan reluctantly signed the Treaty on the Non-Proliferation of Nuclear Weapons in February 1970, but it still sheltered under the American nuclear umbrella. American nuclear weapons were stored on Okinawa, and sometimes in Japan itself, albeit in contravention of agreements between the two nations. Lacking the resources to fight the Soviet Union using conventional forces, the Western Alliance came to depend on the use of nuclear weapons to defend itself during the Cold War, a policy that became known in the 1950s as the New Look. In the decades after Hiroshima and Nagasaki, the United States would threaten to use its nuclear weapons many times.
song i am gonna to hire a wino to decorate our home
I 'm Gonna Hire a Wino to Decorate Our Home - Wikipedia "I 'm Gonna Hire a Wino to Decorate Our Home '' is a song written by Dewayne Blackwell and recorded by American country music artist David Frizzell. It was released in April 1982 as the first single from the album The Family 's Fine, But This One 's All Mine. "I 'm Gonna Hire a Wino to Decorate Our Home '' was David Frizzell 's only number one on the country chart as a solo artist. The single went to number one for one week and spent a total of 14 weeks in country music 's top 40. The song also became an unexpected mainstream pop hit in Canada, peaking at No. 20 on the RPM Top Singles chart (in addition to peaking at No. 3 on the magazine 's Top Country Tracks chart). The song talks of a wife who grows tired of her husband 's barhopping (and spending his entire paycheck doing so). But instead of ending the marriage, she comes up with a unique plan -- she decides to redecorate their house into a bar, and play the part of bartender / waitress as an inducement to get her husband to stay at home (and possibly bring his friends along with him, so they can spend their paychecks). While he recovers from his hangover the following morning, she will deposit the proceeds in their bank account.
where did the name hells kitchen come from
Hell 's Kitchen, Manhattan - wikipedia Hell 's Kitchen, also known as Clinton, is a neighborhood on the West Side of Midtown Manhattan in New York City. It is traditionally considered to be bordered by 34th Street to the south, 59th Street to the north, Eighth Avenue to the east, and the Hudson River to the west. The area provides transport, medical, and warehouse - infrastructure support to Midtown 's business district. Once a bastion of poor and working class Irish Americans, Hell 's Kitchen 's location in Midtown has changed its personality since the 1970s. Though Hell 's Kitchen 's gritty reputation had long held real - estate prices below those of most other areas of Manhattan, by 1969, the City Planning Commission 's Plan for New York City reported that development pressures related to its Midtown location were driving people of modest means from the area. Since the early 1990s, the area has been gentrifying, and rents have risen rapidly. Located close to both Broadway theaters and the Actors Studio training school, Hell 's Kitchen has long been a home to learning and practicing actors, and, in recent years, to young Wall Street financiers. The name "Hell 's Kitchen '' generally refers to the area from 34th to 59th Streets. Starting west of Eighth Avenue and north of 43rd Street, city zoning regulations generally limit buildings to six stories. As a result, most of the buildings are older, and are often walk - up apartments. For the most part, the neighborhood encompasses the ZIP codes 10019 and 10036. The post office for 10019 is called Radio City Station, the original name for Rockefeller Center on Sixth Avenue. To the east, the neighborhood overlaps the Times Square Theater District to the east at Eighth Avenue. On its southeast border, it overlaps the Garment District also on Eighth Avenue. Here, two landmarks reside -- the New Yorker Hotel and the dynamic Manhattan Center building (at the northwest corner of 34th Street and Eighth Avenue). 
Included in the transition area on Eighth Avenue are the Port Authority Bus Terminal at 42nd Street, the Pride of Manhattan Fire Station (from which 15 firefighters died at the World Trade Center), several theatres including Studio 54, the original soup stand of Seinfeld 's "The Soup Nazi '', and the Hearst Tower. The northern edge of Hell 's Kitchen borders the southern edge of the Upper West Side. 57th Street is the traditional boundary between the two neighborhoods. However, Hell 's Kitchen is often considered to extend further north to 59th Street, the southern edge of Central Park starting at Eighth Avenue, where the avenue names change; this neighborhood overlaps with the Upper West Side if this is considered to be Hell 's Kitchen 's northern boundary. Included in the 57th to 59th Street transition area are the Time Warner Center at Columbus Circle, Hudson Hotel, Mount Sinai West, where John Lennon died in 1980 after being shot, and John Jay College. The southern boundary is at Chelsea, but the two neighborhoods overlap and are often lumped together as the "West Side '' since they support the Midtown Manhattan business district. The traditional dividing line is 34th Street. The transition area just north of Madison Square Garden and Pennsylvania Station includes the Jacob K. Javits Convention Center. The western border of the neighborhood is the Hudson River at the Hudson River Park and West Side Highway. Several explanations exist for the original name. An early use of the phrase appears in a comment Davy Crockett made about another notorious Irish slum in Manhattan, Five Points. According to the Irish Cultural Society of the Garden City Area: When, in 1835, Davy Crockett said, "In my part of the country, when you meet an Irishman, you find a first - rate gentleman; but these are worse than savages; they are too mean to swab hell 's kitchen. '' He was referring to the Five Points. 
According to an article by Kirkley Greenwell, published online by the Hell 's Kitchen Neighborhood Association: No one can pin down the exact origin of the label, but some refer to a tenement on 54th Street as the first "Hell 's Kitchen. '' Another explanation points to an infamous building at 39th as the true original. A gang and a local dive took the name as well... a similar slum also existed in London and was known as Hell 's Kitchen. Local historian Mary Clark explained the name thus: ... first appeared in print on September 22, 1881 when a New York Times reporter went to the West 30s with a police guide to get details of a multiple murder there. He referred to a particularly infamous tenement at 39th Street and Tenth Avenue as "Hell 's Kitchen '' and said that the entire section was "probably the lowest and filthiest in the city. '' According to this version, 39th Street between 9th and 10th Avenues became known as Hell 's Kitchen and the name was later expanded to the surrounding streets. Another version ascribes the name 's origins to a German restaurant in the area known as Heil 's Kitchen, after its proprietors. But the most common version traces it to the story of "Dutch Fred the Cop '', a veteran policeman, who with his rookie partner, was watching a small riot on West 39th Street near Tenth Avenue. The rookie is supposed to have said, "This place is hell itself '', to which Fred replied, "Hell 's a mild climate. This is Hell 's Kitchen. '' Hell 's Kitchen has stuck as the most - used name of the neighborhood, even though real estate developers have offered alternatives of "Clinton '' and "Midtown West '', or even "the Mid-West ''. The "Clinton '' name, used by the municipality of New York City, originated in 1959 in an attempt to link the area to DeWitt Clinton Park at 52nd and Eleventh Avenue, named after the 19th century New York governor. 
On the island of Manhattan as it was when Europeans first saw it, the Great Kill formed from three small streams that united near present - day Tenth Avenue and 40th Street, and then wound through the low - lying Reed Valley, renowned for fish and waterfowl, to empty into the Hudson River at a deep bay on the river at the present 42nd Street. The name was retained in a tiny hamlet called Great Kill, which became a center for carriage - making, while the upland to the south and east became known as Longacre, the predecessor of Longacre Square (now Times Square). One of the large farms of the colonial era in this neighborhood was that of Andreas Hopper and his descendants, extending from today 's 48th Street nearly to 59th Street and from the river east to what is now Sixth Avenue. One of the Hopper farmhouses, built in 1752 for John Hopper the younger, stood near 53rd Street and Eleventh Avenue; christened "Rosevale '' for its extensive gardens, it was the home of the War of 1812 veteran, Gen. Garrit Hopper Striker, and lasted until 1896, when it was demolished. The site was purchased for the city and naturalistically landscaped by Samuel Parsons Jr. as DeWitt Clinton Park. In 1911 New York Hospital bought a full city block largely of the Hopper property, between 54th and 55th Streets, Eleventh and Twelfth Avenues. Beyond the railroad track, projecting into the river at 54th Street, was Mott 's Point, with an 18th - century Mott family house surrounded by gardens that was inhabited by members of the family until 1884 and survived until 1895. A lone surviving structure that dates from the time this area was open farmland and suburban villas is a pre-1800s carriage house that once belonged to a villa owned by former Vice President and New York State governor George Clinton, now in a narrow court behind 422 West 46th Street. 
From 1811 until it was officially de-mapped in 1857, the diminutive Bloomingdale Square was part of the city 's intended future; it extended from 53rd to 57th Streets between Eighth and Ninth Avenues. It was eliminated after the establishment of Central Park, and the name shifted to the junction of Broadway, West End Avenue, and 106th Street, now Straus Park. In 1825, the City purchased for $10 clear title to a right - of - way through John Leake Norton 's farm, "The Hermitage '', to lay out 42nd Street clear to the river. Before long, cattle ferried from Weehawken were being driven along the unpaved route to slaughterhouses on the East Side. Seventy acres of the Leakes ' (later the Nortons ') property, extending north from 42nd to 46th Street and from Broadway to the river, had been purchased before 1807 by John Jacob Astor and William Cutting, who held it before dividing it into building lots as the district became more suburban. There were multiple changes that helped Hell 's Kitchen integrate with New York City proper. The first was construction of the Hudson River Railroad, whose initial leg -- the 40 miles (64 km) to Peekskill -- was completed on September 29, 1849. By the end of 1849, it stretched to Poughkeepsie, and in 1851 it extended to Albany. The track ran at a steep grade up Eleventh Avenue, as far as 60th Street. The formerly rural riverfront was industrialized by businesses, such as tanneries, that used the river for shipping products and dumping waste. The neighborhood that would later be known as Hell 's Kitchen started forming in the southern part of the 22nd Ward in the mid-19th century. Irish immigrants -- mostly refugees from the Great Famine -- found work on the docks and railroad along the Hudson River and established shantytowns there. After the American Civil War, there was an influx of people who moved to New York City. The tenements that were built became overcrowded quickly. 
Many who lived in this congested, poverty - stricken area turned to gang life. Following Prohibition, implemented in 1919, the district 's many warehouses were ideal locations for bootleg distilleries for the rumrunners who controlled illicit liquor. At the start of the 20th century, the neighborhood was controlled by gangs, including the violent Gopher Gang led by One Lung Curran and later by Owney Madden. Early gangs, like the Hell 's Kitchen Gang, transformed into organized crime entities, around the same time that Owney Madden became one of the most powerful mobsters in New York. It became known as the "most dangerous area on the American Continent ''. After the repeal of Prohibition, many of the organized crime elements moved into other rackets, such as illegal gambling and union shakedowns. The postwar era was characterized by a flourishing waterfront, and longshoreman work was plentiful. By the end of the 1950s, however, the implementation of containerized shipping led to the decline of the West Side piers and many longshoremen found themselves out of work. In addition, construction of the Lincoln Tunnel, Lincoln Tunnel access roads, and the Port Authority Bus Terminal and ramps destroyed much of Hell 's Kitchen south of 41st Street. In 1959, an aborted rumble between rival Irish and Puerto Rican gangs led to the notorious "Capeman '' murders in which two innocent teenagers were killed. By 1965, Hell 's Kitchen was the home base of the Westies, an Irish mob aligned with the Gambino crime family. It was not until the early 1980s that widespread gentrification began to alter the demographics of the longtime working - class Irish American neighborhood. The 1980s also saw an end to the Westies ' reign of terror, when the gang lost all of its power after the RICO convictions of most of its principals in 1986. 
Although the neighborhood is immediately west of New York 's main business district, large - scale redevelopment has been kept in check for more than 40 years by strict zoning rules in a Special Clinton District designed to protect the neighborhood 's residents and its low - rise character. In part to qualify for federal aid, New York developed a comprehensive Plan for New York City in 1969 -- 70. For Hell 's Kitchen, the master plan called for two to three thousand hotel rooms, 25,000 apartments, 25,000,000 square feet (2,300,000 m²) of office space, a new super liner terminal, a subway along 48th Street, and a convention center to replace what the plan described as "blocks of antiquated and deteriorating structures of every sort. '' However, outrage at the massive residential displacement that this development project would have caused, and the failure of the City to complete any replacement housing, led to opposition to the first project -- a new convention center to replace the New York Coliseum. To prevent the convention center from sparking a development boom that would beget the rest of the master plan with its consequent displacement, the Clinton Planning Council and Daniel Gutman, their environmental planner, proposed that the convention center and all major development be located south of 42nd Street where public policy had already left tracts of vacant land. Nevertheless, in 1973 the Jacob K. Javits Convention Center was approved for a 44th Street site that would replace piers 84 and 86. But in exchange, and after the defeat of a bond issue that would have funded a 48th Street "people mover, '' the City first abandoned the rest of the 1969 -- 70 master plan and then gave the neighborhood a special zoning district to restrict further redevelopment. Since then, limited new development has filled in the many empty lots and rejuvenated existing buildings. 
Later, in 1978, when the city could not afford to construct the 44th Street convention center, the Mayor and Governor chose the rail yard site originally proposed by the local community. Major office and residential development south of 42nd Street indeed followed, albeit much later, after the City initiated the Hudson Yards Redevelopment Project and started construction on the 7 subway extension. The SCD was originally split into four areas. Special permits are required for all demolition and construction in the SCD, including demolition of "any sound housing in the District '' and any rehabilitation that increases the number of dwellings in a structure. In the original provisions, no building could be demolished unless it was unsound. New developments, conversions, or alterations that create new units or zero - bedroom units must contain at least 20 % two - bedroom apartments with a minimum room size of 168 square feet (16 m²). Alterations that reduce the percentage of two - bedroom units are not permitted unless the resulting building meets the 20 % two - bedroom requirement. Finally, building height in the Preservation Area can not exceed 66 feet (20 m) or seven stories, whichever is less. As the gentrification pace increased, there were numerous reports of problems between landlords and tenants. The most extreme example was the eight - story Windermere complex at the southwest corner of Ninth Avenue and 57th Street. Built in 1881, it is the second - oldest large apartment house in Manhattan. In 1980, the then - owner, Alan B. Weissman, tried to empty the building of its tenants. According to former tenants and court papers, rooms were ransacked, doors were ripped out, prostitutes were moved in, and tenants received death threats in the campaign to empty the building. All the major New York newspapers covered the trials that sent the Windermere 's managers to jail. Although the building 's landlord, Alan B. 
Weissman, was never linked to the harassment, he and his wife made top billing in the 1985 edition of The Village Voice annual list, "The Dirty Dozen: New York 's Worst Landlords. '' Most of the tenants eventually settled and moved out of the building. As of May 2006, seven tenants remained and court orders protecting the tenants and the building allowed it to remain in derelict condition even as the surrounding neighborhood was experiencing a dramatic burst of demolition and redevelopment. Finally, in September 2007, the fire department evacuated those remaining seven residents from the building, citing dangerous conditions, and padlocked the front door. In 2008 the New York Supreme Court ruled that the owners of the building, who include the TOA Construction Corporation of Japan, must repair it. By the 1980s the area south of 42nd Street was in decline. Both the state and the city hoped that the Jacob K. Javits Convention Center would renew the area. Hotels, restaurants, apartment buildings, and television studios were proposed. One proposal included apartments and hotels on a 30 - acre (12 ha) pier jutting out into the Hudson River, which also included a marina, ferry slip, stores, restaurants, and a performing arts center. At Ninth Avenue and 33rd Street, a 32 - story office tower would be built. Hotels, apartment buildings, and a Madison Square Garden would be built over the tracks west of Pennsylvania Station. North of the Javits Center, a "Television City '' would be developed by Larry Silverstein in conjunction with NBC. One impediment to development was that there was a lack of mass transit in the area, which is far from Penn Station, and none of the proposals for a link to Penn Station were pursued successfully (for example, the ill - fated West Side Transitway). No changes to the zoning policy happened until 1990, when the city rezoned a small segment of 11th Avenue near the Javits Center. 
In 1993, part of 9th Avenue between 35th and 41st Streets was also rezoned. However, neither of these rezonings was particularly significant, as most of the area was still zoned as a manufacturing district with low - rise apartment buildings. By the early 1990s, there was a recession, which scuttled plans for rezoning and severely reduced the amount of development in the area. After the recession was over, developers invested in areas like Times Square, eastern Hell 's Kitchen, and Chelsea, but mostly skipped the Far West Side. While most fire stations in Manhattan lost firefighters in the September 11, 2001, terrorist attacks, the station with the greatest loss of firefighters was Engine 54, Ladder 4, Battalion 9 at 48th Street and Eighth Avenue, which lost 15 firefighters. Given its proximity to Midtown, the station has specialized in skyscraper fires and rescues; in 2007, it was the second - busiest firehouse in New York City, with 9,685 runs between the two companies. Its patch reads "Pride of Midtown '' and "Never Missed a Performance ''. Memorials dot the station 's exterior walls and a granite memorial is in a park to its north. Ladder 21, the "Pride of Hell 's Kitchen '', located on 38th Street between Ninth and Tenth Avenues, and stationed with Engine 34, lost seven firefighters on September 11. In addition, on September 11, Engine 26 was temporarily stationed with Engine 34 / Ladder 21 and lost many firefighters themselves. Larry Silverstein made part of his fortune that eventually earned him the lease for the World Trade Center by building and managing buildings in the neighborhood. Hell 's Kitchen has become an increasingly upscale neighborhood of affluent young professionals as well as residents from the "old days '', with rents in the neighborhood having increased dramatically above the average in Manhattan. It has also acquired a large and diverse community as residents have moved north from Chelsea. 
Zoning has long restricted the extension of Midtown Manhattan 's skyscraper development into Hell 's Kitchen, at least north of 42nd Street. The David Childs - and Frank Williams - designed Worldwide Plaza established a beachhead when it was built in 1989 at the former Madison Square Garden site, a full city block between 49th and 50th Streets and between Eighth and Ninth Avenues that was exempt from special district zoning rules. This project led to a real - estate building boom on Eighth Avenue, including the Hearst Tower at 56th Street and Eighth Avenue. An indication of how fast real estate prices rose in the neighborhood was a 2004 transaction involving the Howard Johnson 's Motel at 52nd and Eighth Avenue. In June, Vikram Chatwal 's Hampshire Hotel Group bought the motel and adjoining SIR (Studio Instrument Rental) building for $9 million. In August, they sold the property to Elad Properties for about $43 million. Elad, which formerly owned the Plaza Hotel, is in the process of building The Link, a luxury 44 - story building. The most prominent real estate project in the area is the Hudson Yards Redevelopment Project, which will include over 45 million square feet of commercial and residential development, a renovation of the Jacob K. Javits Convention Center, and an extension of the IRT Flushing Line to the 34th Street -- Hudson Yards station at 34th Street and 11th Avenue. This new station for 7 and <7> trains opened on September 13, 2015. Hudson Yards includes a mixed - use real estate development by Related Companies and Oxford Properties over the MTA 's West Side Yard which is expected to consist of 16 skyscrapers containing more than 12,700,000 square feet (1,180,000 m²) of new office, residential, and retail space. 
This includes six million square feet (560,000 m²) of commercial office space, a 750,000 - square - foot (70,000 m²) retail center with two levels of restaurants, cafes, markets and bars, a hotel, a cultural space, about 5,000 residences, a 750 - seat school, and 14 acres (5.7 ha) of public open space. Development on the rail yard site officially broke ground on December 4, 2012, with the first tower, an 895 - foot (273 m) office building in the southeast corner of the site, expected to be complete in 2016. Based on data from the 2010 United States Census, the population of Hell 's Kitchen (Clinton) was 45,884, an increase of 5,289 (13.0 %) from the 40,595 counted in 2000. Covering an area of 422.45 acres (170.96 ha), the neighborhood had a population density of 108.6 inhabitants per acre (69,500 / sq mi; 26,800 / km²). The racial makeup of the neighborhood was 56.4 % (25,891) White, 6.3 % (2,869) African American, 0.2 % (70) Native American, 15.0 % (6,886) Asian, 0.1 % (31) Pacific Islander, 0.4 % (181) from other races, and 2.4 % (1,079) from two or more races. Hispanic or Latino of any race were 19.3 % (8,877) of the population. Hell 's Kitchen 's side streets are mostly lined with trees. The neighborhood does not have many parks or recreational areas, though smaller plots have been converted into green spaces. One such park is De Witt Clinton Park on Eleventh Avenue between 52nd and 54th Streets, across the West Side Highway from Clinton Cove Park. Another is Hell 's Kitchen Park, built in the 1970s on a former parking lot on 10th Avenue between 47th and 48th Streets. A newer park in Hell 's Kitchen is the Hudson Park and Boulevard, which is part of the Hudson Yards Redevelopment Project. The Clinton Community Garden, a neighborhood garden, is a result of the actors living in the area. 
Since they mostly work at night in the local theatres, they took time to create a garden in what was then a rubble - strewn lot on West 48th Street between Ninth and Tenth Avenues. Eventually it contributed to the area 's gentrification. Although the garden has a gate which requires a key, everyone who lives in Hell 's Kitchen can apply for a membership and get a copy of the key. Hell 's Kitchen 's gritty reputation had made its housing prices lower than elsewhere in Manhattan. Given the lower costs in the past and its proximity to Broadway theatres, the neighborhood is a haven for aspiring actors. Many famous actors and entertainers have resided there, including Burt Reynolds, Rip Torn, Bob Hope, Charlton Heston, James Dean, Madonna, Jerry Seinfeld, Larry David, Alicia Keys, John Michael Bolger, and Sylvester Stallone. This is due in large part to the Actors Studio on West 44th Street, at which Lee Strasberg taught and developed method acting. With the opening of the original Improv by Budd Friedman in 1963, the club became a hangout for singers to perform but quickly attracted comedians as well, turning it into the reigning comedy club of its time. Once located near West 44th Street and Ninth Avenue, it has since shuttered, replaced by a restaurant. Manhattan Plaza at 43rd Street between Ninth and Tenth Avenues was built in the 1970s to house artists. It consists of two 46 - story towers with 70 % of the apartments set aside for rent discounts for those who work in the arts. The Actors ' Temple and St. Malachy Roman Catholic Church with its Actors ' Chapel also testify to the long - time presence of show business people. 
The neighborhood is also home to a number of broadcast and music - recording studios, including the CBS Broadcast Center at 524 West 57th Street, where the CBS television network records many of its news and sports programs such as 60 Minutes and The NFL Today; the former Sony Music Studios at 460 West 54th Street, which closed in 2007; Manhattan Center Studios at 311 West 34th Street; and Right Track Recording 's Studio A509 orchestral recording facility at West 38th Street and Tenth Avenue. The syndicated Montel Williams Show is also taped at the Unitel Studios, 433 West 53rd Street, between Ninth and Tenth Avenues. In 2016, rock music singer and songwriter Sting recorded his album entitled 57th & 9th at Avatar Studios, a music studio located near the intersection of 57th Street and Ninth Avenue in Hell 's Kitchen. The Comedy Central satirical news program The Daily Show has been taped in Hell 's Kitchen since its debut. In 2005, it moved from its quarters at 54th Street and Tenth Avenue to a new studio in the neighborhood, at 733 Eleventh Avenue, between 51st and 52nd Streets. The 54th and 10th location was used for The Colbert Report throughout its entire run from 2005 until 2014. Until its cancellation, the studio was used for The Nightly Show with Larry Wilmore, following Stephen Colbert 's departure from Comedy Central. Next door at 511 W. 54th St. is Ars Nova theater, home to emerging artists Joe Iconis and breakout star Jesse Eisenberg, among others. The headquarters of Troma studios was located in Hell 's Kitchen before its move to Long Island City in Queens. The Baryshnikov Arts Center opened at 37 Arts on 37th Street in 2005, and the Orchestra of St. Luke 's opened the DiMenna Center for Classical Music in the same building in 2011. The Alvin Ailey American Dance Theater opened at 55th Street and Ninth Avenue in 2006. The Metropolitan Community Church of New York, geared toward an LGBTQ membership, is located in Hell 's Kitchen. 
Ninth Avenue is noted for its many ethnic restaurants. The Ninth Avenue Association 's International Food Festival stretches through the Kitchen from 42nd to 57th Streets every May, usually on the third weekend of the month. It has been going on since 1974 and is one of the oldest street fairs in the city. There are Caribbean, Chinese, French, German, Greek, Italian, Irish, Mexican, and Thai restaurants as well as multiple Afghan, Argentine, Ethiopian, Peruvian, Turkish, Indian, Pakistani, and Vietnamese restaurants. Restaurant Row, so called because of the abundance of restaurants, is located on West 46th Street between Eighth and Ninth Avenues. Notable establishments on Ninth Avenue include Mickey Spillane 's, part - owned by the mobster 's son, who also owns Mr Biggs on Tenth Avenue / 43rd Street. There are more restaurants and food carts and trucks on Tenth Avenue between 43rd and 47th Streets, including Hallo Berlin. The Lincoln Tunnel connects New York City to New Jersey. Parking lots dot the neighborhood, but are dwindling in quantity as developments are being built. Eleventh Avenue is lined with car dealerships, many of which claim to have the highest volume among all dealerships for their brands in the country. The massive Port Authority Bus Terminal is between 40th and 42nd Streets and Eighth and Ninth Avenues. Several New York City Bus routes (such as the M11, M12, M31, M34 SBS, M42, M50) also service the area. Many of the horse - drawn carriages from Central Park stay in stables just off the West Side Highway. It is not uncommon to hear the sound of horses in the neighborhood. There have been calls for banning horse - drawn carriages, especially from Mayor of New York City Bill de Blasio following a handful of collisions between cars and carriages. 
The carriage horses live in historic stables originally built in the 19th century, but today boast the latest in barn design, such as fans, misting systems, box stalls, and state - of - the - art sprinkler systems. As horses always have in densely populated urban areas, the carriage horses live upstairs in their stables while the carriages are parked below on the ground floor. Cruise ships frequently dock at the New York Passenger Ship Terminal at Piers 88, 90, and 92, between 48th and 52nd Streets. Cruise ship horns are a common sound in the neighborhood. Several French restaurants opened on West 51st Street to accommodate traffic from the French Line. The piers, originally built in 1930, are now considered small, and some cruise traffic uses other locations. Other ship operations in the neighborhood include Circle Line Sightseeing Cruises at West 42nd and the NY Waterway ferry service. Hell 's Kitchen begins northwest of Penn Station. Amtrak trains going into the station run along a sunken corridor west of Tenth Avenue, which feeds into the Freedom Tunnel; it is used by approximately thirty trains daily. During the post-9 / 11 building boom, apartment houses have been built over sections of the train tracks. Hell 's Kitchen is bounded on the east by the New York City Subway 's IND Eighth Avenue Line (A, C, and E trains). The MTA built the 7 Subway Extension (7 and <7> trains) for the aforementioned Hudson Yards development. The extension to 34th Street -- Hudson Yards opened on September 13, 2015, making the IRT Flushing Line the westernmost New York City Subway line within Midtown. The Success Academy Charter Schools group opened an elementary school, Success Academy Hell 's Kitchen, in the High School of Graphic Communication Arts building in 2013. Notable current and former residents of Hell 's Kitchen include:
who plays the oldest boy in nanny mcphee 2
Nanny McPhee and the Big Bang - wikipedia Nanny McPhee and the Big Bang (released in the United States and Canada as Nanny McPhee Returns) is a 2010 fantasy comedy family film directed by Susanna White, produced by Tim Bevan, Eric Fellner and Lindsay Doran with music by James Newton Howard and co-produced by StudioCanal, Relativity Media, Working Title Films and Three Strange Angels. It is a sequel to the 2005 film Nanny McPhee. It was adapted by Emma Thompson from Christianna Brand 's Nurse Matilda books. Thompson reprises her role as Nanny McPhee, and the film also stars Maggie Gyllenhaal, Ralph Fiennes, Rhys Ifans, Ewan McGregor, Asa Butterfield and Dame Maggie Smith. The film was theatrically released on August 20, 2010 by Universal Pictures. The film received positive reviews from critics and it earned $93,251,121 on a $35 million budget. It also received a Young Artist Award nomination for Best Performance in a Feature Film. The film was released on DVD and Blu - ray in the UK on 19 June 2010. On a farm during World War II, while her husband is away at war, Isabel Green is driven to her wits ' end by her hectic life. Between trying to keep the family farm up and running and her job in the village shop, run by the slightly mad Mrs. Docherty, she also has three boisterous children to look after: Norman, Megsie, and Vincent. When her children 's two wealthy cousins, Cyril and Celia, also come to live with them, Isabel requires childcare help. When Nanny McPhee arrives, the children at first do not listen and carry on fighting, which she soon puts a stop to. Meanwhile, Isabel 's brother - in - law, Phil, has gambled away his half of the farm, and is being chased by two hired - assassin women under casino owner Mrs. Biggles. He desperately attempts to make Isabel sell her half of the farm, using mean and spiteful schemes to leave her no choice. Isabel takes the children on a picnic during which an ARP Warden, Mr. 
Docherty, warns them about bombs and how he imagines a pilot might accidentally release his bomb. At the end of the picnic, Uncle Phil delivers a telegram saying Isabel 's husband has been killed in action. Isabel and everyone else believe the telegram, but Norman says that he can "feel it in his bones '' that his father is not dead. He tells this to Cyril, who at first says it is just because he is upset, but then agrees Norman might be right, so the two boys ask Nanny McPhee to take them to the War Office in London, where Cyril and Celia 's father works. There, Nanny McPhee and the boys ask Cyril 's father Lord Gray, who is very important in the War Office, what has happened to Mr. Green. At first Lord Gray sneers at Norman 's disbelief at his father 's death, but after Cyril blurts out that he knows that his parents are getting a divorce, Lord Gray checks what has happened. While he is gone, Cyril tells Norman that he and Celia have been sent away because their parents will be splitting up, and Norman asks where Cyril and Celia will live. When Cyril replies "with Mum, I suppose, not that it makes much difference, she only ever really sees us when she wants to show us off '', Norman tells Cyril that he and Celia are welcome to live on the farm with the Greens. Lord Gray returns and tells Norman that his father is not dead, but missing in action, and that there is no record of a telegram being sent to his mother. After the boys leave, Norman deduces that Uncle Phil forged the telegram. While the boys were at the War Office, Megsie, Celia and Vincent were trying to stop Isabel from signing the papers and selling the farm. Just as Isabel is about to sign the papers, a German pilot accidentally drops a huge bomb; it shakes everything but does not explode and is left sticking out of the barley field. When Nanny McPhee returns with Norman and Cyril, Norman accuses Phil of forgery, which he admits to, and Isabel handcuffs him to the stove. 
The children go out to watch Mr. Docherty dismantle the bomb, but he falls from the ladder. Megsie takes over, and succeeds with the help of the other children and Nanny McPhee 's raven, Mr. Edelweiss. After Nanny McPhee helps to harvest the barley, with a little magic, it is revealed that old Mrs. Docherty is in fact baby Agatha from the first film and that Nanny McPhee has been staying with her. As Nanny McPhee walks away from the now happy family, the children and Isabel chase after her, only to see Mr. Green, in army uniform and with an injured arm, making his way to them. He runs to his family and they all hug. The village in the film is Hambleden in Buckinghamshire; the farm set and scenes were filmed in Hascombe, near Godalming in Surrey; the War Office scenes, both interior and exterior, were filmed at the University of London 's Senate House; and the motorbike scenes on various London roads. Dunsfold Aerodrome, the location of Top Gear, names Nanny McPhee and the Big Bang as having been filmed there. The film is one of a wave of adaptations of the book trilogy by children 's author Christianna Brand, Nurse Matilda. The books are not directly connected to the film, but most scenes in the film are adapted from the books. Emma Thompson started to write the script based on Brand 's books in the spring of 2007. Nanny McPhee and the Big Bang was theatrically released on August 20, 2010 by Universal Pictures (April 2, 2010 in the UK). The film was released on DVD and Blu - ray in the UK on 19 June 2010. Nanny McPhee Returns, as the film was renamed for the North American market, was released on DVD and Blu - ray on 14 December 2010. Critical response for the film was positive. Review aggregation website Rotten Tomatoes gave the film a rating of 76 % based on 115 reviews, with the site 's consensus stating: "Emma Thompson 's second labor of love with the Nanny McPhee character actually improves on the first, delivering charming family fare with an excellent cast. 
'' News of the World gave it 4 / 5 stars, stating the film was "smart, witty and beautifully crafted -- exactly what you want from a family film '' and, excited about the prospect of another film, said "Roll on Nanny McThree ''. The Independent gave a very favourable review, stating "the film is an ingenious entertainment machine fuelled by a profound understanding of what children enjoy, whether it 's cowpats, talking welly boots or piglets doing synchronised swimming. Thompson has written a properly funny script, which is performed superbly by Ifans, Maggie Smith, Bill Bailey, Ralph Fiennes and some estimable child actors. '' Eros Vlahos was nominated for Best Leading Young Actor at the Young Artist Awards 2011 against Noah Ringer for The Last Airbender, Zachary Gordon for Diary of a Wimpy Kid and Jaden Smith (winner) for The Karate Kid. In the UK, the film opened at number one with £ 2,586,760, outperforming fellow new release The Blind Side, and grossed a total of £ 16,211,057. In the United States and Canada, it debuted in seventh position with $8.4 million; its gross there eventually exceeded $27 million. In the course of the story, Nanny McPhee teaches the children five lessons: (1) to stop fighting; (2) to share nicely; (3) to help each other; (4) to be brave; and (5) to have faith.
tamil nadu state agricultural marketing board chennai address
Tamil Nadu State agricultural marketing Board - wikipedia The Tamil Nadu State Agricultural Marketing Board (TNSAMB) was constituted by an executive order of the State Government in G.O. Ms. No. 2852, Agriculture Department, dated 24.10.1970, and has been functioning since 24.10.1970, with the objective of regulating the activities of Market Committees and acting as an advisory body. The following are the Functions And Powers of the Board:
who was the mastermind behind rebuilding the acropolis following the persian war of 480 bce
Acropolis of Athens - wikipedia The Acropolis of Athens is an ancient citadel located on a rocky outcrop above the city of Athens and contains the remains of several ancient buildings of great architectural and historic significance, the most famous being the Parthenon. The word acropolis is from the Greek words ἄκρον (akron, "highest point, extremity '') and πόλις (polis, "city ''). Although the term acropolis is generic and there are many other acropoleis in Greece, the significance of the Acropolis of Athens is such that it is commonly known as "The Acropolis '' without qualification. During ancient times it was also known, more properly, as Cecropia, after the legendary serpent - man, Cecrops, the first Athenian king. While there is evidence that the hill was inhabited as far back as the fourth millennium BC, it was Pericles (c. 495 -- 429 BC) in the fifth century BC who coordinated the construction of the site 's most important present remains including the Parthenon, the Propylaia, the Erechtheion and the Temple of Athena Nike. The Parthenon and the other buildings were seriously damaged during the 1687 siege by the Venetians during the Morean War when gunpowder being stored in the Parthenon was hit by a cannonball and exploded. The Acropolis is located on a flattish - topped rock that rises 150 m (490 ft) above sea level in the city of Athens, with a surface area of about 3 hectares (7.4 acres). While the earliest artifacts date to the Middle Neolithic era, there have been documented habitations in Attica from the Early Neolithic period (6th millennium BC). There is little doubt that a Mycenaean megaron palace stood upon the hill during the late Bronze Age. Nothing of this megaron survives except, probably, a single limestone column - base and pieces of several sandstone steps. Soon after the palace was constructed, a massive Cyclopean circuit wall was built, 760 meters long, up to 10 meters high, and ranging from 3.5 to 6 meters thick. 
This wall would serve as the main defense for the acropolis until the 5th century. The wall consisted of two parapets built with large stone blocks and cemented with an earth mortar called emplekton (Greek: ἔμπλεκτον). The wall uses typical Mycenaean conventions in that it followed the natural contour of the terrain and its gate, which was towards the south, was arranged obliquely, with a parapet and tower overhanging the incomers ' right - hand side, thus facilitating defense. There were two lesser approaches up the hill on its north side, consisting of steep, narrow flights of steps cut in the rock. Homer is assumed to refer to this fortification when he mentions the "strong - built House of Erechtheus '' (Odyssey 7.81). At some time before the 13th century BC, an earthquake caused a fissure near the northeastern edge of the Acropolis. This fissure extended some 35 meters to a bed of soft marl in which a well was dug. An elaborate set of stairs was built and the well served as an invaluable, protected source of drinking water during times of siege for some portion of the Mycenaean period. There is no conclusive evidence for the existence of a Mycenaean palace on top of the Athenian Acropolis. However, if there was such a palace, it seems to have been supplanted by later building activity. Not much is known about the architectural appearance of the Acropolis until the Archaic era. During the 7th and the 6th centuries BC, the site was controlled by Kylon during the failed Kylonian revolt, and twice by Peisistratos; all attempts directed at seizing political power by coups d'état. Apart from the Hekatompedon mentioned later, Peisistratos also built an entry gate or Propylaea. Nevertheless, it seems that a nine-gate wall, the Enneapylon, had been built around the biggest water spring, the Clepsydra, at the northwestern foot. A temple to Athena Polias, the tutelary deity of the city, was erected between 570 -- 550 BC. 
This Doric limestone building, from which many relics survive, is referred to as the Hekatompedon (Greek for "hundred -- footed ''), Ur - Parthenon (German for "original Parthenon '' or "primitive Parthenon ''), H -- Architecture or Bluebeard temple, after the pedimental three - bodied man - serpent sculpture, whose beards were painted dark blue. Whether this temple replaced an older one, or just a sacred precinct or altar, is not known. Probably, the Hekatompedon was built where the Parthenon now stands. Between 529 -- 520 BC yet another temple was built by the Peisistratids, the Old Temple of Athena, usually referred to as the Arkhaios Neōs (ἀρχαῖος νεώς, "ancient temple ''). This temple of Athena Polias was built upon the Dörpfeld foundations, between the Erechtheion and the still - standing Parthenon. Arkhaios Neōs was destroyed by the Persian invasion during 480 BC; however, the temple was probably reconstructed during 454 BC, since the treasury of the Delian League was transferred into its opisthodomos. The temple may have been burnt down during 406 / 405 BC as Xenophon mentions that the old temple of Athena was set afire. Pausanias does not mention it in his 2nd century AD Description of Greece. Around 500 BC the Hekatompedon was dismantled to make way for a new, grander building, the "Older Parthenon '' (often referred to as the Pre-Parthenon, "early Parthenon ''). For this reason, Athenians decided to stop the construction of the Olympieion temple which was associated with the tyrant Peisistratos and his sons and, instead, used the Piraeus limestone destined for the Olympieion to build the Older Parthenon. In order to accommodate the new temple, the south part of the summit was cleared, made level by adding some 8,000 two - ton blocks of limestone, a foundation 11 m (36 ft) deep at some points, and the rest was filled with soil kept in place by the retaining wall. 
However, after the victorious Battle of Marathon in 490 BC, the plan was revised and marble was used instead. The limestone phase of the building is referred to as Pre-Parthenon I and the marble phase as Pre-Parthenon II. In 485 BC, construction stalled to save resources as Xerxes became king of Persia and war seemed imminent. The Older Parthenon was still under construction when the Persians indeed invaded and sacked the city in 480 BC. The building was burned and looted, along with the Ancient Temple and practically everything else on the rock. After the Persian crisis had subsided, the Athenians incorporated many architectural parts of the unfinished temple (unfluted column drums, triglyphs, metopes, etc.) into the newly built northern curtain wall of the Acropolis, where they served as a prominent "war memorial '' and can still be seen today. The devastated site was cleared of debris. Statuary, cult objects, religious offerings and unsalvageable architectural members were buried ceremoniously in several deeply dug pits on the hill, serving conveniently as a fill for the artificial plateau created around the classic Parthenon. This "Persian debris '' is the richest archaeological deposit excavated on the Acropolis. After winning at Eurymedon during 468 BC, Cimon and Themistocles ordered the reconstruction of the southern and northern walls of the Acropolis. Most of the major temples, including the Parthenon, were rebuilt by order of Pericles during the so - called Golden Age of Athens (460 -- 430 BC). Phidias, an Athenian sculptor, and Ictinus and Callicrates, two famous architects, were responsible for the reconstruction. During 437 BC, Mnesicles started building the Propylaea, a monumental gate at the western end of the Acropolis with Doric columns of Pentelic marble, built partly upon the old propylaea of Peisistratos. These colonnades were almost finished during 432 BC and had two wings, the northern one decorated with paintings by Polygnotus. 
About the same time, south of the Propylaea, building started on the small Ionic Temple of Athena Nike in Pentelic marble with tetrastyle porches, preserving the essentials of Greek temple design. After an interruption caused by the Peloponnesian War, the temple was finished during the time of Nicias ' peace, between 421 BC and 409 BC. Construction of the elegant temple of Erechtheion in Pentelic marble (421 -- 406 BC) was in accordance with a complex plan which took account of the extremely uneven ground and the need to circumvent several shrines in the area. The entrance, facing east, is lined with six Ionic columns. Unusually, the temple has two porches, one on the northwest corner borne by Ionic columns, the other, to the southwest, supported by huge female figures or Caryatids. The eastern part of the temple was dedicated to Athena Polias, while the western part, serving the cult of the archaic king Poseidon - Erechtheus, housed the altars of Hephaestus and Voutos, brother of Erechtheus. Little is known about the original plan of the interior which was destroyed by fire during the first century BC and has been rebuilt several times. During the same period, a combination of sacred precincts including the temples of Athena Polias, Poseidon, Erechtheus, Cecrops, Herse, Pandrosos and Aglauros, with its Kore Porch (Porch of the Maidens) or Caryatids ' balcony was begun. Between the temple of Athena Nike and the Parthenon, there was the Sanctuary of Artemis Brauronia (or the Brauroneion), the goddess represented as a bear and worshipped in the deme of Brauron. According to Pausanias, a wooden statue or xoanon of the goddess and a statue of Artemis made by Praxiteles during the 4th century BC were both in the sanctuary. Behind the Propylaea, Phidias ' gigantic bronze statue of Athena Promachos ("Athena who fights in the front line ''), built between 450 BC and 448 BC, dominated. 
The base was 1.50 m (4 ft 11 in) high, while the total height of the statue was 9 m (30 ft). The goddess held a lance the gilt tip of which could be seen as a reflection by crews on ships rounding Cape Sounion, and a giant shield on the left side, decorated by Mys with images of the fight between the Centaurs and the Lapiths. Other monuments that have left almost nothing visible to the present day are the Chalkotheke, the Pandroseion, Pandion 's sanctuary, Athena 's altar, Zeus Polieus 's sanctuary and, from Roman times, the circular temple of Augustus and Rome. During the Hellenistic and Roman periods, many of the existing buildings in the area of the Acropolis were repaired, due to damage from age, and occasionally, war. Monuments to foreign kings were erected, notably those of the Attalid kings of Pergamon Attalos II (in front of the NW corner of the Parthenon), and Eumenes II, in front of the Propylaia. These were rededicated during the early Roman Empire to Augustus or Claudius (uncertain), and Agrippa, respectively. Eumenes was also responsible for constructing a stoa on the South slope, not unlike that of Attalos in the Agora below. During the Julio - Claudian period, the Temple of Rome and Augustus, a small, round edifice, about 23 meters from the Parthenon, was to be the last significant ancient construction on the summit of the rock. Around the same time, on the North slope, in a cave next to the one dedicated to Pan since the classical period, a sanctuary was founded where the archons dedicated to Apollo on assuming office. During 161 AD, on the South slope, the Roman Herodes Atticus built his grand amphitheatre or Odeon. It was destroyed by the invading Herulians a century later but was reconstructed during the 1950s. 
During the 3rd century, under threat from a Herulian invasion, repairs were made to the Acropolis walls, and the "Beulé Gate '' was constructed to restrict entrance in front of the Propylaia, thus returning the Acropolis to use as a fortress. During the Byzantine period, the Parthenon was used as a church, dedicated to the Virgin Mary. During the Latin Duchy of Athens, the Acropolis functioned as the city 's administrative center, with the Parthenon as its cathedral, and the Propylaia as part of the Ducal Palace. A large tower was added, the "Frankopyrgos '', demolished during the 19th century. After the Ottoman conquest of Greece, the Parthenon was used as the garrison headquarters of the Turkish army, and the Erechtheum was turned into the Governor 's private Harem. The buildings of the Acropolis suffered significant damage during the 1687 siege by the Venetians in the Morean War. The Parthenon, which was being used as a gunpowder magazine, was hit by artillery shot and damaged severely. During subsequent years, the Acropolis was a site of bustling human activity with many Byzantine, Frankish, and Ottoman structures. The dominant feature during the Ottoman period was a mosque inside the Parthenon, complete with a minaret. After the Greek War of Independence, most features that dated from the Byzantine, Frankish and Ottoman periods were cleared from the site in an attempt to restore the monument to its original form, "cleansed '' of all later additions. The entrance to the Acropolis was a monumental gateway termed the Propylaea. To the south of the entrance is the tiny Temple of Athena Nike. At the centre of the Acropolis is the Parthenon or Temple of Athena Parthenos (Athena the Virgin). East of the entrance and north of the Parthenon is the temple known as the Erechtheum. South of the platform that forms the top of the Acropolis there are also the remains of the ancient, though often remodelled, Theatre of Dionysus. 
A few hundred metres away, there is the now partially reconstructed Odeon of Herodes Atticus. All the valuable ancient artifacts are situated in the Acropolis Museum, which resides on the southern slope of the same rock, 280 metres from the Parthenon. The Acropolis Restoration Project began in 1975 but as of 2017 had almost ground to a halt. The goal of the restoration was to reverse the decay of centuries of attrition, pollution, destruction stemming from military use, and misguided past restorations. The project included collection and identification of all stone fragments, even small ones, from the Acropolis and its slopes, and the attempt was made to restore as much as possible using reassembled original material (anastylosis), with new marble from Mount Penteli used sparingly. All restoration was made using titanium dowels and is designed to be completely reversible, in case future experts decide to change things. A combination of cutting - edge modern technology and extensive research and reinvention of ancient techniques was used. The Parthenon colonnades, largely destroyed by Venetian bombardment during the 17th century, were restored, with many wrongly assembled columns now properly placed. The roof and floor of the Propylaea were partly restored, with sections of the roof made of new marble and decorated with blue and gold inserts, as in the original. Restoration of the Temple of Athena Nike was completed in 2010. A total of 2,675 tons of architectural members were restored, with 686 stones reassembled from fragments of the originals, 905 patched with new marble, and 186 parts made entirely of new marble. A total of 530 cubic meters of new Pentelic marble were used. Every four years, the Athenians held a festival called the Panathenaea that rivaled the Olympic Games in popularity.
During the festival, a procession (believed to be depicted on the Parthenon frieze) traveled through the city via the Panathenaic Way and culminated on the Acropolis. There, a new robe of woven wool (peplos) was placed on either the statue of Athena Polias in the Erechtheum (during a regular Panathenaea) or on the statue of Athena Parthenos in the Parthenon (during the Great Panathenaea, held every four years). Within the later tradition of Western Civilization and classical revival the Acropolis, from at least the mid-18th century on, has often been invoked as a key symbol of the Greek legacy and of the glories of Classical Greece.
who say let's get ready to rumble
Michael Buffer - wikipedia Michael Buffer (born November 2, 1944) is an American ring announcer for boxing and professional wrestling matches. He is known for his trademarked catchphrase, "Let 's get ready to rumble! '', and for pioneering a distinct announcing style in which he rolls certain letters and adds other inflections to a fighter 's name. His half - brother is UFC announcer Bruce Buffer. Buffer was born in Philadelphia, Pennsylvania, during World War II, to an enlisted man in the United States Navy and his wife. His parents divorced when he was 11 months old, and Buffer was then raised by foster parents, a school bus driver and a housewife, in the Philadelphia suburb of Roslyn. He enlisted in the United States Army during the Vietnam War at age 20 and served until age 23. He held various jobs, including car salesman, then began a modeling career at age 32 before becoming a ring announcer at age 38. In 1982, Buffer began his career as a ring announcer. By 1983, he was announcing all boxing matches promoted by Bob Arum 's Top Rank on ESPN, which gave him a national identity at a time when ring announcers were strictly locally hired talent. By 1984, Buffer had developed the catchphrase "Let 's get ready to rumble '' in his announcing, which gained enormous popularity. He began the process of obtaining a federal trademark for the phrase in the 1980s and acquired it in 1992. Buffer has since earned in excess of $400 million from licensing the trademark. By the late 1980s, Buffer was the exclusive ring announcer for all bouts in Donald Trump - owned casinos. Trump said of Buffer, "He 's great, he 's the choice, he has a unique ability... I told my people, ' We got to have him. ' '' Buffer 's work was also admired by many of the boxing greats. Sugar Ray Leonard once said, "When (Buffer) introduces a fighter, it makes him want to fight. '' Buffer 's fame has reunited him with long - lost family members.
In 1989, Buffer was contacted by his birth - father, who, after seeing him on television, introduced him to his half - brothers. In the mid-1990s, Buffer brought on one of his half - brothers, Bruce Buffer, as his agent / manager. This grew into a business partnership to increase licensing productivity of the trademark. Michael Buffer is currently the announcer for all HBO and RTL (Germany) boxing matches, along with Versus matches promoted by Top Rank. He has also been the ring announcer for a number of boxing events shown on Sky Sports in the United Kingdom. Buffer was formerly the exclusive ring announcer for World Championship Wrestling (WCW) main events featuring Hulk Hogan or other top WCW talent until 2001, when the organization folded. WCW 's former parent company Time Warner also owned the pay - per - view subscription division HBO, which broadcast many matches from promoter Top Rank, for which Buffer is the lead ring announcer. The exclusivity of his contract with WCW prevented Buffer from announcing for other wrestling - type organizations, forcing him to stop announcing for the UFC (his only UFC cards were UFC 6 and UFC 7). However, when WCW ceased to exist and Time Warner had no more affiliation with professional wrestling, Buffer was free to announce in other wrestling promotions. WWF wrestler Triple H created the phrase "Let 's get ready to suck it! '' as part of his D - Generation X act to mock Buffer while he was on WWF Raw 's rival show WCW Monday Nitro during the Monday Night Wars. On the August 18, 2007 edition of Saturday Night 's Main Event, for the first time in more than six years, Buffer returned to pro-wrestling ring announcing duties at Madison Square Garden, for a boxing match between pro boxer Evander Holyfield (who was substituting for Montel Vontavious Porter) and pro wrestler Matt Hardy. Buffer appears in the Royal Rumble 2008 commercial, in which he begins to say "Let 's get ready to rumble!
'' only to be superkicked by Shawn Michaels, causing him to fall over. As well as appearing in the commercial for the event, he was the guest ring announcer during the Royal Rumble match itself. During his career, Buffer has announced at the University of Kentucky Athletics 2016 Big Blue Madness, the World Series, Stanley Cup Finals, NBA Finals, the Volunteer 500 at Bristol Motor Speedway, and NFL playoff games. He was a guest announcer at the 1999 Indianapolis 500. Buffer, like his brother Bruce, announced early UFC fights, starting at UFC 6 in 1995. He has appeared on various talk shows hosted by Jay Leno, David Letterman, Arsenio Hall, Conan O'Brien and Jimmy Kimmel. He has also appeared on Saturday Night Live, In Living Color, Mad TV and The Howard Stern Show. He has been animated in The Simpsons, South Park, and Celebrity Deathmatch; he appears as a featured artist on "Let 's Get Ready to Rumble '' and "Go for It All! '' by the German eurodance group the K.O. 's; and his voice was sampled in the Ant & Dec song "Let 's Get Ready to Rhumble ''. He has played himself in various films including Ready to Rumble and Rocky Balboa, and in 2008 Buffer appeared as Walbridge, the main villain in the comedy You Do n't Mess with the Zohan. Buffer is currently the host of Versus ' boxing retro show Legends of the Ring, produced by Top Rank, Inc., for which he is ring announcer for most of their top matches. He appeared on NBC 's Deal or No Deal on December 10, 2007, and opened the finale of the seventh season of American Idol, a production of RTL. On July 19, 2008, he announced the Affliction: Banned mixed martial arts show. On November 10, 2008, Buffer started the heads - up action at the 2008 World Series of Poker final table with a modified version of his trademark statement, "Let 's get ready to shuffle up and deal ''.
Buffer also appears in the animated TV series Phineas and Ferb in the episode "Raging Bully '', as the voice of the announcer for the big thumb - wrestling match with Phineas and Buford. Buffer has been reproduced as an action figure in both Toy Biz 's WCW line and Jakks Pacific 's Rocky line. He recorded the introduction track for country artist Josh Turner 's 2012 album, Punching Bag. In 2011 he made an appearance on the 12th season of Dancing with the Stars to announce Sugar Ray Leonard 's week 3 dance. In 2013, Buffer appeared in Progressive Insurance commercials, promoting their program of combining different coverages into one policy, with a parody of his famous phrase - "Let 's get ready to bundle! '' Buffer has also served as ringside announcer for the syndicated television game show The Grudge Match, hosted by Steve Albert and Jesse Ventura. Buffer appeared in the extended version of the Muppets webisode "Food Fight '', where he is seen announcing the cooking competition between Gordon Ramsay and the Swedish Chef. Buffer appeared at the University of Kentucky men 's basketball team 's legendary "Big Blue Madness '' on October 14th, 2016. Instead of his traditional "Let 's get ready to rumble! '', Buffer announced the beginning of the event with "Let 's get ready to roundball! ''. He kept up this tradition on Saturday, January 28th, when he announced his new rendition again at the perennial Blue Blood rivalry game between the men 's basketball teams of the University of Kentucky and the University of Kansas. Buffer began using the phrase "Let 's get ready to rumble! '' in 1984. By 1992, he had acquired a federal trademark for the phrase.
Buffer uses his famous phrase in various licensing deals, including the platinum - selling album Jock Jams by Tommy Boy Records; the video games Ready 2 Rumble Boxing and Ready 2 Rumble Boxing: Round 2 for the PlayStation 2, Nintendo 64, Dreamcast and Game Boy Advance; Greatest Heavyweights of the Ring for the Sega Genesis; and numerous other products. In addition, he has used variations of the phrase in advertisements, including the popular commercial for Mega Millions in which he says "Let 's get ready to Win Big! '', the Kraft Cheese commercial in which he says "Let 's get ready to Crumble! '', and most recently for Progressive Insurance, in which he says "Let 's get ready to bundle! '' The phrase "Are you ready to rumble? '' is spoken in the 1957 television show Maverick, in the episode titled "Stage West '' (episode 6, season one), by the stagecoach driver to the passengers as he prepares to leave in the stagecoach. As of 2009, the catchphrase had generated $400 million in revenue from licensing the trademark. Buffer first wed at age 21; the marriage ended in divorce after seven years. He has two sons from his first marriage. More than 25 years passed before he remarried in 1999. He and his second wife divorced in 2003. On September 13, 2007, while making an appearance on The Tonight Show with Jay Leno, he proposed to his current (third) wife, Christine. Buffer currently resides in Southern California. His half - brother Bruce Buffer is the announcer for the leading mixed martial arts promotion, Ultimate Fighting Championship. Both Michael and Bruce are grandsons of the late boxer Johnny Buff. In 2008 Buffer was treated for throat cancer.
where was the deathly hallows part 1 filmed
Harry Potter and the Deathly Hallows -- Part 1 - wikipedia Harry Potter and the Deathly Hallows -- Part 1 is a 2010 British - American fantasy film directed by David Yates and distributed by Warner Bros. Pictures. It is the first of two cinematic parts based on the novel of the same name by J.K. Rowling and features an ensemble cast. The film, which is the seventh and penultimate installment in the Harry Potter film series, was written by Steve Kloves and produced by David Heyman, David Barron, and Rowling. The film stars Daniel Radcliffe as Harry Potter, with Rupert Grint and Emma Watson, respectively, reprising roles as Harry 's best friends Ron Weasley and Hermione Granger. It is the sequel to Harry Potter and the Half - Blood Prince and is followed by the concluding entry, Harry Potter and the Deathly Hallows -- Part 2. The story follows Harry Potter who has been tasked by Dumbledore with finding and destroying Lord Voldemort 's secret to immortality -- the Horcruxes. Filming began on 19 February 2009 and was completed on 12 June 2010. Part 1 was released in 2D cinemas and IMAX formats worldwide on 19 November 2010. In the film 's worldwide opening weekend, Part 1 grossed $330 million, the third - highest in the series, and the highest opening of 2010, as well as the eighth - highest of all time. With a worldwide gross of $960 million, Part 1 is the third highest - grossing film of 2010, behind Toy Story 3 and Alice in Wonderland, and the third - highest - grossing Harry Potter film in terms of worldwide totals, behind Deathly Hallows -- Part 2 and Philosopher 's Stone, and the 37th highest - grossing film of all - time. The film received two nominations at the 83rd Academy Awards: Best Visual Effects and Best Art Direction. Minister of Magic Rufus Scrimgeour addresses the wizarding media, and states that the Ministry will remain strong, even as Lord Voldemort gains strength.
The Death Eaters have made major gains after Dumbledore 's death, committing mass killings of Muggles and infiltrating the Ministry itself. Harry, Ron, and Hermione set out to complete the mission Dumbledore gave Harry by hunting down and destroying Voldemort 's Horcruxes. Meanwhile, Severus Snape informs Lord Voldemort and the Death Eaters of Harry 's impending departure from Privet Drive. Voldemort commandeers Lucius Malfoy 's wand, as Voldemort 's own wand can not be used to kill Harry because they share the same core. The Order of the Phoenix gather and escort Harry to safety, using Polyjuice Potion to create decoy Harrys for the trip. During their flight, they are ambushed by Death Eaters, who kill Mad - Eye Moody and Hedwig, injure George Weasley, and knock out Hagrid. After arriving at the Burrow, Harry has a vision of the wand - maker Ollivander being tortured by Voldemort. The next day Scrimgeour arrives at the Burrow with Albus Dumbledore 's will and distributes three items to Ron, Hermione, and Harry. Ron receives Dumbledore 's Deluminator, Hermione receives a copy of The Tales of Beedle the Bard, and Harry receives the first Golden Snitch that he ever caught in a Quidditch match. Scrimgeour reveals that Harry was also bequeathed the Sword of Godric Gryffindor. The minister tells Harry that the sword was not Dumbledore 's to bequeath, and in any case is missing. The Death Eaters kill Scrimgeour and replace him with Pius Thicknesse. The Ministry begins arresting and persecuting Muggle - born witches and wizards. Before the wedding of Bill Weasley and Fleur Delacour, Harry Potter and Ginny Weasley kiss passionately until Ginny 's brother interrupts them. At Bill and Fleur 's wedding, however, Death Eaters attack. Kingsley Shacklebolt 's Patronus charm forewarns the wedding party, and most escape. Harry, Hermione, and Ron disapparate to London, where they are attacked in a diner by Death Eaters. The trio are forced to seek refuge at 12 Grimmauld Place.
While there, they discover that the "R.A.B. '' from the fake Horcrux locket is Regulus Arcturus Black, the younger brother of Sirius Black. Kreacher, the Black family 's house - elf, tells them that Mundungus Fletcher broke in and stole many items from the house, including the real locket. Kreacher and Dobby apprehend Fletcher, who reveals that the locket is in the possession of Dolores Umbridge. Using Polyjuice Potion, the trio infiltrate the Ministry and find the locket around Umbridge 's neck. Harry stuns Umbridge and Hermione retrieves the locket. The trio escape from their pursuers by apparating into the wilderness, but Ron is injured and can not apparate again until he recovers. After several unsuccessful attempts to destroy the Horcrux, the trio take turns wearing it to dilute its power. Harry sees a vision of Voldemort interrogating and killing the wand - maker Gregorovitch, who claims that a teenage boy had once stolen the legendary Elder Wand from his shop. While Ron is wearing the locket, he is overcome by negative feelings and argues with Harry before leaving him and Hermione behind. Hermione deduces that the sword of Gryffindor can destroy Horcruxes and decides with Harry to go to Godric 's Hollow. They visit Harry 's parents ' graves and the house where they were killed. They encounter Bathilda Bagshot, who they believe may have the sword. Bathilda lets Harry and Hermione into her house, and is revealed to actually be Nagini, who has been controlling Bathilda 's reanimated corpse from within. Hermione and Harry escape into the Forest of Dean, but Hermione accidentally breaks Harry 's wand whilst fighting Nagini. Hermione is then able to identify the mysterious thief seen in Harry 's vision as Gellert Grindelwald. That evening Harry sees a Patronus in the form of a doe, which leads him to a frozen pond. Gryffindor 's sword lies beneath the pond 's ice, which Harry breaks and jumps in to retrieve it.
The locket around his neck attempts to strangle him, but Ron arrives just in time to rescue Harry. Harry uses parseltongue to open the locket, and Ron destroys the Horcrux with the sword. Hermione and Ron reconcile, and the trio decide to go and visit Xenophilius Lovegood to learn more about a symbol drawn in the book Dumbledore left Hermione. Lovegood explains to them that the symbol represents the Deathly Hallows, three magical objects that when combined make a wizard master of Death. Hermione reads the story of the Hallows from her book, and after some awkward conversation the trio try to leave but are stopped by Lovegood. He explains that Luna has been kidnapped before betraying them and summoning the Death Eaters. Harry, Ron, and Hermione disapparate as Lovegood 's house is destroyed. Arriving back in the wilderness, the trio sets up camp when Snatchers find and chase them down. Hermione strikes Harry with a curse to disguise his features as the Snatchers take them all to Malfoy Manor. Bellatrix Lestrange imprisons Harry and Ron in the cellar with Luna, Ollivander, and Griphook the goblin. Bellatrix then tortures Hermione for information on how they got the sword of Gryffindor, which Bellatrix claims was in her vault at Gringott 's. Harry requests help, communicating with a piece of broken mirror in his possession. Dobby apparates into the cellar to save them. Harry and Ron rush to save Hermione, and a battle ensues that sees Harry disarm Draco Malfoy. Dobby drops a chandelier above Bellatrix, forcing her to release Hermione. Bellatrix throws her knife at them as Dobby grabs everyone and disapparates. They arrive at Shell Cottage and find that Bellatrix 's knife has fatally wounded Dobby. Harry insists that they bury Dobby properly without any magic, which they all agree to do. In the final scene, Voldemort breaks into Dumbledore 's tomb and steals the Elder Wand. 
Part 1 was filmed back - to - back with Harry Potter and the Deathly Hallows -- Part 2 from 19 February 2009 to 12 June 2010. Director David Yates, who shot the film alongside director of photography Eduardo Serra, described Part 1 as "quite real ''; a "road movie '' that 's "almost like a vérité documentary ''. The film was originally set for a single theatrical release; the idea to split the book into two parts was suggested by executive producer Lionel Wigram due to what David Heyman called a "creative imperative ''. Heyman initially responded negatively to the idea, but Wigram asked, "No, David. How are we going to do it? '' After rereading the book and discussing it with screenwriter Steve Kloves, Heyman agreed with the division. The production filmed at Dartford Crossing for the dramatic chase in which Hagrid and Harry are ambushed by Death Eaters. Stuart Craig, set designer for all of the previous Harry Potter films, returned for the final two parts. "We made a very different kind of film, which was shot a great deal on location. We travelled quite far, we built sets, and they spend a lot of time in a forest, '' he explained. "We built forest sets and integrated them into the real forests, so there were challenges there, as you might imagine. '' Craig was ultimately nominated for an Academy Award for his work on Part 1. Of the wedding tent for Bill and Fleur 's wedding in Part 1, Craig commented: "rather than make it an extension of the house, which is rather eccentric, homemade, we decided to make it rather elegant... It 's lined with silk and beautiful, floating candelabra. So it 's a nice contrast with the house. '' For the Ministry of Magic set, he noted, "This is an underground world; this is a ministry, so we went to the real ministries, the Muggle ministries -- Whitehall, in London -- and decided that our magical ministry was kind of a parallel universe to these real ministries.
'' Craig also commented on his design of Malfoy Manor, saying that it is "a very strong architectural set. The exterior is based on an Elizabethan house here in this country called Hardwick Hall and it has massive windows, and these windows are kind of blinded out. The shutters are drawn so they are like blind windows and they have a real kind of presence, an ominous presence, so that gave us the basis for a good exterior. There 's an extraordinary magical roof that 's added and surrounded by forest which is n't there in reality, but again is one of the devices to make it more threatening and mysterious. '' The costumes for Part 1 were designed by Jany Temime, who has been the costume designer on Harry Potter productions since Harry Potter and the Prisoner of Azkaban (2004). Temime was involved in a controversy regarding her work on Fleur Delacour 's wedding dress. She was accused of copying the design from a similar dress from Alexander McQueen 's Fall 2008 collection. Temime spoke about the dress, saying that she "wanted it to be a witch wedding dress but not a Halloween dress. The dress is white but it needed to have something fantastic to it. So there is the phoenix (motif), the bird, which is a symbol of love in a way because there is rebirth, love never dies, it is born again. '' After working on every film since Prisoner of Azkaban, Double Negative was asked to provide visual effects for the final instalments of the story, in Harry Potter and the Deathly Hallows -- Parts 1 and 2. Working closely with the film 's VFX Supervisor, Tim Burke, the team was led by VFX Supervisor, David Vickery and VFX Producer Charlotte Loughlane. The main team also included 3D Supervisor, Rick Leary and 2D Supervisor, Sean Stranks. Double Negative 's work for Part 1 included the corroding Warner Brothers logo and extensive environment extensions of the Burrows and its surrounds. 
Additional environment work was completed on Xenophilius Lovegood 's home, extending it in 3D and culminating in the Death Eaters ' attack. Double Negative also advanced the Death Eaters ' smoke effects, with the introduction of the ' flayed man ' stage in between their smokey, fluid, flying state and their live - action presence upon landing. Other work included the Patronus charm that interrupts the wedding party to inform the guests that Voldemort has taken over the Ministry of Magic. The visual effects company Framestore produced most of the creature CGI, as in previous films, as well as the animated Tale of the Three Brothers sequence, which was directed and designed by Ben Hibon. Composer Nicholas Hooper, who scored Order of the Phoenix and Half - Blood Prince, did not return for Deathly Hallows. Instead, Alexandre Desplat was hired to compose the score for Harry Potter and the Deathly Hallows − Part 1. The film also featured the song "O Children '' by Nick Cave and the Bad Seeds. The first official picture from the first film was released on 1 December 2009, showing Harry, Ron and Hermione in a London street. A clip was officially released on 8 December 2009 with the release of Harry Potter and the Half - Blood Prince on Blu - ray and DVD. At the 2010 ShoWest convention, Alan F. Horn premiered unfinished footage from both films. The 2010 MTV Movie Awards premiered more footage from Deathly Hallows. Following this was the release of the official teaser poster, which shows the release date of both Part 1 and Part 2 and a destroyed Hogwarts castle. ABC Family broadcast interviews and additional scenes from both parts during their Harry Potter weekend, which began on 8 July 2010. A two - minute trailer for the film was released worldwide on 22 September 2010. On 29 September 2010, three character posters for Part 1 of Harry, Ron, and Hermione were released by Yahoo! Movies.
The following day, a Part 1 cinema poster was released featuring the trio on the run in a forest. The theatrical poster has the tagline "Nowhere is safe '', and another version with no credits has the tagline "The end begins ''. Various other character posters for Part 1 were released on 6 October 2010, featuring Harry, Ron, Hermione, Lord Voldemort, Bellatrix Lestrange, Severus Snape and Fenrir Greyback. On 12 October, four new character posters were released. The posters are set to the themes of "Trust no one '' and "The hunt begins ''. On 15 October 2010, tickets began selling on Fandango for the US release of Part 1, and on 19 October, a 50 - second clip featuring never - before - seen footage was aired at the 2010 Scream Awards. On 16 October, the second TV spot was released on Cartoon Network during a premiere of Scooby - Doo! Curse of the Lake Monster. On 25 October 2010, Yahoo! Movies released an exclusive featurette of the film. On 30 October 2010, Entertainment Weekly released two new featurettes titled "Horcruxes '' and "The Story '', featuring a large amount of never - before - seen footage. On the same day, the Warner Bros. Harry Potter website was updated to reveal twelve miniature clips from the film. On 3 November 2010, the Los Angeles Times released an extended clip of Harry leaving the Burrow to find the Horcruxes, titled "No One Else Is Going to Die for Me ''. On 4 November, a new clip was released from the Harry Potter Facebook page, titled "The Seven Potters ''. Two more clips were released over the next two days, including a scene depicting a café attack and another taking place in Malfoy Manor. On 26 August 2010, director David Yates, producers David Heyman and David Barron, and Warner Bros. president Alan F. Horn attended a test screening for Deathly Hallows -- Part 1 in Chicago. The unfinished film gained rave reviews from test screeners, some of whom labelled it "amazing and dark '' and "the most perfect Harry Potter film ''.
Others expressed that the film faithfully adapted the novel, which led to an inheritance of the "book 's own problems ''. Warner Bros. Pictures was originally going to release Part 1 of Deathly Hallows in 2D and 3D formats. On 8 October 2010, it was announced that plans for a 3D version of Part 1 had been scrapped. "Warner Bros. Pictures has made the decision to release Harry Potter and the Deathly Hallows -- Part 1 in 2D, in both conventional and IMAX cinemas (because) we will not have a completed 3D version of the film within our release date window. Despite everyone 's best efforts, we were unable to convert the film in its entirety and meet the highest standards of quality. '' Part 1 of Deathly Hallows was released on Blu - ray 3D as a Best Buy Exclusive. Part 2 was still released in 2D, 3D, and IMAX formats. The world premiere for Deathly Hallows -- Part 1 was held in Leicester Square in London on 11 November 2010, with fans from across the world turning up -- some of whom had camped for days in the square. This was followed by the Belgian premiere on 12 November and the US premiere in New York City on 15 November. Just 48 hours prior to the official North American launch of Part 1, the first 36 minutes of the film were leaked on the internet. Even before the leak, the film was already the fifth - biggest generator of advance ticket sales in history, after selling out 1,000 cinemas across the United States. Despite widely circulating rumours that the leaked footage was a marketing ploy to generate hype for the movie release date, no screener discs had been created by Warner Bros., and executives called it "a serious breach of copyright violation and theft of Warner Bros. property ''. In Australia, the film had its premiere on 13 November at Warner Bros. Movie World, located on the Gold Coast, Queensland. Three hundred people attended the viewing, which was the second official showing in the world, behind the UK premiere. 
The film premiered in Kuwait on 16 November. In Israel, Estonia, and New Zealand, the film was released on 18 November. Harry Potter and the Deathly Hallows -- Part 1 was released on a single and double disc DVD and 3 - disc Blu - ray combo pack on 11 April 2011 in the UK and on 15 April 2011 in the US. On 28 January 2011, it was announced by Emma Watson on the Harry Potter UK Facebook page that the page 's fans would get to vote for their preferred cover for the Part 1 Blu - ray, with the cover receiving the most votes becoming the cover for the disc. Voting started that same day. The DVD and Blu - ray include eight deleted scenes, with the Blu - ray Combo Pack containing an opening scene from Part 2 featuring Harry and Ollivander discussing the Deathly Hallows. Deathly Hallows -- Part 1 performed well in DVD sales, selling 7,237,437 DVD units and adding $86,932,256 to the gross revenue of the film, bringing the total to $1,043,331,967. Harry Potter and the Deathly Hallows -- Part 1 grossed $24 million in North America during its midnight showing, beating the record for the highest midnight gross of the series, previously held by Half - Blood Prince at $22.2 million. The film also had the third - highest midnight gross of all time, behind The Twilight Saga: Eclipse and The Twilight Saga: New Moon, which grossed $30 million and $26.3 million, respectively. The film broke the record for the highest midnight gross in IMAX, with $1.4 million in box office sales, surpassing Eclipse, which grossed $1 million. All of these records were later topped in 2011 by the film 's sequel, Harry Potter and the Deathly Hallows -- Part 2. In North America, the film grossed $61.7 million on its opening day, marking the sixth highest single day gross ever at the time.
It became the highest opening day for a Harry Potter film in the series, a record previously held by Half - Blood Prince with $58.2 million, until it was broken by Harry Potter and the Deathly Hallows -- Part 2 with $92.1 million. The film grossed a total of $125 million in its opening weekend, marking the largest opening for the franchise, previously held by Goblet of Fire and later topped by its sequel Harry Potter and the Deathly Hallows -- Part 2. It also was the second biggest November opening ever at the time, behind The Twilight Saga: New Moon 's $142.8 million, the ninth biggest weekend opening for a film of all time at the North American box office, and the second biggest opening weekend for a 2010 film in the United States and Canada behind Iron Man 2 's $128.1 million. The film stayed at the top of the box office for two weeks, grossing $75 million over the five - day Thanksgiving weekend, bringing its total to $219.1 million. In the United Kingdom, Ireland, and Malta, the film broke records for the highest Friday gross (£ 5.9 million), Saturday gross (£ 6.6 million), and Sunday gross (£ 5.7 million). Additionally, the film set the largest single day gross (£ 6.6 million) and the largest opening three - day gross (£ 18,319,721), a record previously held by Quantum of Solace, which grossed £ 15.4 million. As of 13 February 2011, Part 1 has grossed £ 52,404,464 ($86,020,929), becoming the second highest - grossing 2010 release in the country, behind Toy Story 3 (£ 73,405,113). Outside North America, the film grossed an estimated $205 million in its opening weekend, becoming the sixth highest of all time, the highest for a 2010 release, and the second highest for a Harry Potter movie, behind only Half - Blood Prince. Globally, the film grossed $330 million in its opening weekend, ranking seventh on the all - time chart. 
It was the highest-grossing 2010 film in Indonesia ($6,149,448), Singapore ($4,546,240), Thailand ($4,933,136), Belgium and Luxembourg ($8,944,329), France and the Maghreb region ($51,104,397), Germany ($61,430,098), the Netherlands ($13,790,585), Norway ($7,144,020), Sweden ($11,209,387), and Australia ($41,350,865). In total overseas earnings, it surpassed Philosopher's Stone ($657.2 million) to become the highest-grossing Harry Potter film overseas. On 7 April 2011, Part 1 ended its run with $295,983,305 in the United States and Canada, making it the fifth highest-grossing film of 2010 in these regions, and $664,300,000 from other countries around the world, for a worldwide total of $960,283,305, making it the third highest-grossing film of 2010 worldwide behind Toy Story 3 and Alice in Wonderland, as well as the 31st highest-grossing film of all time worldwide and the third highest-grossing Harry Potter film in the series, behind The Deathly Hallows -- Part 2 and The Philosopher's Stone. Due to the success of the sequel in Germany, Harry Potter and the Deathly Hallows -- Part 1 returned to No. 9 on the country's cinema charts with 28,000 viewers in July 2011. Review-aggregation website Rotten Tomatoes gives the film an approval rating of 78% based on 261 reviews, with an average score of 7.1/10. The site's consensus reads, "It can't help but feel like the prelude it is, but Deathly Hallows: Part I is a beautifully filmed, emotionally satisfying penultimate installment for the Harry Potter series." On Metacritic, which assigns a normalised rating to reviews, the film has a score of 65 out of 100 based on 41 critics, indicating "generally favourable reviews". On CinemaScore, audiences gave the film an average grade of "A" on an A+ to F scale.
The UK's Daily Telegraph also gave the film a positive review, remarking, "For the most part the action romps along, spurred by some impressive special effects," adding, "It's just slightly disappointing that, with the momentum having been established so effectively, we now have to wait until next year to enjoy the rest of the ride." Roger Ebert awarded the first part three out of four stars, praising the cast and calling it "a handsome and sometimes harrowing film... completely unintelligible for anyone coming to the series for the first time". Scott Bowles of USA Today called it "Menacing and meditative, Hallows is arguably the best instalment of the planned eight-film franchise, though audiences who haven't kept up with previous chapters will be hopelessly lost", while Lisa Schwarzbaum of Entertainment Weekly likewise praised the film as "the most cinematically rewarding chapter yet." In a review for the Orlando Sentinel, Roger Moore wrote of Part 1: "Alternately funny and touching, it's the best film in the series, an Empire Strikes Back for these wizards and their wizarding world. And those effects? They're so special you don't notice them." Ramin Setoodeh of Newsweek gave a negative review, writing, "They've taken one of the most enchanting series in contemporary fiction and sucked out all the magic... while Rowling's stories are endlessly inventive, Potter onscreen just gives you a headache." Lou Lumenick of the New York Post found the film to be "Beautifully shot but a soulless cash machine... (that) delivers no dramatic payoff, no resolution and not much fun." Harry Potter and the Deathly Hallows -- Part 1 was nominated for Best Art Direction and Best Visual Effects at the 83rd Academy Awards. It is the second film in the Harry Potter film series to be nominated for a Visual Effects Oscar, the first being Harry Potter and the Prisoner of Azkaban.
The film was long-listed for eight different categories, including Best Cinematography, Production Design, and Original Score, at the 64th BAFTA Awards, and was ultimately nominated for Best Special Visual Effects and Make-up.
Janus - Wikipedia In ancient Roman religion and myth, Janus (/ˈdʒeɪnəs/; Latin: IANVS (Iānus), pronounced [ˈjaː.nus]) is the god of beginnings, gates, transitions, time, duality, doorways, passages, and endings. He is usually depicted as having two faces, since he looks to the future and to the past. It is conventionally thought that the month of January is named for Janus (Ianuarius), but according to ancient Roman farmers' almanacs Juno was the tutelary deity of the month. Janus presided over the beginning and ending of conflict, and hence war and peace. The gates of a building in Rome named after him, not a temple as it is often called, but an open enclosure with gates at each end, were opened in time of war and closed to mark the arrival of peace (which did not happen very often). As a god of transitions, he had functions pertaining to birth and to journeys and exchange, and in his association with Portunus, a similar harbor and gateway god, he was concerned with travelling, trading and shipping. Janus had no flamen or specialised priest (sacerdos) assigned to him; instead the King of the Sacred Rites (rex sacrorum) himself carried out his ceremonies. Janus had a ubiquitous presence in religious ceremonies throughout the year, and as such was ritually invoked at the beginning of each ceremony, regardless of the main deity honored on any particular occasion. The ancient Greeks had no equivalent to Janus, whom the Romans claimed as distinctively their own. Three etymologies were proposed by ancient scholars, each of them bearing implications about the nature of the god. The first is based on the definition of Chaos given by Paul the Deacon: hiantem, hiare, "to be open", from which word Ianus would derive by loss of the initial aspirate. In this etymology, the notion of Chaos would define the primordial nature of the god.
Another etymology proposed by Nigidius Figulus is related by Macrobius: Ianus would be Apollo, and Diana, Iana, by the addition of a D for the sake of euphony. This explanation has been accepted by A. B. Cook and J. G. Frazer. It supports all the assimilations of Janus to the bright sky, the sun and the moon. It supposes a former *Dianus, formed on *dia- < *dy-eh₂- from the Indo-European root *dey- "shine", represented in Latin by dies "day", Diovis and Iuppiter. However, the form Dianus postulated by Nigidius is not attested. A third etymology, indicated by Cicero, Ovid and Macrobius, explains the name as Latin, deriving it from the verb ire ("to go"), and is based on the interpretation of Janus as the god of beginnings and transitions. Modern scholars have conjectured that it derives from the Indo-European root meaning transitional movement (cf. Sanskrit yana- or Avestan yah-, likewise with Latin i- and Greek ei-). Iānus would then be an action noun expressing the idea of going or passing, formed on the root *yā- < *y-eh₂-, theme II of the root *ey- "go", from which eō and εἶμι. Other modern scholars object to an Indo-European etymology either from Dianus or from the root *yā-. From Ianus derived ianua ("door"), and hence the English word "janitor" (Latin ianitor). While the fundamental nature of Janus is debated, in most modern scholars' view the god's functions may be seen as organized around a single principle: presiding over all beginnings and transitions, whether abstract or concrete, sacred or profane. Interpretations concerning the god's fundamental nature either limit it to this general function, or emphasize a concrete or particular aspect of it (identifying him with light, the sun, the moon, time, movement, the year, doorways, bridges, etc.), or else see in the god a sort of cosmological principle, interpreting him as a uranic deity. Almost all of these modern explanations were originally formulated by the ancients.
The function of god of beginnings has been clearly expressed in numerous ancient sources, among them most notably Cicero, Ovid, and Varro. As a god of motion, Janus looks after passages, causes actions to start and presides over all beginnings. Since movement and change are interconnected, he has a double nature, symbolised in his two-headed image. He has under his tutelage the stepping in and out of the door of homes, the ianua, which took its name from him, and not vice versa. Similarly, his tutelage extends to the covered passages named iani and foremost to the gates of the city, including the cultic gate of the Argiletum, named Ianus Geminus or Porta Ianualis, from which he protects Rome against the Sabines. He is also present at the Sororium Tigillum, where he guards the terminus of the ways into Rome from Latium. He has an altar, later a temple, near the Porta Carmentalis, where the road leading to Veii ended, as well as being present on the Janiculum, a gateway from Rome out to Etruria. The connection of the notions of beginning (principium), movement, transition (eundo), and thence time was clearly expressed by Cicero. In general, Janus is at the origin of time as the guardian of the gates of Heaven: Jupiter himself can move forth and back because of Janus's working. In one of his temples, probably that of the Forum Holitorium, the hands of his statue were positioned to signify the number 355 (the number of days in a lunar year), later 365, symbolically expressing his mastership over time. He presides over the concrete and abstract beginnings of the world, such as religion and the gods themselves; he also holds the access to Heaven and to the other gods: this is the reason why men must invoke him first, regardless of the god they want to pray to or placate. He is the initiator of human life, of new historical ages, and of financial enterprises: according to myth he was the first to mint coins, and the as, the first coin of the libral series, bears his effigy on one face.
Janus frequently symbolized change and transitions, such as the progress of past to future, from one condition to another, from one vision to another, and young people's growth to adulthood. He represented time, because he could see into the past with one face and into the future with the other. Hence, Janus was worshipped at the beginnings of the harvest and planting times, as well as at marriages, deaths and other beginnings. He represented the middle ground between barbarism and civilization, rural and urban space, youth and adulthood. Having jurisdiction over beginnings, Janus had an intrinsic association with omens and auspices. Leonhard Schmitz suggests that he was likely the most important god in the Roman archaic pantheon. He was often invoked together with Iuppiter (Jupiter). In one of his works G. Dumézil has postulated the existence of a structural difference in level between the Indo-European gods of beginning and ending and the other gods, who fall into a tripartite structure reflecting the most ancient organization of society. So in Indo-European religions there is an introducer god (as Vedic Vâyu and Roman Janus) and a god of ending, a nurturer goddess and a genie of fire (as Vedic Saraswati and Agni, Avestic Armaiti, Anâitâ and Roman Vesta) who show a sort of mutual solidarity: the concept of "god of ending" is defined in connection to the human referential, i.e. the current situation of man in the universe, and not to endings as transitions, which are under the jurisdiction of the gods of beginning owing to the ambivalent nature of the concept. Thus the god of beginning is not structurally reducible to a sovereign god, nor the goddess of ending to any of the three categories onto which the goddesses are distributed. There is, though, a greater degree of fuzziness concerning the function and role of goddesses, which may have formed a preexisting structure allowing the absorption of the local Mediterranean mother goddesses, nurturers and protectresses.
As a consequence, the position of the gods of beginning would not be the result of a diachronic process of debasement undergone by a supreme uranic god, but rather a structural feature inherent to their theology. The fall of uranic primordial gods into the condition of deus otiosus is a well-known phenomenon in the history of religions. Mircea Eliade gave a positive evaluation of Dumézil's views and of the results in comparative research on Indo-European religions achieved in Tarpeia, even though he himself in many of his works observed and discussed the phenomenon of the fall of uranic deities in numerous societies of ethnologic interest. According to Macrobius, who cites Nigidius Figulus and Cicero, Janus and Jana (Diana) are a pair of divinities, worshipped as Apollo or the sun and the moon, whence Janus received sacrifices before all the others, because through him is apparent the way of access to the desired deity. A similar solar interpretation has been offered by A. Audin, who interprets the god as the outcome of a long process of development, starting with the Sumerian cultures, from the two solar pillars located on the eastern side of temples, each of them marking the direction of the rising sun at the dates of the two solstices: the southeastern corresponding to the Winter and the northeastern to the Summer solstice. These two pillars would be at the origin of the theology of the divine twins, one of whom is mortal (related to the NE pillar, as bordering the region where the sun does not shine) and the other immortal (related to the SE pillar and the region where the sun always shines). Later these iconographic models evolved in the Middle East and Egypt into a single column representing two torsos and finally a single body with two heads looking in opposite directions. Numa, in his regulation of the Roman calendar, called the first month Januarius after Janus, according to tradition considered the highest divinity at the time.
Numa built the Ianus geminus (also Janus Bifrons, Janus Quirinus or Portae Belli), a passage ritually opened at times of war and shut again when Roman arms rested. It formed a walled enclosure with gates at each end, situated between the old Roman Forum and that of Julius Caesar, which had been consecrated by Numa Pompilius himself. About the exact location and aspect of the temple there has been much debate among scholars. In wartime the gates of the Janus were opened, and in its interior sacrifices and vaticinia were held to forecast the outcome of military deeds. The doors were closed only during peacetime, an extremely rare event. The function of the Ianus Geminus was supposed to be a sort of good omen: in time of peace it was said to close the wars within or to keep peace inside; in times of war it was said to be open to allow the return of the people on duty. A temple of Janus is said to have been consecrated by the consul Gaius Duilius in 260 BC after the Battle of Mylae in the Forum Holitorium. It contained a statue of the god with the right hand showing the number 300 and the left the number 65 (i.e., the length in days of the solar year), and twelve altars, one for each month. The four-sided structure known as the Arch of Janus in the Forum Transitorium dates from the 1st century of the Christian era: according to common opinion it was built by the Emperor Domitian. However, the American scholars L. Ross Taylor and L. Adams Holland, on the grounds of a passage of Statius, maintain that it was an earlier structure (tradition has it the Ianus Quadrifrons was brought to Rome from Falerii) and that Domitian only surrounded it with his new forum. In fact, the building of the Forum Transitorium was completed and inaugurated by Nerva in AD 96. One way of investigating the complex nature of Janus is by systematically analysing his cultic epithets: religious documents may preserve a notion of a deity's theology more accurately than other literary sources.
The main sources of Janus's cult epithets are the fragments of the Carmen Saliare preserved by Varro in his work De Lingua Latina; a list preserved in a passage of Macrobius's Saturnalia (I 9, 15-16); another in a passage of Johannes Lydus's De Mensibus (IV 1); a list in Cedrenus's Historiarum Compendium (I p. 295 7 Bonn), partly dependent on Lydus's; and one in Servius Honoratus's commentary to the Aeneis (VII 610). Literary works also preserve some of Janus's cult epithets, such as Ovid's long passage of the Fasti devoted to Janus at the beginning of Book I (89-293), Tertullian, Augustine and Arnobius. As may be expected, the opening verses of the Carmen are devoted to honouring Janus, and were thence named versus ianuli. Paul the Deacon mentions the versus ianuli, iovii, iunonii, minervii. Only part of the versus ianuli and two of the iovii are preserved. The manuscript has (paragraph 26): "cozeulodorieso. omia ũo adpatula coemisse. / ian cusianes duonus ceruses. dun; ianusue uet põmelios eum recum"; (paragraph 27): "diuum êpta cante diuum deo supplicante." "ianitos". Many reconstructions have been proposed: they vary widely in dubious points and are all tentative; nonetheless one can identify with certainty some epithets: Cozeiuod orieso. Omnia vortitod Patulti; oenus es iancus (or ianeus), Iane, es, duonus Cerus es, duonus Ianus. Veniet potissimum melios eum recum. Diuum eum patrem (or partem) cante, diuum deo supplicate. ianitos. The epithets that can be identified are: Cozeuios, i.e. Conseuius the Sower, which opens the carmen and is attested as an old form of Consivius in Tertullian; Patultius: the Opener; Iancus or Ianeus: the Gatekeeper; Duonus Cerus: the Good Creator; rex: king (potissimum melios eum recum: the most powerful and best of kings); diuum patrem (partem): father of the gods (or part of the gods); diuum deus: god of the gods; and ianitos: the Janitor, Gatekeeper. The above-mentioned sources give: Ianus Geminus, I. Pater, I.
Iunonius, I. Consivius, I. Quirinus, I. Patulcius and Clusivius (Macrobius above I 9, 15): Ι. Κονσίβιον, Ι. Κήνουλον, Ι. Κιβουλλιον, I. Πατρίκιον, I. Κλουσίβιον, I. Ιουνώνιον, I. Κυρινον, I. Πατούλκιον, I. Κλούσιον, I. Κουριάτιον (Lydus above IV 1); I. Κιβούλλιον, I. Κυρινον, I. Κονσαιον, I. Πατρίκιον (Cedrenus Historiarum Compendium I p. 295 7 Bonn); I. Clusiuius, I. Patulcius, I. Iunonius, I. Quirinus (Servius Aen. VII 610). Even though the lists overlap to a certain extent (five epithets are common to Macrobius's and Lydus's lists), the explanations of the epithets differ remarkably. Macrobius's list and explanation are probably based directly on Cornelius Labeo's work, as he cites this author often in his Saturnalia, as when he gives a list of Maia's cult epithets and mentions one of his works, Fasti. In relating Janus's epithets Macrobius states: "We invoke in the sacred rites". Labeo himself, as is stated in the passage on Maia, read them in the lists of indigitamenta of the libri pontificum. On the other hand, Lydus's authority cannot have consulted these documents, precisely because he offers different (and sometimes bizarre) explanations for the common epithets: it seems likely he received a list with no interpretations appended, and his interpretations are only his own. Pater is perhaps the most frequent epithet of Janus, found also in the composition Ianuspater. While numerous gods share this cultic epithet, it seems the Romans felt it was typically pertinent to Janus. When invoked along with other gods, usually only he is called pater. For Janus the title is not just a term of respect; principally it marks his primordial role. He is the first of the gods and thus their father: the formula quasi deorum deum corresponds to the diuum deus of the Carmen Saliare. Similarly, in the expression duonus Cerus, Cerus means creator and is considered a masculine form related to Ceres.
Lydus gives Πατρίκιος (Patricius) and explains it as autóchthon: since he does not give another epithet corresponding to Pater, it may be inferred that Lydus understands Patricius as a synonym of Pater. There is no evidence connecting Janus to gentilician cults or identifying him as a national god particularly venerated by the oldest patrician families. Geminus is the first epithet in Macrobius's list. Although the etymology of the word is unclear, it is certainly related to his most typical character, that of having two faces or heads, as is proved by numerous equivalent expressions. The origin of this epithet might be either concrete, referring directly to the image of the god reproduced on coins and supposed to have been introduced by king Numa in the sanctuary at the lowest point of the Argiletum, or to a feature of the Ianus of the Porta Belli, the double gate ritually opened at the beginning of wars; or abstract, deriving metaphorically from the liminal, intermediary functions of the god themselves: both in time and in space, passages connected two different spheres, realms or worlds. The Janus quadrifrons or quadriformis, brought according to tradition from Falerii in 241 BC and installed by Domitian in the Forum Transitorium, although having a different meaning, seems to be connected to the same theological complex, as its image purports an ability to rule over every direction, element and time of the year. It did not, though, give rise to a new epithet. Patulcius and Clusivius or Clusius are epithets related to an inherent quality and function of doors, that of standing open or shut. Janus as the Gatekeeper has jurisdiction over every kind of door and passage and the power of opening or closing them. Servius interprets Patulcius in the same way. Lydus gives an incorrect translation, "αντί του οδαιον", which however reflects one of the attributes of the god, that of being the protector of roads.
Elsewhere Lydus cites the epithet θυρέος to justify the key held by Janus. The antithetical quality of the two epithets is meant to refer to the alternating opposite conditions and is commonly found in the indigitamenta: in relation to Janus, Macrobius cites instances of Antevorta and Postvorta, the personifications of two indigitations of Carmentis. These epithets are associated with the ritual function of Janus in the opening of the Porta Ianualis or Porta Belli. The rite might go back to times pre-dating the founding of Rome. Poets tried to explain this rite by imagining that the gate closed either war or peace inside the ianus, but in its religious significance it might have been meant to propitiate the return home of the victorious soldiers. Quirinus is a debated epithet. According to some scholars, mostly Francophone, it looks to be strictly related to the ideas of the passage of the Roman people from war back to peace, from the condition of miles, soldier, to that of quiris, citizen occupied in peaceful business, as the rites of the Porta Belli imply. This is in fact the usual sense of the word quirites in Latin. Other scholars, mainly Germanophone, think it is related on the contrary to the martial character of the god Quirinus, an interpretation supported by numerous ancient sources: Lydus, Cedrenus, Macrobius, Ovid, Plutarch and Paul the Deacon. Schilling and Capdeville counter that it is his function of presiding over the return to peace that gave Janus this epithet, as confirmed by his association on 30 March with Pax, Concordia and Salus, even though it is true that Janus as god of all beginnings presides also over that of war and is thus often called belliger, bringer of war, as well as pacificus. This use is also discussed by Dumézil in various works concerning the armed nature of the Mars qui praeest paci, the armed quality of the gods of the third function and the arms of the third function. C.
Koch, on the other hand, sees the epithet Janus Quirinus as a reflection of the god's patronage over the two months beginning and ending the year, after their addition by king Numa in his reform of the calendar. This interpretation too would befit the liminal nature of Janus. The compound term Ianus Quirinus was particularly in vogue at the time of Augustus, its peaceful interpretation complying particularly well with the Augustan ideology of the Pax Romana. The compound Ianus Quirinus is to be found also in the rite of the spolia opima, a lex regia ascribed to Numa, which prescribed that the third-rank spoils of a king or chief killed in battle, those conquered by a common soldier, be consecrated to Ianus Quirinus. Schilling believes the reference of this rite to Ianus Quirinus embodies the original prophetic interpretation, which ascribes to this deity the last and conclusive spoils of Roman history. The epithet Ποπάνων (Popanōn) is attested only by Lydus, who cites Varro as stating that on the day of the kalendae Janus was offered a cake which earned him this title. There is no surviving evidence of this name in Latin, although the rite is attested by Ovid for the kalendae of January and by Paul. This cake was named ianual, but the related epithet of Janus could not plausibly have been Ianualis: Libo has been suggested, but this remains purely hypothetical. The context could allow an Etruscan etymology. Janus owes the epithet Iunonius to his function as patron of all kalends, which are also associated with Juno. In Macrobius's explanation: "Iunonium, as it were, not only does he hold the entry to January, but to all the months: indeed all the kalends are under the jurisdiction of Juno". At the time when the rising of the new moon was observed by the pontifex minor, the rex sacrorum, assisted by him, offered a sacrifice to Janus in the Curia Calabra, while the regina sacrorum sacrificed to Juno in the regia.
Some scholars have maintained that Juno was the primitive paredra of the god. This point bears on the nature of Janus and Juno and is at the core of an important dispute: was Janus a debased ancient uranic supreme god, or were Janus and Jupiter co-existent, their distinct identities structurally inherent to their original theology? Among Francophone scholars, Grimal and (implicitly and partially) Renard and Basanoff have supported the view of a uranic supreme god against Dumézil and Schilling. Among Anglophone scholars, Frazer and Cook have suggested an interpretation of Janus as uranic supreme god. Whatever the case, it is certain that Janus and Juno show a peculiar reciprocal affinity: while Janus is Iunonius, Juno is Ianualis, as she presides over childbirth and the menstrual cycle, and opens doors. Moreover, besides the kalends, Janus and Juno are also associated at the rite of the Tigillum Sororium of 1 October, in which they bear the epithets Ianus Curiatius and Iuno Sororia. These epithets, which swap the functional qualities of the gods, are the most remarkable apparent proof of their proximity. The rite is discussed in detail in the section below. Consivius, sower, is an epithet that reflects the tutelary function of the god at the first instant of human life and of life in general, conception. This function is a particular case of his function of patron of beginnings. As far as man is concerned it is obviously of the greatest importance, even though both Augustine and some modern scholars see it as minor. Augustine shows astonishment at the fact that some of the dii selecti may be engaged in such tasks: "In fact Janus himself first, when pregnancy is conceived,... opens the way to receiving the semen". Varro, on the other hand, was clear about the relevance of the function of starting a new life by opening the way to the semen, and therefore started his enumeration of the gods with Janus, following the pattern of the Carmen Saliare.
Macrobius gives the same interpretation of the epithet in his list: "Consivius, from sowing (conserendo), i.e. from the propagation of the human genre, that is disseminated by the working of Janus", with Conseuius attested as the most ancient form. He, though, does not consider Conseuius to be an epithet of Janus but a theonym in its own right. Lydus understands Consivius as βουλαιον (consiliarius) owing to a conflation with Consus through Ops Consiva or Consivia. The interpretation of Consus as god of advice is already present in Latin authors and is due to a folk etymology supported by the story of the abduction of the Sabine women (which happened on the day of the Consualia aestiva), said to have been advised by Consus. However, no Latin source cites relationships of any kind between Consus and Janus Consivius. Moreover, both the passages that this etymology requires present difficulties, particularly as it seems Consus cannot be etymologically related to the adjective consivius or conseuius, found in Ops Consivia, and thence to the implied notion of sowing. Κήνουλος (Coenulus) and Κιβουλλιος (Cibullius) are not attested by Latin sources. The second epithet is not to be found in Lydus's manuscripts and is present in Cedrenus along with its explanation concerning food and nurture. The editor of Lydus, R. Wünsch, has added Cedrenus's passage after Lydus's own explanation of Coenulus as ευωχιαστικός, good host at a banquet. Capdeville considers Cedrenus's text to be due to a paleographic error: only Coenulus is indubitably an epithet of Janus, and the adjective used to explain it, meaning to present and to treat well at dinner, was used in a ritual invocation before meals, wishing the diners to make good flesh. This is one of the features of Janus, as shown by the myth that associates him with Carna, Cardea, Crane. The epithet Curiatius is found in association with Iuno Sororia as designating the deity to which one of the two altars behind the Tigillum Sororium was dedicated.
Festus and other ancient authors explain Curiatius by the aetiological legend of the Tigillum: the expiation undergone by P. Horatius after his victory over the Alban Curiatii, for the murder of his own sister, by walking under a beam with his head veiled. Capdeville sees this epithet as related exclusively to the characters of the legend and the rite itself: he invokes the analysis by Dumézil as his authority. Schilling supposes it was probably a sacrum originally entrusted to the gens Horatia that allowed the desacralisation of the iuvenes at the end of the military season, later transferred to the state. Janus's patronage of a rite of passage would be natural. The presence of Juno would be related to the date (Kalends), her protection of the iuvenes, soldiers, or the legend itself. Renard connects the epithet's meaning to the cu(i)ris, the spear of Juno Curitis, as here she is given the epithet of Sororia, corresponding to the usual epithet Geminus of Janus and to the twin or feminine nature of the passage between two coupled posts. Schilling opines that it is related to curia, as the Tigillum was located not far from the curiae veteres; however this interpretation, although supported by an inscription (lictor curiatius), is considered unacceptable by Renard because of the different quantity of the u, short in curiatius, curis and Curitis and long in curia. Moreover, it is part of the different interpretation of the meaning of the ritual of the Tigillum Sororium proposed by Herbert Jennings Rose, Kurt Latte and Robert Schilling himself. However, the etymology of Curiatius remains uncertain. On the role of Janus in the rite of the Tigillum Sororium see also the section below. The rites concerning Janus were numerous. Owing to the versatile and far-reaching character of his basic function, marking all beginnings and transitions, his presence was ubiquitous and fragmented.
Apart from the rites solemnizing the beginning of the new year and of every month, there were the special times of the year which marked the beginning and closing of the military season, in March and October respectively. These included the rite of the arma movēre on 1 March and that of the arma condĕre at the end of the month, performed by the Salii, and the Tigillum Sororium on 1 October. Janus Quirinus was closely associated with the anniversaries of the dedications of the temples of Mars on 1 June (a date that corresponded with the festival of Carna, a deity associated with Janus: see below) and of that of Quirinus on 29 June (which was the last day of the month in the pre-Julian calendar). These important rites are discussed in detail below. Any rite or religious act whatever required the invocation of Janus first, with a corresponding invocation of Vesta at the end (Janus primus and Vesta extrema). Instances are to be found in the Carmen Saliare, the formula of the devotio, the lustration of the fields and the sacrifice of the porca praecidanea, and the Acta of the Arval Brethren. Although Janus had no flamen, he was closely associated with the rex sacrorum, who performed his sacrifices and took part in most of his rites: the rex held the first place in the ordo sacerdotum, the hierarchy of priests. The flamen of Portunus performed the ritual greasing of the spear of the god Quirinus on 17 August, the day of the Portunalia, the same date on which the temple of Janus in the Forum Holitorium had been consecrated by the consul Gaius Duilius in 260 BC. The Winter solstice was thought to occur on 25 December. January 1 was New Year's Day: the day was consecrated to Janus since it was the first of the new year and of the month (kalends) of Janus: the feria had an augural character, as Romans believed the beginning of anything was an omen for the whole. Thus on that day it was customary to exchange cheerful words of good wishes.
For the same reason everybody devoted a short time to his usual business, exchanged dates, figs and honey as a token of well wishing, and made gifts of coins called strenae. Cakes made of spelt (far) and salt were offered to the god and burnt on the altar. Ovid states that in most ancient times there were no animal sacrifices and gods were propitiated with offerings of spelt and pure salt. This libum was named ianual and it probably corresponded to the summanal offered the day before the Summer solstice to the god Summanus, which, however, was sweet, being made with flour, honey and milk. Shortly afterwards, on 9 January, on the feria of the Agonium of January, the rex sacrorum offered the sacrifice of a ram to Janus. At the kalends of each month the rex sacrorum and the pontifex minor offered a sacrifice to Janus in the curia Calabra, while the regina offered a sow or a she-lamb to Juno. Morning belonged to Janus: men started their daily activities and business. Horace calls him Matutine Pater, morning father. G. Dumézil believes this custom is at the origin of the learned interpretations of Janus as a solar deity. Janus was also involved in spatial transitions, presiding over home doors, city gates and boundaries. Numerous toponyms of places located at the boundary between the territory of two communities, especially Etruscans and Latins or Umbrians, are named after the god. The most notable instance is the Ianiculum, which marked the access to Etruria from Rome. Since borders often coincided with rivers and the border of Rome (and other Italics) with Etruria was the Tiber, it has been argued that its crossing had a religious connotation; it would have involved a set of rigorous apotropaic practices and a devotional attitude. Janus would originally have regulated particularly the crossing of this sacred river through the pons sublicius. The name of the Iāniculum is derived not from that of the god, but from the abstract noun iānus, -us.
Adams Holland opines it would have been originally the name of a small bridge connecting the Tiber Island (on which she supposes the first shrine of Janus stood) with the right bank of the river. However Janus was the protector of doors, gates and roadways in general, as is shown by his two symbols, the key and the staff. The key too was a sign that the traveller had come to a harbour or ford in peace in order to exchange his goods. The rite of the bride's oiling the posts of the door of her new home with wolf fat at her arrival, though not mentioning Janus explicitly, is a rite of passage related to the ianua. The rites of the Salii marked the springtime beginning of the war season in March and its closing in October. The structure of the patrician sodalitas, made up of the two groups of the Salii Palatini, who were consecrated to Mars and whose institution was traditionally ascribed to Numa (with headquarters on the Palatine), and the Salii Collini or Agonales, consecrated to Quirinus and whose foundation was ascribed to Tullus Hostilius (with headquarters on the Quirinal), reflects in its division the dialectic symbolic role they played in the rites of the opening and closing of the military season. So does the legend of their foundation itself: the peace-loving king Numa instituted the Salii of Mars Gradivus, foreseeing the future wars of the Romans, while the warmonger king Tullus, in a battle during a longstanding war with the Sabines, swore to found a second group of Salii should he obtain victory. The paradox of the pacifist king serving Mars and the passage to war and of the warmonger king serving Quirinus to achieve peace under the expected conditions highlights the dialectic nature of the cooperation between the two gods, inherent to their own function. Through the working of the talismans of the sovereign god they guaranteed alternately force and victory, fecundity and plenty.
It is noteworthy that the two groups of Salii did not split their competences so that one group only opened the way to war and the other to peace: they worked together both at the opening and the conclusion of the military season, marking the passage of power from one god to the other. Thus the Salii enacted the dialectic nature present in the warring and peaceful aspect of the Roman people, particularly the iuvenes. This dialectic was reflected materially by the location of the temple of Mars outside the pomerium and of the temple of Quirinus inside it. The annual dialectic rhythm of the rites of the Salii of March and October was also further reflected within the rites of each month and spatially by their repeated crossing of the pomerial line. The rites of March started on the first with the ceremony of the ancilia movere, developed through the month on the 14th with the Equirria in the Campus Martius (and the rite of Mamurius Veturius marking the expulsion of the old year), the 17th with the Agonium Martiale, the 19th with the Quinquatrus in the Comitium (which corresponds symmetrically with the Armilustrium of 19 October), and on the 23rd with the Tubilustrium, and they terminated at the end of the month with the rite of the ancilia condere. Only after this month-long set of rites was accomplished was it fas to undertake military campaigns. While Janus is sometimes named belliger and sometimes pacificus in accord with his general function of beginner, he is mentioned as Janus Quirinus in relation to the closing of the rites of March at the end of the month together with Pax, Salus and Concordia: this feature is a reflection of the aspect of Janus Quirinus which stresses the quirinal function of bringing peace back and the hope of soldiers for a victorious return.
As the rites of the Salii mimic the passage from peace to war and back to peace by moving between the two poles of Mars and Quirinus in the monthly cycle of March, so they do in the ceremonies of October, with the Equus October ("October Horse") taking place on the Campus Martius, the Armilustrium, purification of the arms, on the Aventine, and the Tubilustrium on the 23rd. Other correspondences may be found in the dates of the founding of the temples of Mars on 1 June and of that of Quirinus on 29 June, in the pre-Julian calendar the last day of the month, implying that the opening of the month belonged to Mars and the closing to Quirinus. The reciprocity of the two gods' situations is subsumed under the role of opener and closer played by Janus, as Ovid states: "Why are you hidden in peace, and open when the arms have been moved?" Another analogous correspondence may be found in the festival of the Quirinalia of February, the last month of the ancient calendar of Numa. The rite of the opening and closure of the Janus Quirinus would thus reflect the idea of the reintegration of the miles into civil society, i.e. the community of the quirites, by playing a lustral role similar to the Tigillum Sororium and the porta triumphalis located at the south of the Campus Martius. In Augustan ideology this symbolic meaning was strongly emphasised. This rite was supposed to commemorate the expiation of the murder of his own sister by P. Horatius. The young hero with his head veiled had to pass under a beam spanning an alley. The rite was repeated every year on 1 October. The tigillum consisted of a beam on two posts. It was kept in good condition at public expense up to the time of Livy. Behind the tigillum, on opposite sides of the alley, stood the two altars of Janus Curiatius and Juno Sororia. Its location was on the vicus leading to the Carinae, perhaps at the point of the crossing of the pomerium.
The rite and myth have been interpreted by Dumézil as a purification and desacralization of the soldiers from the religious pollution contracted in war, and a freeing of the warrior from furor, wrath, as dangerous in the city as it is necessary on campaign. The rite took place on the kalends of October, the month marking the end of the yearly military activity in ancient Rome. Scholars have offered different interpretations of the meaning of Janus Curiatius and Juno Sororia. The association of the two gods with this rite is not immediately clear. It is however apparent that they exchanged their epithets, as Curiatius is connected to (Juno) Curitis and Sororia to (Janus) Geminus. Renard thinks that while Janus is the god of motion and transitions he is not concerned directly with purification, while the arch is more associated with Juno. This would be attested by the epithet Sororium, shared by the tigillum and the goddess. Juno Curitis is also the protectress of the iuvenes, the young soldiers. Paul the Deacon states that the sororium tigillum was a sacer (sacred) place in honour of Juno. Another element linking Juno with Janus is her identification with Carna, suggested by the festival of this deity on the kalends (day of Juno) of June, the month of Juno. Carna was a nymph of the sacred lucus of Helernus, made goddess of hinges by Janus with the name of Cardea, and had the power of protecting and purifying thresholds and doorposts. This would be a further element in explaining the role of Juno in the Tigillum. It was also customary for new brides to oil the posts of the door of their new homes with wolf fat. In the myth of Janus and Carna (see section below), Carna, when pursued by a young man, had the habit of asking him out of shyness for a hidden recess and thereupon fleeing: but two-headed Janus saw her hiding in a crag under some rocks.
Thence the analogy with the rite of the Tigillum Sororium would be apparent: both in the myth and in the rite Janus, the god of motion, goes through a low passage to attain Carna, as Horatius passes under the tigillum to obtain his purification and the restitution to the condition of citizen eligible for civil activities, including family life. The purification is then the prerequisite for fertility. The custom of attaining lustration and fertility by passing under a gap in rocks, a hole in the soil or a hollow in a tree is widespread. The veiled head of Horatius could also be explained as an apotropaic device if one considers the tigillum the iugum of Juno, the feminine principle of fecundity. Renard concludes that the rite is under the tutelage of both Janus and Juno, being a rite of transition under the patronage of Janus and of desacralisation and fertility under that of Juno: through it the iuvenes coming back from campaign were restored to their fertile condition of husbands and peasants. Janus is often associated with fecundity in myths, representing the masculine principle of motion, while Juno represents the complementary feminine principle of fertility: the action of the first would allow the manifestation of the other. In discussing myths about Janus, one should be careful to distinguish those which are ancient and originally Latin from those which were later attributed to him by Greek mythographers. In the Fasti Ovid relates only the myths that associate Janus with Saturn, whom he welcomed as a guest and with whom he eventually shared his kingdom in reward for teaching the art of agriculture, and with the nymph Crane (Grane or Carna), whom Janus raped and made the goddess of hinges as Cardea, while in the Metamorphoses he records his fathering with Venilia the nymph Canens, loved by Picus, first legendary king of the Aborigines. The myth of Crane has been studied by M. Renard and G. Dumézil.
The first scholar sees in it a sort of parallel with the theology underlying the rite of the Tigillum Sororium. Crane is a nymph of the sacred wood of Helernus, located at the issue of the Tiber, whose festival of 1 February corresponded with that of Juno Sospita: Crane might be seen as a minor imago of the goddess. Her habit of deceiving her male pursuers by hiding in crags in the soil reveals her association not only with vegetation but also with rocks, caverns, and underpassages. Her nature also looks to be associated with vegetation and nurture: G. Dumézil has proved that Helernus was a god of vegetation, vegetative lushness and orchards, particularly associated with vetch. As Ovid writes in his Fasti, 1 June was the festival day of Carna, besides being the kalendary festival of the month of Juno and the festival of Juno Moneta. Ovid seems to purposefully conflate and identify Carna with Cardea in the aetiological myth related above. Consequently, the association of both Janus and the god Helernus with Carna-Crane is highlighted in this myth: it was customary on that day to eat vetch and lard, which were supposed to strengthen the body. Cardea also had magic powers for protecting doorways (by touching thresholds and posts with wet hawthorn twigs) and newborn children from the aggression of the striges (in the myth the young Proca). M. Renard sees the association of Janus with Crane as reminiscent of widespread rites of lustration and fertility performed through ritual walking under low crags or holes in the soil or natural hollows in trees, which in turn are reflected in the lustrative rite of the Tigillum Sororium. Macrobius relates that Janus was supposed to have shared a kingdom with Camese in Latium, in a place then named Camesene. He states that Hyginus recorded the tale on the authority of a Protarchus of Tralles. In Macrobius Camese is a male: after Camese's death Janus reigned alone.
However Greek authors make Camese Janus's sister and spouse: Athenaeus, citing a certain Drakon of Corcyra, writes that Janus fathered with his sister Camese a son named Aithex and a daughter named Olistene. Servius Danielis states Tiber (i.e., Tiberinus) was their son. Arnobius writes that Fontus was the son of Janus and Juturna. The name itself proves that this is a secondary form of Fons modelled on Janus, betraying the late character of this myth: it was probably conceived because of the proximity of the festivals of Juturna (11 January) and the Agonium of Janus (9 January), as well as for the presence of an altar of Fons near the Janiculum and the closeness of the notions of spring and of beginning. Plutarch writes that according to some Janus was a Greek from Perrhebia. When Romulus and his men kidnapped the Sabine women, Janus caused a volcanic hot spring to erupt; the would-be attackers were buried alive in the scalding mixture of water and ash, which killed, burned, or disfigured many of Tatius's men. This spring is called Lautolae by Varro. Later on, however, the Sabines and Romans agreed on creating a new community together. In honor of this, the doors of a walled roofless structure called "The Janus" (not a temple) were kept open during war after a symbolic contingent of soldiers had marched through it. The doors were closed in ceremony when peace was concluded. In accord with his fundamental character of being the Beginner, Janus was considered by Romans the first king of Latium, sometimes along with Camese. He would have hospitably received the god Saturn, who, expelled from Heaven by Jupiter, arrived on a ship at the Janiculum. Janus would also have effected the miracle of turning the waters of the spring at the foot of the Viminal from cold to scorching hot in order to fend off the assault of the Sabines of king Titus Tatius, who had come to avenge the kidnapping of their daughters by the Romans.
His temple, named Janus Geminus, had to stand open in times of war. It was said to have been built by king Numa Pompilius, who kept it always shut during his reign as there were no wars. After him it was closed very few times: once after the end of the First Punic War, three times under Augustus and once by Nero. It is recorded that emperor Gordianus III opened the Janus Geminus. It is a noteworthy curiosity that the opening of the Janus was perhaps the last act connected to the ancient religion in Rome: Procopius writes that in 536, during the Gothic War, while general Belisarius was under siege in Rome, at night somebody stealthily opened the Janus Geminus, which had stayed closed since the 390 edict of Theodosius I that banned the ancient cults. Janus was faithful to his liminal role also in the marking of this last act. The uniqueness of Janus in Latium has suggested to L. Adams Holland and J. Gagé the hypothesis of a cult brought from far away by sailors and strictly linked to the amphibious life of the primitive communities living on the banks of the Tiber. The ship of Saturn in the myth of Janus, as well as the myth of Carmenta and Evander, are reminiscent of an ancient pre-Roman sailing life. The elements that seem to connect Janus to sailing are presented in two articles by J. Gagé, summarised below. 1. The boat of Janus and the beliefs of the primitive sailing techniques. a) The proximity of Janus and Portunus and the functions of the flamen Portunalis. The temple of Janus was dedicated by Gaius Duilius on 17 August, the day of the Portunalia. The key was the symbol of both gods and was also meant to signify that the boarding boat was a peaceful merchant boat. The flamen Portunalis oiled the arms of Quirinus with an ointment kept in a peculiar container named persillum, a term perhaps derived from Etruscan persie.
A similar object seems to be represented in a fresco picture of the Calendar of Ostia, on which young boys prepare to apply a resin contained in a basin to a boat on a cart, i.e. one yet to be launched. b) The Tigillum Sororium would be related to a gentilician cult of wood of the Horatii, as surmised from the episodes of the pons sublicius defended by Horatius Cocles and of the posts of the main entrance of the temple of Jupiter Capitolinus, on which Marcus Horatius Pulvillus laid his hand during the dedication rite. Gagé thinks the magic power of the Tigillum Sororium should be ascribed to the lively and burgeoning nature of wood. 2. Religious quality of trees such as the wild olive and the Greek or Italic lotus (Celtis australis), analogous to that of the corniolum and the wild fig, for sailing communities: their wood does not rot in sea water, hence it was used in shipbuilding and in the making of rollers for hauling ships overland. 3. Janus and the depiction of Boreas as Bifrons: climatological elements. a) The calendar of Numa and the role of Janus. Contradictions of the ancient Roman calendar on the beginning of the new year: originally March was the first month and February the last one. January, the month of Janus, became the first only later, through several manipulations. The liminal character of Janus is nonetheless present in his association with the Saturnalia of December, reflecting the strict relationship between the two gods Janus and Saturn and the rather blurred distinction of their stories and symbols. The initial role of Janus in the political-religious operations of January: the nuncupatio votorum spanning the year, the imperial symbol of the boat in the opening rite of the sailing season, the vota felicia: Janus and his myths allow for an ancient interpretation of the vota felicia, different from the Isiadic one. b) The idea of the Seasons in the ancient traditions of the Ionian Islands. The crossing of the Hyperborean myths.
Cephalonia as a place at the crossing of famous winds. Application of the theory of winds for navigation in the Ionian Sea. The type Boreas Bifrons as a probable model of the Roman Janus. This observation was made first by the Roscher Lexicon: "Ianus is he too, doubtlessly, a god of wind", and repeated in the RE Pauly-Wissowa s.v. Boreas by Rapp. P. Grimal has taken up this interpretation, connecting it to a red-figure vase representing Boreas pursuing the nymph Oreithyia: Boreas is depicted as a two-headed winged demon, the two faces with beards, one black and the other fair, perhaps symbolising the double movement of the winds Boreas and Antiboreas. This shows that the Greeks of the 5th century BC knew the image of Janus. Gagé feels compelled to mention here another parallel with Janus to be found in the figure of Argos with one hundred eyes and in his association with his murderer Hermes. c) Solar, solsticial and cosmological elements. While there is no direct proof of an original solar meaning of Janus, this being the issue of learned speculations of the Roman erudites initiated into the mysteries and of emperors such as Domitian, the derivation from a Syrian cosmogonic deity proposed by P. Grimal looks more acceptable. Gagé, though, sees an ancient, preclassical Greek mythic substratum to which belong Deucalion and Pyrrha and the Hyperborean origins of the Delphic cult of Apollo, as well as the Argonauts. The belief in the magic power of trees is reflected in the use of olive wood, as for the rollers of the ship Argo: the myth of the Argonauts has links with Corcyra, remembered by Lucius Ampelius. 4. The sites of the cults of Janus at Rome and his associations in ancient Latium. a) Argiletum. Varro gives as an etymology of the word Argi-letum either the myth of the killing of Argos ("death of Argos"), which looks to be purely fantastic, or that of a place located on clayey soil (argilla in Latin).
The place so named stood at the foot of the Viminal, the hill of the reeds. It could also refer to the white willow tree, used to make objects of trelliswork. b) The Janiculum may have been inhabited by people who were not Latin but had close alliances with Rome. The right bank of the Tiber would constitute a typical, convenient, commodious landing place for boats, and the cult of Janus would have been double insofar as it was amphibious. c) Janus's cultic alliances and relations in Latium would show a Prelatin character. Janus has no association in cult (calendar or prayer formulae) with any other entity. Even though he bears the epithet of Pater he is no head of a divine family; however some testimonies lend him a companion, sometimes female, and a son and/or a daughter. They belong to the family of the nymphs or genies of springs. Janus intervenes in the miracle of the hot spring during the battle between Romulus and Tatius: Juturna and the nymphs of the springs are clearly related to Janus, as is Venus, who in Ovid's Metamorphoses cooperates in the miracle and may have been confused with Venilia; or perhaps the two might have been originally one. Janus has a direct link only to Venilia, with whom he fathered Canens. The magic role of the wild olive tree (oleaster) is prominent in the description of the duel between Aeneas and Turnus, reflecting its religious significance and powers: it was sacred to sailors, including those who had been shipwrecked, as a protecting guide to the shore. It was probably venerated by a Prelatin culture in association with Faunus. In the story of Venulus coming back from Apulia, too, one may see the religious connotation of the wild olive: the king discovers one into which a local shepherd had been turned for failing to respect the nymphs he had come across in a nearby cavern, apparently Venilia, who was the deity associated with the magic virtues of such a tree.
Gagé finds it remarkable that the characters related to Janus are, in the Aeneid, on the side of the Rutuli. In the poem Janus would be represented by Tiberinus. Olistene, the daughter of Janus with Camese, may reflect in her name that of the olive or oleaster, or of Oreithyia. Camese may be reflected in Carmenta: Evander's mother is from Arcadia, comes to Latium as an exiled migrant and has her two festivals in January; Camese's name at any rate does not look Latin. 5. Sociological remarks. a) The vagueness of Janus's association with the cults of primitive Latium and his indifference towards the social composition of the Roman State suggest that he was a god of an earlier amphibious merchant society in which the role of the guardian god was indispensable. b) Janus bifrons and the Penates. Even though the cult of Janus cannot be confused with that of the Penates, related to Dardanian migrants from Troy, the binary nature of the Penates and of Janus postulates a corresponding ethnic or social organisation. Here the model is thought to be provided by the cult of the Magni Dei or Cabeiri preserved at Samothrace and worshipped particularly among sailing merchants. The aetiological myth is noteworthy too: at the beginning one finds Dardanos and his brother Iasios appearing as auxiliary figures in a Phrygian cult to a Great Mother. In Italy there is a trace of a conflict between worshippers of the Argive Hera (Diomedes and the Diomedians of the south) and of the Penates. The cult of Janus looks to be related to social groups that remained at the fringe of the Phrygian ones. They might or might not have been related to the cult of the Dioscuri. The relationship between Janus and Juno is defined by the closeness of the notions of beginning and transition and the functions of conception and delivery, the result of youth and vital force. The reader is referred to the above sections Cult epithets and Tigillum Sororium of this article and the corresponding section of the article Juno.
Quirinus is a god who incarnates the quirites, i.e. the Romans in their civil capacity of producers and fathers. He is surnamed Mars tranquillus (peaceful Mars), Mars qui praeest paci (Mars who presides over peace). His function of custos (guardian) is highlighted by the location of his temple inside the pomerium but not far from the gate of Porta Collina or Quirinalis, near the shrines of Sancus and Salus. As a protector of peace he is nevertheless armed, in the same way as the quirites are, as they are potentially milites (soldiers): his statue represents him holding a spear. For this reason Janus, god of gates, is concerned with his function of protector of the civil community. For the same reason the flamen Portunalis oiled the arms of Quirinus, implying that they were to be kept in good order and ready even though they were not to be used immediately. Dumézil and Schilling remark that as a god of the third function Quirinus is peaceful and represents the ideal of the pax romana, i.e. a peace resting on victory. Portunus may be defined as a sort of duplication inside the scope of the powers and attributes of Janus. His original definition shows he was the god of gates and doors and of harbours. In fact it is debated whether his original function was only that of god of gates and the function of god of harbours was a later addition: Paul the Deacon writes: "... he is depicted holding a key in his hand and was thought to be the god of gates". Varro would have stated that he was the god of harbours and patron of gates. His festival day, named Portunalia, fell on 17 August, and he was venerated on that day in a temple ad pontem Aemilium and ad pontem Sublicium that had been dedicated on that date. Portunus, unlike Janus, had his own flamen, named Portunalis. It is noteworthy that the temple of Janus in the Forum Holitorium had been consecrated on the day of the Portunalia and that the flamen Portunalis was in charge of oiling the arms of the statue of Quirinus.
The relationship between Janus and Vesta touches on the question of the nature and function of the gods of beginning and ending in Indo - European religion. While Janus has the first place Vesta has the last, both in theology and in ritual (Ianus primus, Vesta extrema). The last place implies a direct connexion with the situation of the worshipper, in space and in time. Vesta is thence the goddess of the hearth of homes as well as of the city. Her inextinguishable fire is a means for men (as individuals and as a community) to keep in touch with the realm of gods. Thus there is a reciprocal link between the god of beginnings and unending motion, who bestows life to the beings of this world (Cerus Manus) as well as presiding over its end, and the goddess of the hearth of man, which symbolises through fire the presence of life. Vesta is a virgin goddess but at the same time she is considered the mother of Rome: she is thought to be indispensable to the existence and survival of the community. It has long been believed that Janus was present among the theonyms on the outer rim of the Piacenza Liver in case 3 under the name of Ani. This fact created a problem as the god of beginnings looked to be located in a situation other than the initial, i.e. the first case. After the new readings proposed by A. Maggiani, in case 3 one should read TINS: the difficulty has thus dissolved. Ani has thence been eliminated from Etruscan theology as this was his only attestation. Maggiani remarks that this earlier identification was in contradiction with the testimony ascribed to Varro by Johannes Lydus that Janus was named caelum among the Etruscans. 
On the other hand, as expected, Janus is present in region I of Martianus Capella's division of Heaven, and in region XVI, the last one, are to be found the Ianitores terrestres (along with Nocturnus), perhaps to be identified with Forculus, Limentinus and Cardea, deities strictly related to Janus as his auxiliaries (or perhaps even no more than concrete subdivisions of his functions), as the meaning of their names implies: Forculus is the god of the forca, a iugum, low passage, Limentinus the guardian of the limes, boundary, Cardea the goddess of hinges, here of the gates separating Earth and Heaven. The problem posed by the qualifying adjective terrestres (earthly) can be addressed in two different ways. One hypothesis is that Martianus's depiction implies a descent from Heaven onto Earth. However Martianus's depiction does not look to be confined to a division Heaven-Earth, as it includes the Underworld and other obscure regions or remote recesses of Heaven. Thence one may argue that the articulation Ianus-Ianitores could be interpreted as connected to the theologem of the Gates of Heaven (the Symplegades) which open on Heaven on one side and on Earth or the Underworld on the other. From other archaeological documents, though, it has become clear that the Etruscans had another god iconographically corresponding to Janus: Culśanś, of whom there is a bronze statuette from Cortona (now at Cortona Museum). While Janus is a bearded adult, Culśanś may be an unbearded youth, making his identification with Hermes look possible. His name too is connected with the Etruscan word for doors and gates. According to Capdeville he may also be found on the outer rim of the Piacenza Liver in case 14 in the compound form CULALP, i.e., "of Culśanś and of Alpan(u)", on the authority of Pfiffig, but perhaps here it is the female goddess Culśu, the guardian of the door of the Underworld.
Although the location is not strictly identical, there is some approximation in his situations on the Liver and in Martianus's system. A. Audin connects the figure of Janus to Culśanś and Turms (the Etruscan rendering of Hermes, the Greek god mediator between the different worlds, brought by the Etruscans from the Aegean Sea), considering these last two Etruscan deities as one. This interpretation would then identify Janus with the Greek god Hermes. Etruscan medals from Volterra too show the double-headed god, and the Janus Quadrifrons from Falerii may have an Etruscan origin. Roman and Greek authors maintained Janus was an exclusively Roman god. This claim is excessive according to R. Schilling, at least as far as iconography is concerned. A god with two faces appears repeatedly in Sumerian and Babylonian art. The ancient Sumerian deity Isimud was commonly portrayed with two faces facing in opposite directions. Sumerian depictions of Isimud are often very similar to the typical portrayals of Janus in ancient Roman art. Unlike Janus, however, Isimud is not a god of doorways. Instead, he is the messenger of Enki, the ancient Sumerian god of water and civilization. Reproductions of the image of Isimud, whose Babylonian name was Usimu, on cylinders in Sumero-Accadic art can be found in H. Frankfort's work Cylinder Seals (London, 1939), especially in the plates at pp. 106, 123, 132, 133, 137, 165, 245, 247 and 254. On plate XXI, c, Usimu is seen introducing worshippers to a seated god. Janus-like heads of gods related to Hermes have been found in Greece, perhaps suggesting a compound god. William Betham argued that the cult arrived from the Middle East and that Janus corresponds to the Baal-ianus or Belinus of the Chaldeans, sharing a common origin with the Oannes of Berosus. P. Grimal considers Janus a conflation of a Roman god of doorways and an ancient Syro-Hittite uranic cosmogonic god.
The Roman statue of the Janus of the Argiletum, traditionally ascribed to Numa, was possibly very ancient, perhaps a sort of xoanon, like the Greek ones of the 8th century BC. In Hinduism the image of double or four faced gods is quite common, as it is a symbolic depiction of the divine power of seeing through space and time. The supreme god Brahma is represented with four faces. Another instance of a four faced god is the Slavic god Svetovid. Other analogous or comparable deities of the prima in Indo - European religions have been analysed by G. Dumézil. They include the Indian goddess Aditi who is called two - faced as she is the one who starts and concludes ceremonies, and Scandinavian god Heimdallr. The theological features of Heimdallr look similar to Janus 's: both in space and time he stands at the limits. His abode is at the limits of Earth, at the extremity of Heaven; he is the protector of the gods; his birth is at the beginning of time; he is the forefather of mankind, the generator of classes and the founder of the social order. Nonetheless he is inferior to the sovereign god Oðinn: the Minor Völuspá defines his relationship to Oðinn almost with the same terms as those in which Varro defines that of Janus, god of the prima to Jupiter, god of the summa: Heimdallr is born as the firstborn (primigenius, var einn borinn í árdaga), Oðinn is born as the greatest (maximus, var einn borinn öllum meiri). Analogous Iranian formulae are to be found in an Avestic gāthā (Gathas). In other towns of ancient Latium the function of presiding over beginnings was probably performed by other deities of feminine sex, notably the Fortuna Primigenia of Praeneste. In the Middle Ages, Janus was also taken as the symbol of Genoa, whose Medieval Latin name was Ianua, as well as of other European communes. The comune of Selvazzano di Dentro near Padua has a grove and an altar of Janus depicted on its standard, but their existence is unproved. 
Cats with the congenital disorder Diprosopus, which causes the face to be partly or completely duplicated on the head, are known as Janus cats. In Act I Scene 2 of Shakespeare 's Othello, Iago invokes the name of Janus after the failure of his first plot to undo the titular character. As the story 's primary agent of change, it is fitting that Iago align himself with Janus. His schemes prompt the beginning of each of the main characters ' ends: in his absence, Othello and Desdemona would likely have remained married and Cassio would have remained in his respected position of power. Iago guides (if not forces) the story through inception, climax, and finale. Furthermore, Janus ' common two - faced depiction is the perfect visual metaphor for Iago 's character. Othello 's characters believe him to have only the best of intentions, even going so far as to call him "honest Iago, '' completely unaware that he spends every unwatched second plotting their undoing. He appears selfless and compassionate but, in truth, is power - hungry, amoral, and without regard for the well - being of others. In the 1987 thriller novel The Janus Man by British novelist Raymond Harold Sawkins, Janus is used as a metaphor for a Soviet agent infiltrated into the British Secret Intelligence Service - "The Janus Man who faces both East and West ''.
when was the last time the san diego padres made the playoffs
List of San Diego Padres seasons - wikipedia The San Diego Padres are a professional baseball franchise based in San Diego, California. They began play in 1969, but did not achieve their first winning season until 1978, though they failed to sustain this over the next half - decade. However, in 1984 the Padres surprisingly made their first - ever postseason appearance and won the National League Championship Series before losing to a very strong Detroit Tigers team in the World Series. This did not usher in a prolonged period of success for the Padres, who failed to achieve a second postseason appearance until 1996, and after a disappointing 1997 they rebounded again with a franchise - best 98 wins and reached the World Series, only to face another exceptionally formidable opponent in the late - 1990s Yankees dynasty. The Padres again faltered, but achieved four consecutive winning seasons for the only time in franchise history between 2004 and 2007, without winning more than one playoff game. The Padres have not played in the postseason since 2006, despite recording only the fourth 90 - win season in franchise history in 2010. The following table describes the Padres ' MLB win -- loss record by decade. These statistics are from Baseball-Reference.com 's San Diego Padres History & Encyclopedia, and are current as of October 18, 2016.
who played the original ridge on bold and the beautiful
Ronn Moss - wikipedia Ronald Montague "Ronn '' Moss (born March 4, 1952) is an American actor, musician and singer / songwriter, a member of the band Player, and best known for portraying Ridge Forrester, the dynamic fashion magnate on the CBS soap opera The Bold and the Beautiful from 1987 to 2012. Moss was born and raised in Los Angeles. He grew up surrounded by the theatre, concert, and rock & roll music world. At age 11, he started learning to play the drums, guitar and electric bass. In 1976, Moss joined creative forces with fellow singer / guitarist Peter Beckett, guitarist / keyboardist J.C. Crowley, and drummer John Friesen to form the band Player, primarily as bassist and singer. In a garage in the Hollywood Hills, they wrote and rehearsed the music that would soon attract the attention of music impresario Robert Stigwood, who signed them to his RSO Records. In the first three weeks of 1978, their single "Baby Come Back '' occupied the # 1 position on the national pop charts, and Player was voted to the Billboard Magazine honor roll of Top New Singles artists of 1978. Their follow - up single, "This Time I 'm in It for Love '', peaked at No. 10 the same year. "Baby Come Back '' was, alongside "Sara Smile '' by Daryl Hall & John Oates and "How Much I Feel '' by Ambrosia, one of the very few songs by white artists to be played on predominately black radio stations, which enabled them to contribute to the musical genre known as "blue - eyed soul '', in which white artists sing with a strong R & B influence. The Australian edition of his 2002 solo CD "I 'm Your Man '' includes a fresh duet of the original hit "Baby Come Back '' and two bonus tracks, "That 's When I 'll Be Gone '' and "Mountain ''. Moss along with his band toured Australia between August and September 2006. In 2007, he released his second solo album, Uncovered. 
Player went on tour in 2015, with Moss and Peter Beckett playing dates with Orleans and Ambrosia as well as a few "Yacht Rock '' shows with Little River Band. Moss played the Saracen hero Ruggero who fell in love with Bradamante in the 1983 Italian film Paladini - storia d'armi e d'amori (aka Paladins -- the story of love and arms, aka Hearts and Armor) -- a film based on the legends surrounding the Peers of Charlemagne. In 1987, Moss was offered the role of "Ridge Forrester '' on a new soap opera, The Bold and the Beautiful. He accepted the part, and the show was broadcast in many nations across the globe and his role has attracted millions of fans worldwide. To this day, Moss is best known for this role and he frequently travels overseas to visit other countries, make appearances, and promote his music. 1987 also marks the year of Moss ' most critically acclaimed theatrical role as Rowdy Abilene in the Andy Sidaris classic Hard Ticket to Hawaii. Moss has a large fan base in Australia. In 2006, a campaign surfaced to vote Moss as Australian of the Year. Moss was featured in a very popular television commercial for Berri, an Australian orange juice producer. The punchline of the advertisement was "you can tell when it 's not all Aussie '' -- a line intended to show (in jest) that, while Moss has longstanding connections with Australia, his Hollywood career has resulted in his persona differing significantly from that of the cliché Australian male. He has appeared occasionally on the former program Rove Live when in Australia. He takes part in sketches that are parodies of the daytime TV genre. The Bold and the Beautiful was a very popular prime - time show in much of Europe; in 2010, Moss was a participant in the Italian version of Dancing with the Stars, finishing second with his partner Sara Di Vaira. On August 11, 2012, it was announced that Moss would exit The Bold and the Beautiful, after having played the character of Ridge Forrester for 25 years. 
On May 12, 2014, Moss, along with his band Player, appeared on the ABC daytime soap opera General Hospital as part of the 2014 Nurses ' Ball, where they performed their hit single "Baby Come Back ''. As of 2015, he plays the character of Ian in the Flemish soap Familie.
american forces won or lost the battle of chu lai
Battle of Dak To - wikipedia The Battle of Đắk Tô was a series of major engagements of the Vietnam War that took place between November 3 and 23, 1967, in Kon Tum Province, in the Central Highlands of the Republic of Vietnam (South Vietnam). The action at Đắk Tô was one of a series of People 's Army of Vietnam (PAVN) offensive initiatives that began during the second half of the year. North Vietnamese attacks at Lộc Ninh (in Bình Long Province), Song Be (in Phước Long Province), and at Con Thien and Khe Sanh (in Quảng Trị Province), were other actions which, combined with Đắk Tô, became known as "the border battles ''. The objective of the PAVN forces was to draw American and South Vietnamese forces away from the cities towards the borders in preparation for the Tet Offensive. During the summer of 1967, heavy contact with PAVN forces in the area prompted the launching of Operation Greeley, a combined search and destroy effort by elements of the U.S. Army 's 4th Infantry Division and 173rd Airborne Brigade, along with the Army of the Republic of Vietnam 's 42nd Infantry Regiment and Airborne units. The fighting was intense and lasted into the fall, when the North Vietnamese seemingly withdrew. By late October, however, U.S. intelligence indicated that local communist units had been reinforced and combined into the 1st PAVN Division, which was tasked with the capture of Đắk Tô and the destruction of a brigade - size U.S. unit. Information provided by a PAVN defector gave the allies a good indication of the locations of North Vietnamese forces. 
This intelligence prompted the launching of Operation MacArthur, and brought the units back to the area along with more reinforcements from the ARVN Airborne Division. The battles that erupted on the hill masses south and southeast of Đắk Tô became some of the hardest - fought and bloodiest battles of the Vietnam War. During the early stages of the U.S. involvement in the Vietnam War, several U.S. Special Forces Civilian Irregular Defense Group (CIDG) camps were established along the borders of South Vietnam in order to both maintain surveillance of PAVN and National Front for the Liberation of South Vietnam (NLF or Viet Cong) infiltration and to provide support and training to isolated Montagnard villagers, who bore the brunt of the fighting in the area. One of these camps was built near the village and airstrip at Đắk Tô. After 1965, Đắk Tô was also utilized as a Forward Operations Base by the highly classified MACV - SOG, which launched reconnaissance teams from there to gather intelligence on the Ho Chi Minh Trail across the border in Laos. In 1967, under the overall direction of commander of Special Forces in Vietnam Colonel Jonathan Ladd, the camp began to take mortar fire. Ladd flew in, organized reconnaissance and identified the entrenched hill bunker complex as the source of the shelling. Đắk Tô lies on a flat valley floor, surrounded by waves of ridgelines that rise into peaks (some as high as 4,000 feet) that stretch westward and southwestward towards the tri-border region where South Vietnam, Laos, and Cambodia meet. Western Kon Tum Province is covered by double and triple - canopy rainforests, and the only open areas were filled in by bamboo groves whose stalks sometimes reached eight inches in diameter. Landing Zones (LZs) large enough for helicopters were few and far between, which meant that most troop movements could only be carried out on foot. 
Temperatures in the highlands could reach 95 ° Fahrenheit (35 ° Celsius) during the day and could drop to as low as 55 ° Fahrenheit (12.78 ° Celsius) in the evenings. In January 1967, Major - General (later LTG) William R. Peers had taken command of the 4th Infantry Division, which had responsibility for the defense of western Kon Tum Province. Prior to the onset of the summer monsoon, Peers set up blocking positions from the 4th Infantry Division 's 1st Brigade base camp at Jackson Hole, west of Pleiku, and launched Operation Francis Marion on 17 May. The 4th had on hand its 1st and 2nd Brigades while its 3rd Brigade operated with the 25th Infantry Division northwest of Saigon. Throughout the middle of 1967, however, western Kon Tum Province became a magnet for several PAVN spoiling attacks and it appeared that the North Vietnamese were paying an increasing amount of attention to the area. Immediately after taking command, Peers instituted guidelines for his units in order to prevent them from being isolated and overrun in the rugged terrain, which also did much to negate the U.S. superiority in firepower. Battalions were to act as single units instead of breaking down into individual companies in order to search for their enemy. If rifle companies had to act independently, they were not to operate more than one kilometer or one hour 's march from one another. If contact with the enemy was made, the unit was to be immediately reinforced. These measures went far in reducing the 4th Infantry 's casualties. These heavy enemy contacts prompted Peers to request reinforcement and, as a result, on 17 June, two battalions of Brigadier General John R. Deane 's 173rd Airborne Brigade were moved into the Đắk Tô area to begin sweeping the jungle - covered mountains in Operation Greeley. The 173rd had been operating near Bien Hoa Air Base outside Saigon and had been in combat only against NLF guerrillas. 
Prior to its deployment to the highlands, Peers ' operations officer, Colonel William J. Livsey, attempted to warn the Airborne officers of the hazards of campaigning in the highlands. He also advised them that PAVN regulars were a much better equipped and motivated force than the NLF. These warnings, however, made little impression on the paratroopers, who were unaccustomed to PAVN tactics and strength in the area. On 20 June, Charlie Company, 2nd Battalion, 503rd Airborne Infantry (C / 2 / 503) discovered the bodies of a Special Forces CIDG unit that had been missing for four days on Hill 1338, the dominant hill mass south of Dak To. Supported by Alpha Company, the Americans moved up the hill and set up for the night. At 06: 58 the following morning, Alpha Company began moving alone up a ridge finger and triggered an ambush by the 6th Battalion of the 24th PAVN Regiment. Charlie Company was ordered to go to its support, but heavy vegetation and difficult terrain made movement extremely difficult. Artillery support was rendered ineffective by the limited range of visibility and the "belt - grabbing '' - or "hugging '' - tactics of the North Vietnamese. Close air support was impossible for the same reasons. Alpha Company managed to survive repeated attacks throughout the day and night, but the cost was heavy. Of the 137 men who comprised the unit, 76 had been killed and another 23 wounded. A search of the battlefield revealed only 15 dead North Vietnamese. U.S. headquarters press releases, made four days after the conclusion of what came to be called "The Battle of the Slopes '', claimed that 475 North Vietnamese had been killed, while the 173rd 's combat after action report claimed 513 enemy dead. The men of Alpha Company estimated that only 50 -- 75 PAVN troops had been killed during the entire action. Such losses among American troops could not go unpunished. 
The operations officer of the 4th Infantry went so far as to recommend that General Deane be relieved of command. Such a drastic measure, however, would only provide more grist for what was becoming a public relations fiasco. In the end, the commander and junior officers of Charlie Company (whose only crime was that of caution) were transferred to other units. In response to the destruction of Alpha Company, MACV ordered additional forces into the area. On 23 June, the 1st Battalion, 1st Brigade, 1st Air Cavalry Division arrived to bolster the 173rd. The following day, the elite ARVN 1st Airborne Task Force (the 5th and 8th Battalions) and the 3rd Brigade of the 1st Air Cavalry Division arrived to conduct search and destroy operations north and northeast of Kon Tum. General Deane sent his forces 20 kilometers west and southwest of Dak To in search of the 24th PAVN Regiment. After establishing Fire Support Base 4 on Hill 664, approximately 11 kilometers southwest of Đắk Tô, the 4th Battalion, 503rd Airborne Infantry found the North Vietnamese K - 101D Battalion of the Doc Lap Regiment on 10 July. As the four companies of the battalion neared the crest of Hill 830 they were struck by a wall of small arms and machine gun fire and blasted by B - 40 rocket - propelled grenades and mortar fire. Any advance was impossible, so the paratroopers remained in place for the night. The following morning, the North Vietnamese were gone. The 4th of the 503rd suffered 22 dead and 62 wounded. The bodies of three PAVN soldiers were found on the site. North Vietnamese pressure against CIDG outposts at Dak Seang and Dak Sek, 20 and 45 kilometers north of Đắk Tô respectively, was the impetus for dispatching the 42nd ARVN Infantry Regiment into the area while the ARVN Airborne battalion moved to Dak Seang. On 4 August, the 1st of the 42nd encountered the North Vietnamese on a hilltop west of Dak Seang, setting off a three - day battle that drew in the South Vietnamese paratroopers. 
The 8th Airborne, along with U.S. Army advisers, was airlifted into a small unimproved air field next to the Special Forces camp at Dak Seang. The camp was under sporadic fire and probing ground attack by PAVN forces. This occurred when its Special Forces commander and a patrol failed to return and the camp received what appeared to be preparatory fire for a full scale ground attack by PAVN. The terrain was high mountains with triple canopy jungle. The importance of the Dak Seang camp was that it lay astride the Ho Chi Minh Trail, the main infiltration route of the PAVN into the South. About a kilometer from the camp, the Army advisers and the 8th Airborne came upon the bodies of the lost Special Forces patrol, all dead, including the camp commander. As the 8th Airborne moved up the mountain, the lead elements were taking small arms fire. Before long, it was obvious that the PAVN troops had filtered down on all sides. By noon of 4 August, the 8th Airborne with its advisers were in a fight that lasted several days. When the unit finally overwhelmed the PAVN forces because of superior firepower in air and artillery, it reached the top of the mountain and found a fully operational PAVN Headquarters, complete with hospital facilities and anti-aircraft emplacements. During the three - day battle, the 8th Airborne Battalion alone withstood six separate ground attacks and casualties among all the South Vietnamese units were heavy. By mid-August, contact with communist forces decreased, leading the Americans to conclude that the North Vietnamese had withdrawn across the border. The bulk of the ARVN Airborne units were then returned to their bases around Saigon for rest and refitting. On 23 August, General Deane turned over command of the 173rd to Brigadier General Leo H. Schweiter. On 17 September, two battalions of the 173rd departed the area to protect the rice harvest in Phu Yen Province. 
The 2nd of the 503rd remained at Đắk Tô along with the 3rd ARVN Airborne Battalion to carry out a sweep of the Toumarong Valley north of Đắk Tô and the suspected location of a PAVN regimental headquarters. After three weeks of fruitless searching, however, the operation was halted on 11 October. Operation Greeley was over. By early October, U.S. intelligence reported that the North Vietnamese were withdrawing regiments from the Pleiku area to join those in Kon Tum Province, thereby dramatically increasing the strength of local forces to that of a full division. In response, the 4th Infantry began moving the 3rd Battalion, 12th Infantry and the 3rd Battalion, 8th Infantry into Đắk Tô to launch Operation MacArthur. On 29 October, the 4 / 503 of the 173rd Airborne Brigade was returned to the area as a reinforcement. The battalion was moved west of Đắk Tô to the Ben Het CIDG Camp to protect the construction of Fire Support Base 12 on 2 November. On 3 November, Sergeant Vu Hong, an artillery specialist with the 6th PAVN Regiment, defected to the South Vietnamese and was able to provide U.S. forces with detailed information on the disposition of PAVN forces and their objectives, both at Đắk Tô and at Ben Het, 18 kilometers to the west. The North Vietnamese had fed approximately 6,000 troops into the area, most of which made up the 1st PAVN Division. The 66th PAVN Regiment was southwest of Đắk Tô preparing to launch the main attack while the 32nd PAVN Regiment was moved south to prevent any counterattacks against the 66th. The independent 24th PAVN Regiment held positions northeast of Đắk Tô to prevent reinforcement of the base from that direction. The 174th PAVN Regiment was northwest of Đắk Tô, acting as a reserve or an offensive force as the situation dictated. In addition, the 1st PAVN Division was supported by the 40th PAVN Artillery Regiment. The goal of these units was the taking of Đắk Tô and the destruction of a brigade - size American unit. 
The communist actions around Đắk Tô were part of an overall strategy devised by the Hanoi leadership, primarily that of General Nguyen Chi Thanh. The goal of operations in the area, according to a captured document from the B - 3 Front Command, was "to annihilate a major U.S. element in order to force the enemy to deploy as many additional troops to the Central Highlands as possible. '' As the Americans quickly discovered, the area had been well prepared by the North Vietnamese. The number and elaborateness of defensive preparations found by U.S. and ARVN troops indicated that some had been prepared as much as six months in advance. As General Peers noted: Nearly every key terrain feature was heavily fortified with elaborate bunker and trench complexes. He had moved quantities of supplies and ammunition into the area. He was prepared to stay. After contact with the PAVN forces on the 4th and 5th of the month, General Schweiter received orders to move the rest of his brigade back to Đắk Tô. The immediate goal of the paratroopers was first to establish a base of operations and bolster the defenses at Ben Het. They would then begin to search for the headquarters of the 66th PAVN Regiment, which U.S. intelligence believed to be in the valley stretching south of FSB 12. Simultaneously, most of the remaining elements of the 4th Infantry Division moved into the area around Đắk Tô. They were joined by two First Air Cavalry battalions (the 1 / 12 and 2 / 8th Cavalry) and ARVN forces consisting of the four battalions of the 42nd Regiment and the 2nd and 3rd Airborne Battalions. By this time, the village and airstrip had become a major logistical base, supporting an entire U.S. division and airborne brigade and six ARVN battalions. The stage was set for a major pitched battle. The first fighting of the new operation erupted on 3 November and 4 November when companies of the 4th Infantry came across PAVN defensive positions. 
The next day the same thing occurred to elements of the 173rd. The American and ARVN troops soon applied a methodical approach to combat in the highlands. They combed the hills on foot, ran into fixed PAVN hill - top defensive positions, applied massive firepower, and then launched ground attacks to force the North Vietnamese off. In all of these instances, PAVN troops fought stubbornly, inflicted casualties on the Americans, and then withdrew. To expand the coverage of supporting artillery fires, the 4th Battalion of the 173rd was ordered to occupy Hill 823, south of Ben Het, for the construction of Fire Support Base 15. Since the rest of the battalion 's companies were already deployed elsewhere, the 120 men of Bravo Company would combat assault onto the hilltop by helicopter alone. After several attempts to denude the hilltop with airstrikes and artillery fire, Bravo Company landed unopposed that afternoon, but the hill was not unoccupied. Fifteen minutes later, contact was made with the North Vietnamese. The battle that ensued raged at close quarters until early the following morning, when elements of the 66th PAVN Regiment withdrew, leaving behind more than 100 bodies. Nine Americans of Bravo Company, 4 / 503 also lay dead and another 28 were wounded. The following morning Bravo Company was relieved by Lieutenant Colonel David J. Schumacher 's 1 / 503, which (against the admonitions of Colonel Livsey) was divided into two small task forces: Task Force Black, consisting of Charlie Company supported by two platoons of Dog Company, and Task Force Blue, composed of Alpha Company and the remaining platoon of Dog. Task Force Black left Hill 823 to find the North Vietnamese who had attacked B / 4 / 503. At 08: 28 on 11 November, after leaving their overnight laager and following a PAVN communications wire, the force was ambushed by the 8th and 9th Battalions of the 66th PAVN Regiment and had to fight for its life. 
Task Force Blue and Charlie Company 4 / 503 drew the job of going to the relief of the beleaguered task force. They encountered fire from all sides during the relief attempt, but they made it, reaching the trapped men at 15: 37. U.S. losses were 20 killed, 154 wounded, and two missing. The commanding officer of Task Force Black, Captain Thomas McElwain, reported an enemy body count of 80 but was commanded by Schumacher (whose conduct of the action later came under severe criticism) to go out and count again. He then reported back that 175 PAVN soldiers had been killed. He later stated that "If you lost so many people killed and wounded, you had to have something to show for it. '' McElwain and Schumacher later clashed over McElwain 's recommendation for a decoration for Private First Class John Andrew Barnes, III, who had leapt on a grenade and sacrificed his life to save wounded comrades during the action. Schumacher refused to endorse the recommendation, stating that he did not think medals were for "men who committed suicide. '' Barnes was later awarded the Medal of Honor. The North Vietnamese simultaneously attacked the three companies of the 3 / 8th Infantry of the 4th Infantry Division on Hill 724. Beginning at 13: 07 and lasting for thirty minutes, a mortar barrage rained onto the battalion 's laager site. North Vietnamese troops then charged out of the jungle to the attack. By the time the action ended at 19: 03, 18 Americans were dead and another 118 were wounded. The 4th Infantry claimed that 92 North Vietnamese had died in the clash. On the night of 12 November, the North Vietnamese launched the first of many rocket attacks against the Đắk Tô airfield, firing 44 missiles. By 08: 00 on 15 November, three C - 130 Hercules transport aircraft were in the turnaround area as a PAVN mortar barrage landed. Two of them were destroyed. The resulting fires and additional incoming mortars set the ammunition dump and fuel storage areas ablaze. 
Explosions continued all day and into the night. During that night 's incoming shelling, a mortar round landed on two steel containers of C - 4 plastic explosive. They detonated simultaneously, sending a fireball and mushroom cloud high above the valley and leaving two craters 40 feet deep. This was said to be the largest explosion to occur in the Vietnam War, knocking men off their feet over a mile away. The explosion destroyed the entire 15th Light Equipment Company compound next to the ammunition dump although no one was killed. Engineer Lieutenant Fred Dyerson thought "it looked like Charlie had gotten hold of some nuclear weapons. '' Although more than 1,100 tons of ordnance were destroyed during the explosions and fires, this was as close as the North Vietnamese would get to taking Dak To. The rapid deployment of allied forces had upset the North Vietnamese offensive and had thrown them onto the defensive. Previous actions had battered the 66th and 33rd PAVN Regiments, and they began a southwesterly retreat, covered by the 174th Regiment. The Americans and the ARVN then began to run into tenacious rearguard actions. To prevent a repetition of the artillery attack against its base camp, the 4th Division 's 3rd Battalion, 12th Infantry Regiment was ordered to take Hill 1338, which had an excellent overview of Đắk Tô, only six kilometers away. For two days, the Americans fought their way up the steep slope of the hill and into the most elaborate bunker complex yet discovered, all of the fortifications of which were connected by field telephones. After scouring the area of the North Vietnamese who attacked Task Force Black, the three companies of 1 / 503 moved southwest to occupy Hill 882. The force was accompanied by approximately a dozen civilian news correspondents. On the morning of 15 November, the lead company crested the hill and discovered bunkers connected by telephone wire. 
They were then attacked, and the rest of the Americans rushed to the hilltop to take defensive positions. PAVN troops poured small arms, machine gun, and mortar fire on the Americans and launched several ground attacks. The U.S. commander requested helicopter evacuation for the most seriously wounded, but this request was denied by Colonel Schumacher, who demanded that the civilians be evacuated first. When the fighting ceased on 19 November the U.S. battalion had suffered seven killed and 34 wounded. The North Vietnamese 66th Regiment left behind 51 dead. While the action on Hill 882 was underway, tragedy struck Dog Company, 4 / 503. The unit was conducting road clearing operations around Ben Het while being accompanied by a CIDG Mike Force. While the company was calling in an artillery fire mission, an error caused two rounds to fall on its own position. Six Americans and three CIDG were killed outright and 15 paratroopers and 13 CIDG troops were wounded in the friendly fire incident. South Vietnamese units had also found plenty of action in the Đắk Tô area. On 18 November, on Hill 1416 northeast of Tan Canh, the ARVN 3 / 42nd Infantry found the PAVN 24th Infantry Regiment in well - fortified defensive positions. The elite, all - volunteer ARVN 3rd and 9th Airborne Battalions joined the action, attacking the hill from another direction. The ARVN forces took the hill on 20 November after vicious close - quarters fighting that claimed 66 South Vietnamese dead and another 290 wounded. The North Vietnamese left behind 248 of their own. U.S. intelligence indicated that the fresh 174th PAVN Regiment had slipped westward past Ben Het and had taken up positions on an 875 - meter - high hill just six kilometers from the border. The 174th had done so in order to cover the withdrawal of the 66th and 32nd Regiments, which were moving toward their sanctuaries across the Cambodian frontier. 
On 19 November, General Schweiter was informed that a Special Forces Mobile Strike Force company had run into heavy resistance while reconnoitering the area. He then ordered his 2nd Battalion to take the hill. At 09: 43 on 19 November, the three companies (330 men) of 2 / 503 moved into jumpoff positions from which to assault Hill 875. Charlie and Delta companies moved up the slope followed by two platoons of Alpha Company in the classic "two up one back '' formation utilized since World War I. The Weapons Platoon of Alpha remained behind at the bottom of the hill to cut out a landing zone. Instead of a frontal assault with massed troops, the unit would have been better served by advancing small teams to develop possible North Vietnamese positions and then calling in air and artillery support. At 10: 30, as the Americans moved to within 300 meters of the crest, PAVN machine gunners opened fire on the advancing paratroopers. Then B - 40 rockets and 57mm recoilless rifle fire were unleashed upon them. The paratroopers attempted to continue the advance, but the North Vietnamese, well concealed in interconnected bunkers and trenches, opened fire with small arms and grenades. The American advance was halted and the men went to ground, finding whatever cover they could. At 14: 30 PAVN troops hidden at the bottom of the hill launched a massed assault on Alpha Company. Unknown to the Americans, they had walked into a carefully prepared ambush by the 2nd Battalion of the 174th PAVN Regiment. The men of Alpha Company retreated up the slope, lest they be cut off from their comrades and annihilated. They were closely followed by the North Vietnamese. All that prevented the company - strength North Vietnamese onslaught from overrunning the entire battalion was the heroic efforts of American paratroopers who stood their ground and died to buy time for their comrades. Soon, U.S. 
air strikes and artillery fire were being called in, but they had little effect on the battle because of the dense foliage on the hillside. Resupply became a necessity because of high ammunition expenditures and lack of water, but it was also an impossibility. Six UH - 1 helicopters were shot down or badly damaged that afternoon trying to get to 2 / 503. At 18: 58 one of the worst friendly fire incidents of the Vietnam War occurred when a Marine Corps fighter - bomber, flown by the commanding officer (a lieutenant colonel) of a Marine Air Group from Chu Lai, dropped two 500 - pound bombs into 2 / 503 's perimeter. One of the bombs exploded in a tree burst above the center of the position, where the combined command groups, the wounded, and the medics were all located. It killed 42 men outright and wounded 45 more, including the overall on - scene commander, Captain Harold Kaufman. 1Lt. Bartholomew O'Leary, Delta Company Commander, was seriously wounded. (Alpha Company 's commander had been killed in the retreat up the slope). The next morning, the three companies of 4 / 503 were chosen to set out and relieve the men on Hill 875. Because of intense PAVN sniper and mortar fire (and the terrain) it took until nightfall for the relief force to reach the beleaguered battalion. On the afternoon of 21 November, both battalions moved out to take the crest. During fierce, close - quarters fighting, some of the paratroopers made it into the PAVN trenchline but were ordered to pull back as darkness fell. At approximately 23: 00, the 4th Division 's 1 / 12th Infantry was ordered to withdraw from offensive operations in the southern Central Highlands and redeploy to Đắk Tô. In an almost flawless night - time air redeployment, the entire battalion redeployed and took up positions around the main fire support base at Đắk Tô in less than 12 hours. The following day was spent in launching airstrikes and a heavy artillery bombardment against the hilltop, totally denuding it of cover.
On 23 November, the 2nd and 4th Battalions of the 503rd were ordered to renew their assault while the 1st Battalion of the 12th Infantry assaulted 875 from the south. This time the Americans gained the crest, but the North Vietnamese had already abandoned their positions, leaving only a few dozen charred bodies and weapons. The battle of Hill 875 had cost 2 / 503 87 killed, 130 wounded, and three missing. 4 / 503 suffered 28 killed, 123 wounded, and four missing. Combined with noncombatant losses, this represented one - fifth of the 173rd Airborne Brigade 's total strength. For its combined actions during operations around Đắk Tô, the 173rd Airborne Brigade was awarded the Presidential Unit Citation. By the end of November, the North Vietnamese withdrew back into their sanctuaries in Cambodia and Laos, having failed to wipe out a major American unit, yet having forced the U.S. Army to pay a high price: 376 U.S. troops had been killed or listed as missing - presumed dead, and another 1,441 were wounded in the fighting around Đắk Tô. The fighting had also taken a toll on South Vietnamese troops: 73 ARVN soldiers were killed in the fighting. U.S. munitions expenditures attested to the ferocity of the fighting: 151,000 artillery rounds, 2,096 tactical air sorties, and 257 B - 52 strikes. 2,101 Army helicopter sorties were flown, and 40 helicopters were lost. The U.S. Army claimed that 1,644 PAVN troops had been killed by body count, but this figure quickly became a source of contention. In his memoirs, General William C. Westmoreland, U.S. commander in Vietnam, mentioned 1,400 North Vietnamese casualties, while Major General William B. Rosson, the MACV deputy commander, estimated that the North Vietnamese lost between 1,000 and 1,400 men. Not all American commanders were happy with the friendly - to - enemy loss ratio. U.S. Marine Corps General John Chaisson questioned "Is it a victory when you lose 362 friendlies in three weeks and by your own spurious body count you only get 1,200?
'' Three of the four North Vietnamese regiments that participated in the fighting had been so battered that they played no part in the next phase of their winter - spring offensive. Only the 24th PAVN Regiment took the field during the Tet Offensive of January 1968. The 173rd Airborne Brigade and two battalions of the 4th Infantry Division were in no better shape. General Westmoreland claimed that "we had soundly defeated the enemy without unduly sacrificing operations in other areas. The enemy 's return was nil. '' But Westmoreland 's claim may have missed the point. The border battles fought that fall and winter had indeed cost the North Vietnamese dearly, but they had achieved their objective. By January 1968, one - half of all U.S. maneuver battalions in South Vietnam had been drawn away from the cities and lowlands and into the border areas. Several members of Westmoreland 's staff began to see an eerie resemblance to the Viet Minh campaign of 1953, when seemingly peripheral actions had led up to the climactic Battle of Dien Bien Phu. General Giap even laid claim to such a strategy in an announcement in September, but, to the Americans, it all seemed a bit too contrived. Yet, no understandable analysis seemed to explain Hanoi 's almost suicidal military actions. They could only be explained if a situation akin to Dien Bien Phu came into being. Then, almost overnight, one emerged. In the western corner of Quang Tri Province, an isolated Marine outpost at Khe Sanh came under siege by PAVN forces that would eventually number three divisions. Three members of the 173rd Airborne Brigade (Maj. Charles J. Watters, Pfc. John A. Barnes III and Pfc. Carlos Lozada) all posthumously received the Medal of Honor for their actions during the battle.
tracking by detection vs tracking by recursive bayesian filtering
Recursive Bayesian estimation - wikipedia Recursive Bayesian estimation, also known as a Bayes filter, is a general probabilistic approach for estimating an unknown probability density function recursively over time using incoming measurements and a mathematical process model. A Bayes filter is an algorithm used in computer science for calculating the probabilities of multiple beliefs to allow a robot to infer its position and orientation. Essentially, Bayes filters allow robots to continuously update their most likely position within a coordinate system, based on the most recently acquired sensor data. This is a recursive algorithm. It consists of two parts: prediction and innovation. If the variables are linear and normally distributed the Bayes filter becomes equal to the Kalman filter. In a simple example, a robot moving throughout a grid may have several different sensors that provide it with information about its surroundings. The robot may start out with certainty that it is at position (0, 0). However, as it moves farther and farther from its original position, the robot has continuously less certainty about its position; using a Bayes filter, a probability can be assigned to the robot 's belief about its current position, and that probability can be continuously updated from additional sensor information. The true state x is assumed to be an unobserved Markov process, and the measurements z are the observed states of a hidden Markov model (HMM). The following picture presents a Bayesian network of an HMM. Because of the Markov assumption, the probability of the current true state given the immediately previous one is conditionally independent of the other earlier states. Similarly, the measurement at the k - th timestep is dependent only upon the current state, so is conditionally independent of all other states given the current state.
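The grid - localization example above can be made concrete with a minimal sketch. Note this is an illustration, not part of the article: the five - cell circular corridor, the motion model, and the sensor likelihood values are all invented for demonstration.

```python
# Minimal discrete Bayes filter over a 1-D grid of cells.
# belief[i] = probability that the robot is in cell i.

def predict(belief, p_correct=0.8):
    """Motion step: the robot tries to move one cell right, but with
    probability 1 - p_correct it slips and stays put (process noise)."""
    n = len(belief)
    new = [0.0] * n
    for i, b in enumerate(belief):
        new[(i + 1) % n] += p_correct * b      # intended move succeeded
        new[i] += (1.0 - p_correct) * b        # slipped, stayed in place
    return new

def update(belief, likelihood):
    """Measurement step: multiply by p(z | x) cell-by-cell, then normalize."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Robot starts certain it is in cell 0 of a 5-cell circular corridor.
belief = [1.0, 0.0, 0.0, 0.0, 0.0]
belief = predict(belief)                  # uncertainty spreads over cells 0 and 1
sensor = [0.1, 0.6, 0.1, 0.1, 0.1]        # sensor reading strongly suggests cell 1
belief = update(belief, sensor)           # belief concentrates on cell 1
```

After one predict/update cycle the belief is no longer certain but is sharply peaked at cell 1, exactly the "continuously updated probability" behaviour described above.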
Using these assumptions, the joint probability distribution over all states of the HMM factorizes as p(x_0, ..., x_k, z_1, ..., z_k) = p(x_0) ∏_{i=1}^{k} p(z_i | x_i) p(x_i | x_{i-1}). However, when using the Kalman filter to estimate the state x, the probability distribution of interest is that of the current state conditioned on the measurements up to the current timestep. (This is achieved by marginalising out the previous states and dividing by the probability of the measurement set.) This leads to the predict and update steps of the Kalman filter written probabilistically. The predicted state distribution is the sum (integral), over all possible x_{k-1}, of the product of the transition distribution from the (k - 1) - th timestep to the k - th and the distribution of the previous state: p(x_k | z_{1:k-1}) = ∫ p(x_k | x_{k-1}) p(x_{k-1} | z_{1:k-1}) dx_{k-1}. The updated distribution is proportional to the product of the measurement likelihood and the predicted state: p(x_k | z_{1:k}) = α p(z_k | x_k) p(x_k | z_{1:k-1}). The denominator is constant with respect to x, so it can always be replaced by a normalizing coefficient α, which can usually be ignored in practice: the numerator can be computed and then simply normalized, since its integral must be unity. Sequential Bayesian filtering is the extension of Bayesian estimation to the case when the observed value changes in time. It is a method to estimate the real value of an observed variable that evolves in time. Depending on which state is estimated, the method is known as filtering (estimating the current value from past and current observations), smoothing (estimating a past value), or prediction (estimating a probable future value). The notion of sequential Bayesian filtering is extensively used in control and robotics.
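For the linear, normally distributed case mentioned earlier, the predict and update distributions stay Gaussian, and the recursion above reduces to the Kalman filter. The following one - dimensional sketch illustrates this; the random - walk state model and the values of the process - noise variance q, measurement - noise variance r, and the measurements are assumptions chosen for illustration, not taken from the article.

```python
# Scalar Kalman filter: the Bayes-filter recursion when the transition
# and measurement models are linear with Gaussian noise.

def kalman_step(mean, var, z, q=0.5, r=1.0):
    """One predict/update cycle for a random-walk state model.
    q: process-noise variance, r: measurement-noise variance."""
    # Predict: p(x_k | z_{1:k-1}) -- mean unchanged, variance grows by q.
    mean_pred, var_pred = mean, var + q
    # Update: multiply by the Gaussian likelihood p(z_k | x_k), normalize.
    k_gain = var_pred / (var_pred + r)            # Kalman gain
    mean_post = mean_pred + k_gain * (z - mean_pred)
    var_post = (1.0 - k_gain) * var_pred
    return mean_post, var_post

mean, var = 0.0, 10.0                 # vague Gaussian prior
for z in [1.2, 0.9, 1.1, 1.0]:        # noisy measurements of a constant state
    mean, var = kalman_step(mean, var, z)
```

Each step performs exactly the predict integral and the α - normalized update described above, in closed form: the posterior mean moves toward the measurements (here, toward 1.0) while the posterior variance shrinks.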
who wrote the score for fantastic beasts and where to find them
Fantastic Beasts and Where to Find Them (film) - wikipedia Fantastic Beasts and Where to Find Them is a 2016 fantasy film directed by David Yates. It is a prequel to the Harry Potter film series, and it was produced and written by J.K. Rowling in her screenwriting debut, and inspired by her 2001 book of the same name. The film stars Eddie Redmayne as Newt Scamander, with Katherine Waterston, Dan Fogler, Alison Sudol, Ezra Miller, Samantha Morton, Jon Voight, Carmen Ejogo, Ron Perlman and Colin Farrell in supporting roles. It is the first installment in the Fantastic Beasts series, and the ninth overall in J.K. Rowling 's Wizarding World, the franchise that began with the Harry Potter films. Fantastic Beasts and Where to Find Them premiered in New York City on 10 November 2016 and was released worldwide on 18 November 2016 in 3D, IMAX 4K Laser and other large format cinemas. The film received generally positive reviews from critics and grossed $814 million worldwide, making it the eighth highest - grossing film of 2016. The film was nominated for five BAFTAs, winning for Best Production Design, as well as two Academy Awards, winning Best Costume Design, becoming the first film in J.K. Rowling 's Wizarding World to win an Academy Award. In 1926, British wizard and "magizoologist '' Newt Scamander arrives by ship to New York en route to Arizona. He encounters Mary Lou Barebone, a No - Maj woman who leads the New Salem Philanthropic Society. While Newt listens to her speech about how witches and wizards are real and dangerous, a Niffler escapes from his magically expanded suitcase and slips into a nearby bank. Newt re-captures the Niffler, but in doing so accidentally reveals magic to No - Maj Jacob Kowalski and they accidentally swap suitcases. Newt is detained by demoted Auror Porpentina ' Tina ' Goldstein, for failing to obliviate Jacob. 
Tina takes Newt to the Magical Congress of the United States of America (MACUSA) headquarters, hoping his arrest will help her to regain her former position. However, Jacob 's suitcase contains only baked goods, and Newt is released. At Jacob 's apartment several creatures escape from Newt 's suitcase and Jacob is bitten by a Murtlap, causing him to have a reaction that includes profuse sweating. After Tina and Newt find Jacob and the suitcase, Tina takes them to her apartment and introduces them to her Legilimens sister Queenie. Jacob and Queenie are immediately mutually attracted, but American wizards are forbidden to marry or even socially interact with No - Majs. Newt takes Jacob inside his magically expanded suitcase, where Jacob encounters a contained Obscurus. Newt extracted it from a young girl who died, as those afflicted rarely live past the age of ten. Newt persuades Jacob to help search for the missing creatures. After re-capturing two of the three escaped beasts, they re-enter the suitcase, which Tina takes to MACUSA. Officials arrest them, believing that one of Newt 's beasts is responsible for killing Senator Henry Shaw Jr. They decide to destroy Newt 's suitcase and erase Jacob 's recent memories. Director of Magical Security Percival Graves accuses Newt of conspiring with the infamous dark wizard Gellert Grindelwald. Newt and Tina are sentenced to immediate death, but a bowtruckle named Pickett frees them while Queenie rescues Jacob, retrieves Newt 's suitcase, and escapes. Thanks to the help of Goblin gangster Gnarlack, Tina 's old informant, the foursome then find and re-capture the last of the creatures. Tina also reveals to Newt that she was demoted for openly attacking Mary Lou Barebone after seeing how badly Barebone was beating her adopted son Credence. Meanwhile, Percival Graves approaches Credence and offers to free him from his abusive mother. In exchange, he wants Credence to find an Obscurus. 
Graves believes it has caused the mysterious destructive incidents around the city. Credence finds a wand under his adopted sister Modesty 's bed. Mary Lou assumes it is Credence 's wand, but Modesty says it is hers. When she is about to be punished, the Obscurus arrives and kills everyone except Modesty and Credence. Graves arrives and dismisses Credence as being a Squib and refuses to teach him magic. Credence reveals he is the Obscurus 's host, having lived longer than any other host due to the intensity of his magic. In a fit of rage Credence unleashes the Obscurus upon the city. Newt finds Credence hiding in a subway tunnel near the City Hall station. Graves arrives and attacks Newt, wanting Credence to leave with him and not Newt. Tina arrives and attempts to calm the Obscurus while Graves tries to convince Credence to listen to him. As Credence begins to settle into human form, Aurors arrive and disintegrate Credence to protect the magical society. Graves, enraged at the killing of Credence, admits his contempt for the wizarding laws that force the magical community to hide from the No - Majs. Graves attacks the Aurors and is subdued by one of Newt 's beasts. Using a revelio spell, Newt reveals Graves is actually the renegade dark wizard Gellert Grindelwald in disguise. Grindelwald is arrested and taken to prison by MACUSA. President Picquery laments that the wizarding world is now likely irreversibly exposed due to the events of that evening, but Newt devises a way to obliviate the entire city. He releases his Thunderbird Frank, whom he uses to distribute a concentrated potion in the rainfall over the city that obliviates all of New York City. Picquery thanks Newt for his help, orders him to leave New York with his case, and orders that Jacob also be obliviated. Newt, Tina, and Queenie accompany Jacob outside where he plans to step into the rain and be obliviated. Queenie kisses Jacob goodbye as the rain erases his memories.
Newt departs for Europe, but promises to return and visit Tina when his book is finished; he also anonymously leaves Jacob a case of silver Occamy eggshells to fund his bakery. His breads and pastries are unconsciously inspired by Newt 's creatures, and Queenie visits him in his shop. Additionally, Johnny Depp portrays Gellert Grindelwald, one of the most dangerous dark wizards of all time, disguised as Percival Graves. Ronan Raftery portrays Langdon Shaw, the youngest of Henry Shaw Sr. 's sons, who begins to believe in magic, while Josh Cowdery portrays his brother Henry Shaw Jr., an arrogant and cruel U.S. senator who holds a rally picketed by the New Salem Philanthropic Society. Faith Wood - Blagrove portrays Modesty, a haunted young girl and the youngest of Mary Lou 's adopted children. Wood - Blagrove was chosen for the role following thousands of auditions in an open casting call. Jenn Murray portrays Chastity Barebone, the eldest of Mary Lou 's adopted children. Kevin Guthrie portrays Abernathy, Tina and Queenie 's MACUSA supervisor. Zoë Kravitz appears in a photo as Leta Lestrange, Newt 's former love. Fantastic Beasts and Where to Find Them is mentioned several times as a school textbook in the Harry Potter book series, although Scamander himself does not appear in any of the books. In 2001 Rowling published an edition of the "textbook '' to be sold to raise money for the British charity Comic Relief. The book is a directory of magical creatures written with an introduction by its author Newt Scamander; it does not contain a storyline narrative. (In literature, the creation of such a long work not part of a novel 's narrative storyline is known as a false document.) First announced in September 2013, the project marks Rowling 's debut as a screenwriter. The film sees the return of producer David Heyman, as well as writer Steve Kloves, both veterans of the Potter film franchise. After Alfonso Cuarón declined involvement, Warner Bros. 
announced that David Yates would direct at least the first instalment of a planned trilogy. James Newton Howard was contracted to compose the score. Principal photography commenced on 17 August 2015, at Warner Bros. Studios, Leavesden, and was completed in January 2016. Several scenes were also shot on location in London. After two months, the production moved to St George 's Hall in Liverpool, which was transformed into 1920s New York City. Framestore in London produced the visual effects for the film. On 9 April 2016, it was announced that James Newton Howard would write and compose the film 's score. On 24 October, Pottermore published an official first look at the film 's main theme composed by Howard. The main theme incorporated John Williams ' themes from earlier films, such as "Hedwig 's Theme ''. The soundtrack was released by WaterTower Music on 18 November 2016. Fantastic Beasts and Where to Find Them held its world premiere at Alice Tully Hall in New York City on 10 November 2016. The film was released worldwide on 18 November 2016, in 2D, 3D and the new IMAX 4K Laser system. It premiered one day earlier, on 17 November, in a number of other countries, including Argentina, Australia, Brazil, Germany and Italy. The film was released on a total of 1,028 IMAX screens worldwide (388 screens in the United States and Canada, 347 screens in China, 26 screens in Japan and 267 screens in other countries). This marked the second time -- after Doctor Strange -- that a film secured a release in over 1,000 IMAX screens worldwide. A "story pack '' based on Fantastic Beasts and Where to Find Them was released for the video game Lego Dimensions by WB Games and TT Games. The pack includes a constructible model of MACUSA, figures of Newt Scamander and a Niffler, and a six - level game campaign that adapts the film 's events. The pack was released on the same day as the film, alongside a "fun pack '' containing figures of Tina Goldstein and a Swooping Evil.
The cast of the film reprises their roles in the game. On 4 November 2015, Entertainment Weekly released the first official publicity shots of the film, containing pictures of characters Newt, Tina, and Queenie, and production and filming being held in various sets designed to mirror 1920s New York City. On 10 December 2015, it was announced that an "announcement trailer '' would be released five days later, on 15 December. Along with the one - minute trailer, a teaser poster was released. During "A Celebration of Harry Potter '' at Universal Orlando Resort in February 2016, a featurette was released showcasing several interviews with various cast and crew members, as well as the first official behind - the - scenes footage. On 10 April 2016, the first "teaser trailer '' was released during the MTV Movie Awards. On 10 August, more information and publicity shots for the film were released through Entertainment Weekly, with new information on Ezra Miller 's character Credence Barebone and the news that Zoe Kravitz would have a role in the series. New images released include the quartet running down a New York City alleyway, David Yates chatting to stars Katherine Waterston and Eddie Redmayne on the set in front of a blown out Subway station, Colin Farrell 's character Percival Graves interrogating an arrested and handcuffed Newt, and Graves and Credence putting up anti-magic propaganda. On 26 April 2016, it was announced that the film 's script would be released in the form of a book on 19 November. The book, titled Fantastic Beasts and Where to Find Them: The Original Screenplay, was written by Rowling herself. In an effort to avoid revealing plot details before the film 's release, the novelization of the film was released the following day of the film 's premiere, on 19 November 2016. On 7 March 2016 a trailer - preview was released about the History of Magic in North America as it is in the Harry Potter universe. 
On 7 October 2016, Rowling also released on Pottermore four pieces of writing exclusively as an introduction to the film Fantastic Beasts and Where to Find Them, titled History of Magic in North America. It includes information about scourers in North America, brutal and violent magical mercenaries who played a significant role in the historic Salem witch trials of the 1600s, as well as info about various American wand makers; the role magic played in World War I; Native American magic; the foundation of MACUSA; the harsh enforcement of No - Maj / Wizarding segregation; and life in 1920s Wizarding America, with info about wand permits and prohibition. On 28 June 2016, Rowling released a second part to her History of Magic in North America series, concerning the fictitious Ilvermorny School of Witchcraft and Wizardry, detailing the founding of the pre-eminent American Wizarding academy and allowing users to sort themselves into one of the four houses of the school. The school itself is mentioned in the film. Fantastic Beasts was released on Digital HD on 7 March 2017, and on 4K UHD, 3D Blu - ray, Blu - ray and DVD on 28 March 2017. Fantastic Beasts and Where to Find Them grossed $234 million in the United States and Canada and $580 million in other countries for a total of $814 million. The film was made on a budget of $180 million, with an additional $150 million spent on marketing. Worldwide, the film grossed $219.9 million during its opening weekend from around 64 markets in 24,200 screens, both the fifth biggest in Rowling 's wizarding cinematic universe, and the seventh - biggest of the month of November. IMAX totalled $15 million from 605 screens. Deadline.com calculated the net profit of the film to be $164 million, when factoring together all expenses and revenues for the film, making it the 9th most profitable release of 2016. Fantastic Beasts went on general release in the United Kingdom and Ireland on 18 November 2016.
It debuted with £ 15.33 million ($19.15 million) from 666 cinemas, the biggest debut of any film this year, ahead of the previous record holder, Batman v Superman: Dawn of Justice (£ 14.62 million). The film was surpassed during the last days of 2016 by Rogue One: A Star Wars Story which gained over one million pounds. In the United States and Canada, tracking had the film grossing $68 -- 85 million in its opening weekend, with some estimates going as high as $100 million. The film was released on 18 November in 4,143 cinemas, of which 388 were IMAX screens, and over 3,600 were showing the film in 3D. It grossed $29.7 million on its first day, the second - lowest opening day among Rowling 's adaptations (behind the $29.6 million Friday of Harry Potter and the Chamber of Secrets). This included $8.75 million it earned from Thursday night preview screenings beginning at 6 pm in 3,700 cinemas. In total, the film earned $74.4 million in its opening weekend, falling in line with projections and finishing first at the box office, but recorded the lowest opening among Rowling 's Harry Potter universe. It made $8 million from 388 IMAX screens, $9 million from 500 premium large format locations and $1.75 million from Cinemark XD. The film 's opening was considered a hit taking into account how the story was not based on a popular existing source, and the film itself was void of the franchise 's main character, Harry Potter. It was the top choice among moviegoers, representing 47 % of the weekend 's total $157.6 million tickets sales. On its second Friday, it had a gradual drop of 37 % ($18.5 million) from the week before, the second best Friday drop for any Harry Potter film, behind The Philosopher 's Stone. This was in part due to Black Friday, the most lucrative day of the Thanksgiving Day stretch. It ended up grossing $45.1 million in its second weekend (a drop of just 39.4 %), finishing 2nd at the box office behind newcomer Moana. 
Outside North America, the film debuted day - and - date in 63 countries, along with its North American release, where it was projected to gross $90 -- 125 million in its opening weekend. It opened 16 November 2016 in 9 countries, earning $6.9 million from 5,070 screens. It opened in 38 more countries on 18 November, earning $16.6 million for a total of $23.5 million in two days. In three days, it made $53.6 million. Through Sunday, 20 November, the film had a five - day opening weekend of $145.5 million from 63 countries, well above the initial projections. It earned another $132 million in its second weekend after a large debut in China and Japan. It recorded the biggest opening day of all - time among the Harry Potter franchise in Korea ($1.7 million), the UAE ($429,000) and Ukraine, the second biggest in Mexico ($1.8 million), Russia and the CIS ($1.7 million), Brazil ($1.3 million) and in Indonesia ($480,000), all behind Harry Potter and the Deathly Hallows -- Part 2, and the third biggest in the United Kingdom ($5.4 million), behind Part 1 and Part 2. It also scored the second biggest Warner Bros. opening of all - time in the Czech Republic and Slovakia. Notably, France opened with $1.8 million, Spain with $1.4 million, and Germany with $1 million ($2 million including paid previews). In terms of opening weekend, the film posted the biggest opening among the Harry Potter franchise in 16 markets, including South Korea ($14.2 million, also the third - biggest opening for the studio), Russia ($9.8 million) and Brazil ($6.4 million), the biggest opener of the year in Germany ($10.2 million), Sweden, Belgium and Switzerland, and the biggest Warner Bros. debut in those along with France ($10.2 million), Holland and Denmark. Italy debuted with $6.6 million, the biggest for a U.S. film in the country. Australia opened with $7.4 million, followed by Mexico ($5.8 million) and Spain ($4.5 million).
It opened in China on 25 November alongside Disney 's animated Moana but did n't face significant competition from it. It earned $11.2 million on its opening day from 11,600 screens, the best among Rowling 's cinematic universe. In total, it had an opening weekend of $41.1 million, dominating 60 % of the top five films with 70,000 screenings per day. This alone surpassed the entire lifetime total of all Harry Potter films save the last one. Similarly in Japan -- typically the biggest or second biggest market for the previous Harry Potter films -- it debuted with $15.5 million, besting the total lifetime of all the previous films except for Harry Potter and the Half - Blood Prince and Deathly Hallows -- Part 2. The film also set a number of IMAX records. In total, the opening weekend was worth $7 million from 276 screens, which is the second - highest ever in the Wizarding World, behind Deathly Hallows -- Part 2. In 33 territories, it opened at number one. Moreover, it 's also the third highest - grossing November international IMAX opening ever, and the No. 1 start for IMAX in November in 19 countries including Japan ($1.1 million), the UK, Russia, Germany, and the Netherlands. In China, it had the biggest IMAX opening among the franchise with $5.1 million from 347 IMAX screens. Overall, the film has earned a global cumulative total of $19.1 million from the format. It has become the highest - grossing film in Rowling 's cinematic universe in Russia ($16.7 million) and the second - highest in South Korea ($24.6 million). China ($41.1 million), the United Kingdom ($37.6 million), Germany ($18.4 million), France ($16.7 million), and Spain ($13.3 million) are the film 's biggest earning markets.
The site 's critical consensus reads, "Fantastic Beasts and Where to Find Them draws on Harry Potter 's rich mythology to deliver a spinoff that dazzles with franchise - building magic all its own. '' Metacritic, which assigns a normalised rating to reviews, gives the film a score of 66 out of 100, based on 50 critics, indicating mostly favourable reviews. Audiences surveyed by CinemaScore gave the film an average grade of "A '' on an A+ to F scale. Peter Bradshaw of The Guardian gave the film five out of five stars, hailing it as "a rich, baroque, intricately detailed entertainment '' and a "terrifically good - natured, unpretentious and irresistibly buoyant film ''. NME 's Larry Bartleet also gave it five stars, calling it "more enchanting to your inner kid than the Potter films ever were ''. IndieWire 's Eric Kohn gave the film a B+, saying that it "delivers the most satisfying period fantasy since Tim Burton 's Sweeney Todd: The Demon Barber of Fleet Street '', and that its layers of sophistication made it one of the best Hollywood blockbusters of the year. Mike Ryan of Uproxx gave the film a positive review, writing "Newt Scamander is nothing like Harry, but it has to be this way. It all has to be different. And it is, but, again, with just enough ' sameness ' to make us feel like we are at home again. I 'm looking forward to wherever these movies are taking us ''. John DeFore of The Hollywood Reporter wrote that the film is "likely to draw in just about everyone who followed the Potter series and to please most of them ''. New York Magazine 's David Edelstein deemed the film a "distinctly unmagical slog '', remarking that the beasts "are n't especially fantastic and the effects are too blandly corporate to be exhilarating ''. Initially, in October 2014, the studio announced the film would be the start of a trilogy. The sequel is set to be released on 16 November 2018, followed by the third on 20 November 2020.
In July 2016, David Yates confirmed that Rowling had written the screenplay for the second film and had ideas for the third. In October 2016, Rowling confirmed that the series would comprise five films. In November 2016, it was confirmed that Johnny Depp would have a starring role in the sequel, reprising his role as Gellert Grindelwald. In April 2017, it was confirmed that Jude Law had been cast as Albus Dumbledore, portrayed around the time he was the Transfiguration professor at Hogwarts. The second film will take place in the UK and Paris. Filming for the second film began in July 2017.
how far is montgomery alabama from mobile alabama
Alabama - wikipedia Alabama (/ ˌæləˈbæmə /) is a state in the southeastern region of the United States. It is bordered by Tennessee to the north, Georgia to the east, Florida and the Gulf of Mexico to the south, and Mississippi to the west. Alabama is the 30th largest by area and the 24th-most populous of the U.S. states. With a total of 1,500 miles (2,400 km) of inland waterways, Alabama has among the most inland waterways of any state. Alabama is nicknamed the Yellowhammer State, after the state bird. Alabama is also known as the "Heart of Dixie '' and the Cotton State. The state tree is the longleaf pine, and the state flower is the camellia. Alabama 's capital is Montgomery. The largest city by population is Birmingham, which has long been the most industrialized city; the largest city by land area is Huntsville. The oldest city is Mobile, founded by French colonists in 1702 as the capital of French Louisiana. From the American Civil War until World War II, Alabama, like many states in the southern U.S., suffered economic hardship, in part because of its continued dependence on agriculture. Like other southern states, Alabama legislators disfranchised African Americans and many poor whites at the turn of the century. Despite the growth of major industries and urban centers, white rural interests dominated the state legislature from 1901 to the 1960s; urban interests and African Americans were markedly under - represented. Following World War II, Alabama grew as the state 's economy changed from one primarily based on agriculture to one with diversified interests. The state economy in the 21st century is based on management, automotive, finance, manufacturing, aerospace, mineral extraction, healthcare, education, retail, and technology. 
The European - American naming of the Alabama River and state was derived from the Alabama people, a Muskogean - speaking tribe whose members lived just below the confluence of the Coosa and Tallapoosa rivers on the upper reaches of the river. In the Alabama language, the word for a person of Alabama lineage is Albaamo (or variously Albaama or Albàamo in different dialects; the plural form is Albaamaha). The word Alabama is believed to have come from the Alabama language; a suggestion that the name was borrowed from the Choctaw language is unlikely. The word 's spelling varies significantly among historical sources. The first usage appears in three accounts of the Hernando de Soto expedition of 1540: Garcilaso de la Vega used Alibamo, while the Knight of Elvas and Rodrigo Ranjel wrote Alibamu and Limamu, respectively, in transliterations of the term. As early as 1702, the French called the tribe the Alibamon, with French maps identifying the river as Rivière des Alibamons. Other spellings of the name have included Alibamu, Alabamo, Albama, Alebamon, Alibama, Alibamou, Alabamu, Allibamou. Sources disagree on the word 's meaning. Some scholars suggest the word comes from the Choctaw alba (meaning "plants '' or "weeds '') and amo (meaning "to cut '', "to trim '', or "to gather ''). The meaning may have been "clearers of the thicket '' or "herb gatherers '', referring to clearing land for cultivation or collecting medicinal plants. The state has numerous place names of Native American origin. However, there are no correspondingly similar words in the Alabama language. An 1842 article in the Jacksonville Republican proposed it meant "Here We Rest. '' This notion was popularized in the 1850s through the writings of Alexander Beaufort Meek. Experts in the Muskogean languages have not found any evidence to support such a translation. Indigenous peoples of varying cultures lived in the area for thousands of years before the advent of European colonization. 
Trade with the northeastern tribes by the Ohio River began during the Burial Mound Period (1000 BC -- AD 700) and continued until European contact. The agrarian Mississippian culture covered most of the state from 1000 to 1600 AD, with one of its major centers built at what is now the Moundville Archaeological Site in Moundville, Alabama. This is the second - largest complex of the classic Middle Mississippian era, after Cahokia in present - day Illinois, which was the center of the culture. Analysis of artifacts from archaeological excavations at Moundville was the basis of scholars ' formulation of the characteristics of the Southeastern Ceremonial Complex (SECC). Contrary to popular belief, the SECC appears to have no direct links to Mesoamerican culture, but developed independently. The Ceremonial Complex represents a major component of the religion of the Mississippian peoples; it is one of the primary means by which their religion is understood. Among the historical tribes of Native American people living in present - day Alabama at the time of European contact were the Cherokee, an Iroquoian language people; and the Muskogean - speaking Alabama (Alibamu), Chickasaw, Choctaw, Creek, and Koasati. While part of the same large language family, the Muskogee tribes developed distinct cultures and languages. With exploration in the 16th century, the Spanish were the first Europeans to reach Alabama. The expedition of Hernando de Soto passed through Mabila and other parts of the state in 1540. More than 160 years later, the French founded the region 's first European settlement at Old Mobile in 1702. The city was moved to the current site of Mobile in 1711. This area was claimed by the French from 1702 to 1763 as part of La Louisiane. After the French lost to the British in the Seven Years ' War, it became part of British West Florida from 1763 to 1783. 
After the United States victory in the American Revolutionary War, the territory was divided between the United States and Spain. The latter retained control of this western territory from 1783 until the surrender of the Spanish garrison at Mobile to U.S. forces on April 13, 1813. Thomas Bassett, a loyalist to the British monarchy during the Revolutionary era, was one of the earliest white settlers in the state outside Mobile. He settled in the Tombigbee District during the early 1770s. The district 's boundaries were roughly limited to the area within a few miles of the Tombigbee River and included portions of what is today southern Clarke County, northernmost Mobile County, and most of Washington County. What is now the counties of Baldwin and Mobile became part of Spanish West Florida in 1783, part of the independent Republic of West Florida in 1810, and was finally added to the Mississippi Territory in 1812. Most of what is now the northern two - thirds of Alabama was known as the Yazoo lands beginning during the British colonial period. It was claimed by the Province of Georgia from 1767 onwards. Following the Revolutionary War, it remained a part of Georgia, although heavily disputed. With the exception of the area around Mobile and the Yazoo lands, what is now the lower one - third of Alabama was made part of the Mississippi Territory when it was organized in 1798. The Yazoo lands were added to the territory in 1804, following the Yazoo land scandal. Spain kept a claim on its former Spanish West Florida territory in what would become the coastal counties until the Adams -- Onís Treaty officially ceded it to the United States in 1819. Before Mississippi 's admission to statehood on December 10, 1817, the more sparsely settled eastern half of the territory was separated and named the Alabama Territory. The United States Congress created the Alabama Territory on March 3, 1817. St. Stephens, now abandoned, served as the territorial capital from 1817 to 1819. 
Alabama was admitted as the 22nd state on December 14, 1819, with Congress selecting Huntsville as the site for the first Constitutional Convention. From July 5 to August 2, 1819, delegates met to prepare the new state constitution. Huntsville served as temporary capital from 1819 to 1820, when the seat of government moved to Cahaba in Dallas County. Cahaba, now a ghost town, was the first permanent state capital from 1820 to 1825. Alabama Fever was underway when the state was admitted to the Union, with settlers and land speculators pouring into the state to take advantage of fertile land suitable for cotton cultivation. Part of the frontier in the 1820s and 1830s, its constitution provided for universal suffrage for white men. Southeastern planters and traders from the Upper South brought slaves with them as the cotton plantations in Alabama expanded. The economy of the central Black Belt (named for its dark, productive soil) was built around large cotton plantations whose owners ' wealth grew mainly from slave labor. The area also drew many poor, disfranchised people who became subsistence farmers. Alabama had an estimated population of under 10,000 people in 1810, but it increased to more than 300,000 people by 1830. Most Native American tribes were completely removed from the state within a few years of the passage of the Indian Removal Act by Congress in 1830. From 1826 to 1846, Tuscaloosa served as Alabama 's capital. On January 30, 1846, the Alabama legislature announced it had voted to move the capital city from Tuscaloosa to Montgomery. The first legislative session in the new capital met in December 1847. A new capitol building was erected under the direction of Stephen Decatur Button of Philadelphia. The first structure burned down in 1849, but was rebuilt on the same site in 1851. This second capitol building in Montgomery remains to the present day. It was designed by Barachias Holt of Exeter, Maine. 
By 1860, the population had increased to 964,201 people, of which nearly half, 435,080, were enslaved African Americans, and 2,690 were free people of color. On January 11, 1861, Alabama declared its secession from the Union. After remaining an independent republic for a few days, it joined the Confederate States of America. The Confederacy 's capital was initially at Montgomery. Alabama was heavily involved in the American Civil War. Although comparatively few battles were fought in the state, Alabama contributed about 120,000 soldiers to the war effort. A company of cavalry soldiers from Huntsville, Alabama, joined Nathan Bedford Forrest 's battalion in Hopkinsville, Kentucky. The company wore new uniforms with yellow trim on the sleeves, collar and coat tails. This led to them being greeted with "Yellowhammer '', and the name later was applied to all Alabama troops in the Confederate Army. Alabama 's slaves were freed by the 13th Amendment in 1865. Alabama was under military rule from the end of the war in May 1865 until its official restoration to the Union in 1868. From 1867 to 1874, with most white citizens barred temporarily from voting and freedmen enfranchised, many African Americans emerged as political leaders in the state. Alabama was represented in Congress during this period by three African - American congressmen: Jeremiah Haralson, Benjamin S. Turner, and James T. Rapier. Following the war, the state remained chiefly agricultural, with an economy tied to cotton. During Reconstruction, state legislators ratified a new state constitution in 1868 that created the state 's first public school system and expanded women 's rights. Legislators funded numerous public road and railroad projects, although these were plagued with allegations of fraud and misappropriation. Organized insurgent, resistance groups tried to suppress the freedmen and Republicans. 
Besides the short - lived original Ku Klux Klan, these included the Pale Faces, Knights of the White Camellia, Red Shirts, and the White League. Reconstruction in Alabama ended in 1874, when the Democrats regained control of the legislature and governor 's office through an election dominated by fraud and violence. They wrote another constitution in 1875, and the legislature passed the Blaine Amendment, prohibiting public money from being used to finance religious - affiliated schools. The same year, legislation was approved that called for racially segregated schools. Railroad passenger cars were segregated in 1891. After disfranchising most African Americans and many poor whites in the 1901 constitution, the Alabama legislature passed more Jim Crow laws at the beginning of the 20th century to impose segregation in everyday life. The new 1901 Constitution of Alabama included provisions for voter registration that effectively disenfranchised large portions of the population, including nearly all African Americans and Native Americans, and tens of thousands of poor whites, through making voter registration difficult, requiring a poll tax and literacy test. The 1901 constitution required racial segregation of public schools. By 1903, only 2,980 African Americans were registered in Alabama, although at least 74,000 were literate. This compared to more than 181,000 African Americans eligible to vote in 1900. The numbers dropped even more in later decades. The state legislature passed additional racial segregation laws related to public facilities into the 1950s: jails were segregated in 1911; hospitals in 1915; toilets, hotels, and restaurants in 1928; and bus stop waiting rooms in 1945. While the planter class had persuaded poor whites to vote for this legislative effort to suppress black voting, the new restrictions resulted in their disenfranchisement as well, due mostly to the imposition of a cumulative poll tax. 
By 1941, whites constituted a slight majority of those disenfranchised by these laws: 600,000 whites vs. 520,000 African - Americans. Nearly all African Americans had lost the ability to vote. Despite numerous legal challenges that succeeded in overturning certain provisions, the state legislature would create new ones to maintain disenfranchisement. The exclusion of blacks from the political system persisted until after passage of federal civil rights legislation in 1965 to enforce their constitutional rights as citizens. The rural - dominated Alabama legislature consistently underfunded schools and services for the disenfranchised African Americans, but it did not relieve them of paying taxes. Partially as a response to chronic underfunding of education for African Americans in the South, the Rosenwald Fund began funding the construction of what came to be known as Rosenwald Schools. In Alabama these schools were designed and the construction partially financed with Rosenwald funds, which paid one - third of the construction costs. The fund required the local community and state to raise matching funds to pay the rest. Black residents effectively taxed themselves twice, by raising additional monies to supply matching funds for such schools, which were built in many rural areas. They often donated land and labor as well. Beginning in 1913, the first 80 Rosenwald Schools were built in Alabama for African - American children. A total of 387 schools, seven teachers ' houses, and several vocational buildings were completed by 1937 in the state. Several of the surviving school buildings in the state are now listed on the National Register of Historic Places. 
Continued racial discrimination and lynchings, agricultural depression, and the failure of the cotton crops due to boll weevil infestation led tens of thousands of African Americans from rural Alabama and other states to seek opportunities in northern and midwestern cities during the early decades of the 20th century as part of the Great Migration out of the South. Reflecting this emigration, the population growth rate in Alabama (see "historical populations '' table below) dropped by nearly half from 1910 to 1920. At the same time, many rural people, both white and African American, migrated to the city of Birmingham to work in new industrial jobs. Birmingham experienced such rapid growth that it was called the "Magic City ''. By the 1920s, Birmingham was the 19th - largest city in the United States and had more than 30 % of the state 's population. Heavy industry and mining were the basis of its economy. Its residents were under - represented for decades in the state legislature, which refused to redistrict after each decennial census according to population changes, as it was required by the state constitution. This did not change until the late 1960s following a lawsuit and court order. Beginning in the 1940s, when the courts started taking the first steps to recognize the voting rights of black voters, the Alabama legislature took several counter - steps designed to disfranchise black voters. The legislature passed, and the voters ratified (as these were mostly white voters), a state constitutional amendment that gave local registrars greater latitude to disqualify voter registration applicants. Black citizens in Mobile successfully challenged this amendment as a violation of the Fifteenth Amendment. The legislature also changed the boundaries of Tuskegee to a 28 - sided figure designed to fence out blacks from the city limits. The Supreme Court unanimously held that this racial "gerrymandering '' violated the Constitution. In 1961,... 
the Alabama legislature also intentionally diluted the effect of the black vote by instituting numbered place requirements for local elections. Industrial development related to the demands of World War II brought a level of prosperity to the state not seen since before the Civil War. Rural workers poured into the largest cities in the state for better jobs and a higher standard of living. One example of this massive influx of workers occurred in Mobile. Between 1940 and 1943, more than 89,000 people moved into the city to work for war - related industries. Cotton and other cash crops faded in importance as the state developed a manufacturing and service base. Despite massive population changes in the state from 1901 to 1961, the rural - dominated legislature refused to reapportion House and Senate seats based on population, as required by the state constitution to follow the results of decennial censuses. They held on to old representation to maintain political and economic power in agricultural areas. In addition, the state legislature gerrymandered the few Birmingham legislative seats to ensure election by persons living outside Birmingham. One result was that Jefferson County, containing Birmingham 's industrial and economic powerhouse, contributed more than one - third of all tax revenue to the state, but did not receive a proportional amount in services. Urban interests were consistently underrepresented in the legislature. A 1960 study noted that because of rural domination, "a minority of about 25 per cent of the total state population is in majority control of the Alabama legislature. '' A class action suit initiated on behalf of plaintiffs in Lowndes County, Alabama, challenged the state legislature 's lack of redistricting for congressional seats. In 1962, in White v. Crook, Judge Frank M. Johnson ordered the state to redistrict. United States Supreme Court cases of Baker v. Carr (1962) and Reynolds v. 
Sims (1964) ruled that the principle of "one man, one vote '' needed to be the basis of both houses of state legislatures as well, and that their districts had to be based on population, rather than geographic counties, as Alabama had used for its senate. In 1972, for the first time since 1901, the legislature completed the first congressional redistricting based on the decennial census. This benefited the urban areas that had developed, as well as all in the population who had been underrepresented for more than 60 years. Other changes were made to implement representative state house and senate districts. African Americans continued to press in the 1950s and 1960s to end disenfranchisement and segregation in the state through the Civil Rights Movement, including legal challenges. In 1954, the US Supreme Court ruled in Brown v. Board of Education that public schools had to be desegregated, but Alabama was slow to comply. During the 1960s, under Governor George Wallace, Alabama resisted compliance with federal demands for desegregation. The Civil Rights Movement had notable events in Alabama, including the Montgomery Bus Boycott (1955 -- 56), Freedom Rides in 1961, and 1965 Selma to Montgomery marches. These contributed to Congressional passage and enactment of the Civil Rights Act of 1964 and Voting Rights Act of 1965 by the U.S. Congress. Legal segregation ended in the states in 1964, but Jim Crow customs often continued until specifically challenged in court. Despite recommendations of a 1973 Alabama Constitutional Commission, the state legislature did not approve an amendment to establish home rule for counties. There is very limited home rule, but the legislature is deeply involved in passing legislation that applies to county - level functions and policies. This both deprives local residents of the ability to govern themselves and distracts the legislature from statewide issues. 
Alabama has made some changes since the late 20th century and has used new types of voting to increase representation. In the 1980s, an omnibus redistricting case, Dillard v. Crenshaw County, challenged the at - large voting for representative seats of 180 Alabama jurisdictions, including counties and school boards. At - large voting had diluted the votes of any minority in a county, as the majority tended to take all seats. Despite African Americans making up a significant minority in the state, they had been unable to elect any representatives in most of the at - large jurisdictions. As part of settlement of this case, five Alabama cities and counties, including Chilton County, adopted a system of cumulative voting for election of representatives in multi-seat jurisdictions. This has resulted in more proportional representation for voters. In another form of proportional representation, 23 jurisdictions use limited voting, as in Conecuh County. In 1982, limited voting was first tested in Conecuh County. Together use of these systems has increased the number of African Americans and women elected to local offices, resulting in governments that are more representative of their citizens. Alabama is the thirtieth - largest state in the United States with 52,419 square miles (135,760 km²) of total area: 3.2 % of the area is water, making Alabama 23rd in the amount of surface water, also giving it the second - largest inland waterway system in the United States. About three - fifths of the land area is a gentle plain with a general descent towards the Mississippi River and the Gulf of Mexico. The North Alabama region is mostly mountainous, with the Tennessee River cutting a large valley and creating numerous creeks, streams, rivers, mountains, and lakes. Alabama is bordered by the states of Tennessee to the north, Georgia to the east, Florida to the south, and Mississippi to the west. 
Alabama has coastline at the Gulf of Mexico, in the extreme southern edge of the state. The state ranges in elevation from sea level at Mobile Bay to over 1,800 feet (550 m) in the Appalachian Mountains in the northeast. The highest point is Mount Cheaha, at a height of 2,413 ft (735 m). Alabama 's land consists of 22 million acres (89,000 km²) of forest or 67 % of total land area. Suburban Baldwin County, along the Gulf Coast, is the largest county in the state in both land area and water area. Areas in Alabama administered by the National Park Service include Horseshoe Bend National Military Park near Alexander City; Little River Canyon National Preserve near Fort Payne; Russell Cave National Monument in Bridgeport; Tuskegee Airmen National Historic Site in Tuskegee; and Tuskegee Institute National Historic Site near Tuskegee. Additionally, Alabama has four National Forests: Conecuh, Talladega, Tuskegee, and William B. Bankhead. Alabama also contains the Natchez Trace Parkway, the Selma To Montgomery National Historic Trail, and the Trail Of Tears National Historic Trail. A notable natural wonder in Alabama is "Natural Bridge '' rock, the longest natural bridge east of the Rockies, located just south of Haleyville. A 5 - mile (8 km) - wide meteorite impact crater is located in Elmore County, just north of Montgomery. This is the Wetumpka crater, the site of "Alabama 's greatest natural disaster. '' A 1,000 - foot (300 m) - wide meteorite hit the area about 80 million years ago. The hills just east of downtown Wetumpka showcase the eroded remains of the impact crater that was blasted into the bedrock, with the area labeled the Wetumpka crater or astrobleme ("star - wound '') because of the concentric rings of fractures and zones of shattered rock that can be found beneath the surface. In 2002, Christian Koeberl of the Institute of Geochemistry at the University of Vienna published evidence and established the site as the 157th recognized impact crater on Earth. 
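The figures above pair U.S. customary units with metric equivalents in square kilometres. A quick sketch of the conversions, using standard factors (the factors are assumed, not taken from the text):

```python
# Standard area-conversion factors (assumed, not from the article).
SQMI_TO_KM2 = 2.589988    # square miles -> square kilometres
ACRE_TO_KM2 = 0.00404686  # acres -> square kilometres

total_area_sqmi = 52_419   # Alabama's total area, as quoted above
forest_acres = 22_000_000  # forested land, as quoted above

total_area_km2 = total_area_sqmi * SQMI_TO_KM2
forest_km2 = forest_acres * ACRE_TO_KM2

# Both agree with the quoted metric values to rounding:
print(round(total_area_km2))  # ~135,765 vs. the quoted ~135,760 km²
print(round(forest_km2))      # ~89,031 vs. the quoted ~89,000 km²
```

The small discrepancies simply reflect the article rounding its metric equivalents.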
The state is classified as humid subtropical (Cfa) under the Köppen climate classification. The average annual temperature is 64 ° F (18 ° C). Temperatures tend to be warmer in the southern part of the state with its proximity to the Gulf of Mexico, while the northern parts of the state, especially in the Appalachian Mountains in the northeast, tend to be slightly cooler. Generally, Alabama has very hot summers and mild winters with copious precipitation throughout the year. Alabama receives an average of 56 inches (1,400 mm) of rainfall annually and enjoys a lengthy growing season of up to 300 days in the southern part of the state. Summers in Alabama are among the hottest in the U.S., with high temperatures averaging over 90 ° F (32 ° C) throughout the summer in some parts of the state. Alabama is also prone to tropical storms and even hurricanes. Areas of the state far away from the Gulf are not immune to the effects of the storms, which often dump tremendous amounts of rain as they move inland and weaken. South Alabama reports many thunderstorms. The Gulf Coast, around Mobile Bay, averages between 70 and 80 days per year with thunder reported. This activity decreases somewhat further north in the state, but even the far north of the state reports thunder on about 60 days per year. Occasionally, thunderstorms are severe with frequent lightning and large hail; the central and northern parts of the state are most vulnerable to this type of storm. Alabama ranks ninth in the number of deaths from lightning and tenth in deaths from lightning per capita. Alabama, along with Oklahoma, has the most reported EF5 tornadoes of any state, according to statistics from the National Climatic Data Center for the period January 1, 1950, to June 2013. Several long - tracked F5 / EF5 tornadoes have contributed to Alabama reporting more tornado fatalities than any other state. 
The state was affected by the 1974 Super Outbreak and was devastated tremendously by the 2011 Super Outbreak. The 2011 Super Outbreak produced a record number of tornadoes in the state; the tally reached 62. The peak season for tornadoes varies from the northern to southern parts of the state. Alabama is one of the few places in the world that has a secondary tornado season in November and December, along with the spring severe weather season. The northern part of the state -- along the Tennessee Valley -- is one of the areas in the U.S. most vulnerable to violent tornadoes. The area of Alabama and Mississippi most affected by tornadoes is sometimes referred to as Dixie Alley, as distinct from the Tornado Alley of the Southern Plains. Winters are generally mild in Alabama, as they are throughout most of the Southeastern United States, with average January low temperatures around 40 ° F (4 ° C) in Mobile and around 32 ° F (0 ° C) in Birmingham. Although snow is a rare event in much of Alabama, areas of the state north of Montgomery may receive a dusting of snow a few times every winter, with an occasional moderately heavy snowfall every few years. Historic snowfall events include the New Year 's Eve 1963 snowstorm and the 1993 Storm of the Century. The annual average snowfall for the Birmingham area is 2 inches (51 mm) per year. On the southern Gulf coast, snowfall is less frequent, sometimes going several years without any snowfall. Alabama 's highest temperature of 112 ° F (44 ° C) was recorded on September 5, 1925, in the unincorporated community of Centerville. The record low of − 27 ° F (− 33 ° C) occurred on January 30, 1966, in New Market. 
Alabama is home to a diverse array of flora and fauna, due largely to a variety of habitats that range from the Tennessee Valley, Appalachian Plateau, and Ridge - and - Valley Appalachians of the north to the Piedmont, Canebrake and Black Belt of the central region to the Gulf Coastal Plain and beaches along the Gulf of Mexico in the south. The state is usually ranked among the top in the nation for its range of overall biodiversity. Alabama is in the subtropical coniferous forest biome and once boasted huge expanses of pine forest, which still form the largest proportion of forests in the state. It currently ranks fifth in the nation for the diversity of its flora. It is home to nearly 4,000 pteridophyte and spermatophyte plant species. Indigenous animal species in the state include 62 mammal species, 93 reptile species, 73 amphibian species, roughly 307 native freshwater fish species, and 420 bird species that spend at least part of their year within the state. Invertebrates include 97 crayfish species and 383 mollusk species. 113 of these mollusk species have never been collected outside the state. The United States Census Bureau estimates that the population of Alabama was 4,858,979 on July 1, 2015, which represents an increase of 79,243, or 1.66 %, since the 2010 Census. This includes a natural increase since the last census of 121,054 people (that is 502,457 births minus 381,403 deaths) and an increase due to net migration of 104,991 people into the state. Immigration from outside the U.S. resulted in a net increase of 31,180 people, and migration within the country produced a net gain of 73,811 people. The state had 108,000 foreign - born residents (2.4 % of the state population), of which an estimated 22.2 % were undocumented (24,000). The center of population of Alabama is located in Chilton County, outside the town of Jemison. According to the 2010 Census, Alabama had a population of 4,779,736. 
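The component figures in the census estimate above can be cross-checked with simple arithmetic; a minimal sketch, using only the numbers quoted in the text:

```python
# Vital-statistics components quoted in the text.
births, deaths = 502_457, 381_403
natural_increase = births - deaths
assert natural_increase == 121_054  # matches the stated natural increase

# Net-migration components quoted in the text.
international, domestic = 31_180, 73_811
net_migration = international + domestic
assert net_migration == 104_991     # matches the stated net migration

# Overall change between the 2010 Census and the July 2015 estimate.
pop_2010, pop_2015 = 4_779_736, 4_858_979
growth = pop_2015 - pop_2010
assert growth == 79_243             # matches the stated increase
print(f"{growth / pop_2010:.2%}")   # prints 1.66%, as stated
```

Note that the natural-increase and migration components need not sum exactly to the net five-year change; Census Bureau component estimates carry a residual, so only the internal consistency of each component is checked here.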
The racial composition of the state was 68.5 % White (67.0 % Non-Hispanic White and 1.5 % Hispanic White), 26.2 % Black or African American, 3.9 % Hispanic or Latino of any race, 1.1 % Asian, 0.6 % American Indian and Alaska Native, 0.1 % Native Hawaiian and Other Pacific Islander, 2.0 % from Some Other Race, and 1.5 % from Two or More Races. In 2011, 46.6 % of Alabama 's population younger than age 1 were minorities. The largest reported ancestry groups in Alabama are: African (26.2 %), English (23.6 %), Irish (7.7 %), German (5.7 %), and Scots - Irish (2.0 %). Those citing "American '' ancestry in Alabama are generally of English or British ancestry; many Anglo - Americans identify as having American ancestry because their roots have been in North America for so long, in some cases since the 1600s. Demographers estimate that a minimum of 20 -- 23 % of people in Alabama are of predominantly English ancestry and that the figure is likely higher. In the 1980 census, 41 % of the people in Alabama identified as being of English ancestry, making them the largest ethnic group at the time. Based on historic migration and settlement patterns in the southern colonies and states, demographers estimated there are more people in Alabama of Scots - Irish origins than self - reported. Many people in Alabama claim Irish ancestry because of the term Scots - Irish but, based on historic immigration and settlement, their ancestors were more likely Protestant Scots - Irish coming from northern Ireland, where they had been for a few generations as part of the English colonization. The Scots - Irish were the largest non-English immigrant group from the British Isles before the American Revolution, and many settled in the South, later moving into the Deep South as it was developed. In 1984, under the Davis -- Strong Act, the state legislature established the Alabama Indian Affairs Commission. 
Native American groups within the state had increasingly been demanding recognition as ethnic groups and seeking an end to discrimination. Given the long history of slavery and associated racial segregation, the Native American peoples, who have sometimes been of mixed race, have insisted on having their cultural identification respected. In the past, their self - identification was often overlooked as the state tried to impose a binary breakdown of society into white and black. The state has officially recognized nine American Indian tribes, descended mostly from the Five Civilized Tribes of the American Southeast. The state government has promoted recognition of Native American contributions to the state, including the designation in 2000 for Columbus Day to be jointly celebrated as American Indian Heritage Day.

95.1 % of all Alabama residents five years old or older spoke only English at home in 2010, a minor decrease from 96.1 % in 2000. Alabama English is predominantly Southern, and is related to South Midland speech which was taken across the border from Tennessee. In the major Southern speech region, there is the decreasing loss of the final / r /, for example the / boyd / pronunciation of ' bird. ' In the northern third of the state, there is a South Midland pronunciation in which ' arm ' and ' barb ' rhyme with ' form ' and ' orb. ' Unique words in Alabama English include: redworm (earthworm), peckerwood (woodpecker), snake doctor and snake feeder (dragonfly), tow sack (burlap bag), plum peach (clingstone), French harp (harmonica), and dog irons (andirons).

In the 2008 American Religious Identification Survey, 86 % of Alabama respondents reported their religion as Christian, including 6 % Catholic, and 11 % as having no religion. The composition of other traditions is 0.5 % Mormon, 0.5 % Jewish, 0.5 % Muslim, 0.5 % Buddhist, and 0.5 % Hindu. Alabama is located in the middle of the Bible Belt, a region of numerous Protestant Christians.
Alabama has been identified as one of the most religious states in the United States, with about 58 % of the population attending church regularly. A majority of people in the state identify as Evangelical Protestant. As of 2010, the three largest denominational groups in Alabama are the Southern Baptist Convention, The United Methodist Church, and non-denominational Evangelical Protestant. In Alabama, the Southern Baptist Convention has the highest number of adherents with 1,380,121; this is followed by the United Methodist Church with 327,734 adherents, non-denominational Evangelical Protestant with 220,938 adherents, and the Catholic Church with 150,647 adherents. Many Baptist and Methodist congregations became established in the Great Awakening of the early 19th century, when preachers proselytized across the South. The Assemblies of God had almost 60,000 members, and the Churches of Christ had nearly 120,000 members. The Presbyterian churches, strongly associated with Scots - Irish immigrants of the 18th century and their descendants, had a combined membership around 75,000 (PCA -- 28,009 members in 108 congregations, PC (USA) -- 26,247 members in 147 congregations, the Cumberland Presbyterian Church -- 6,000 members in 59 congregations, the Cumberland Presbyterian Church in America -- 5,000 members and 50 congregations, plus the EPC and Associate Reformed Presbyterians with 230 members and 9 congregations). In a 2007 survey, nearly 70 % of respondents could name all four of the Christian Gospels. Of those who indicated a religious preference, 59 % said they possessed a "full understanding '' of their faith and needed no further learning. In a 2007 poll, 92 % of Alabamians reported having at least some confidence in churches in the state. Although in much smaller numbers, many other religious faiths are represented in the state as well, including Judaism, Islam, Hinduism, Buddhism, Sikhism, the Bahá'í Faith, and Unitarian Universalism.
Jews have been present in what is now Alabama since 1763, during the colonial era of Mobile, when Sephardic Jews immigrated from London. The oldest Jewish congregation in the state is Congregation Sha'arai Shomayim in Mobile. It was formally recognized by the state legislature on January 25, 1844. Later immigrants in the nineteenth and twentieth centuries tended to be Ashkenazi Jews from eastern Europe. Jewish denominations in the state include two Orthodox, four Conservative, ten Reform, and one Humanistic synagogue. The Muslim population has been growing in Alabama, with 31 mosques built by 2011, many by African - American converts. Several Hindu temples and cultural centers in the state have been founded by Indian immigrants and their descendants, the best - known being the Shri Swaminarayan Mandir in Birmingham, the Hindu Temple and Cultural Center of Birmingham in Pelham, the Hindu Cultural Center of North Alabama in Capshaw, and the Hindu Mandir and Cultural Center in Tuscaloosa. There are six Dharma centers and organizations for Theravada Buddhists. Most monastic Buddhist temples are concentrated in southern Mobile County, near Bayou La Batre. This area attracted an influx of refugees from Cambodia, Laos, and Vietnam during the 1970s and thereafter. Among the four temples within a ten - mile radius of Bayou La Batre are Chua Chanh Giac, Wat Buddharaksa, and Wat Lao Phoutthavihan. The first community of adherents of the Baha'i Faith in Alabama was founded in 1896 by Paul K. Dealy, who moved from Chicago to Fairhope. Baha'i Centers in Alabama exist in Birmingham, Huntsville, and Florence.

A Centers for Disease Control and Prevention study in 2008 showed that obesity in Alabama was a problem, with most counties having over 29 % of adults obese, except for ten which had a rate between 26 % and 29 %. Residents of the state, along with those in five other states, were least likely in the nation to be physically active during leisure time.
Alabama, and the southeastern U.S. in general, has one of the highest incidences of adult onset diabetes in the country, exceeding 10 % of adults.

The state has invested in aerospace, education, health care, banking, and various heavy industries, including automobile manufacturing, mineral extraction, steel production and fabrication. By 2006, crop and animal production in Alabama was valued at $1.5 billion. In contrast to the primarily agricultural economy of the previous century, this was only about 1 % of the state 's gross domestic product. The number of private farms has declined at a steady rate since the 1960s, as land has been sold to developers, timber companies, and large farming conglomerates. Non-agricultural employment in 2008 was 121,800 in management occupations; 71,750 in business and financial operations; 36,790 in computer - related and mathematical occupations; 44,200 in architecture and engineering; 12,410 in life, physical, and social sciences; 32,260 in community and social services; 12,770 in legal occupations; 116,250 in education, training, and library services; 27,840 in art, design and media occupations; 121,110 in healthcare; 44,750 in fire fighting, law enforcement, and security; 154,040 in food preparation and serving; 76,650 in building and grounds cleaning and maintenance; 53,230 in personal care and services; 244,510 in sales; 338,760 in office and administration support; 20,510 in farming, fishing, and forestry; 120,155 in construction and mining, gas, and oil extraction; 106,280 in installation, maintenance, and repair; 224,110 in production; and 167,160 in transportation and material moving. According to the U.S. Bureau of Economic Analysis, the 2008 total gross state product was $170 billion, or $29,411 per capita. Alabama 's 2012 GDP increased 1.2 % from the previous year. The single largest increase came in the area of information. In 2010, per capita income for the state was $22,984.
The state 's seasonally adjusted unemployment rate was 5.8 % in April 2015. This compared to a nationwide seasonally adjusted rate of 5.4 %. Alabama has no state minimum wage and uses the federal minimum wage of $7.25. In February 2016, the state passed legislation that prevents Alabama municipalities from raising the minimum wage in their locality. The legislation voids a Birmingham city ordinance that was to raise the city 's minimum wage to $10.10.

Alabama 's agricultural outputs include poultry and eggs, cattle, fish, plant nursery items, peanuts, cotton, grains such as corn and sorghum, vegetables, milk, soybeans, and peaches. Although known as "The Cotton State '', Alabama ranks between eighth and tenth in national cotton production, according to various reports, with Texas, Georgia and Mississippi comprising the top three. Alabama 's industrial outputs include iron and steel products (including cast - iron and steel pipe); paper, lumber, and wood products; mining (mostly coal); plastic products; cars and trucks; and apparel. In addition, Alabama produces aerospace and electronic products, mostly in the Huntsville area, the location of NASA 's George C. Marshall Space Flight Center and the U.S. Army Materiel Command, headquartered at Redstone Arsenal. A great deal of Alabama 's economic growth since the 1990s has been due to the state 's expanding automotive manufacturing industry. Located in the state are Honda Manufacturing of Alabama, Hyundai Motor Manufacturing Alabama, Mercedes - Benz U.S. International, and Toyota Motor Manufacturing Alabama, as well as their various suppliers. Since 1993, the automobile industry has generated more than 67,800 new jobs in the state. Alabama currently ranks 4th in the nation for vehicle exports.
Automakers accounted for approximately a third of the industrial expansion in the state in 2012. The eight models produced at the state 's auto factories totaled combined sales of 74,335 vehicles for 2012. The strongest model sales during this period were the Hyundai Elantra compact car, the Mercedes - Benz GL - Class sport utility vehicle and the Honda Ridgeline sport utility truck.

Steel producers Outokumpu, Nucor, SSAB, ThyssenKrupp, and U.S. Steel have facilities in Alabama and employ over 10,000 people. In May 2007, German steelmaker ThyssenKrupp selected Calvert in Mobile County for a $4.65 billion combined stainless and carbon steel processing facility. ThyssenKrupp 's stainless steel division, Inoxum, including the stainless portion of the Calvert plant, was sold to Finnish stainless steel company Outokumpu in 2012. The remaining portion of the ThyssenKrupp plant had final bids submitted by ArcelorMittal and Nippon Steel for $1.6 billion in March 2013. Companhia Siderúrgica Nacional submitted a combined bid for the mill at Calvert, plus a majority stake in the ThyssenKrupp mill in Brazil, for $3.8 billion. In July 2013, the plant was sold to ArcelorMittal and Nippon Steel. The Hunt Refining Company, a subsidiary of Hunt Consolidated, Inc., is based in Tuscaloosa and operates a refinery there. The company also operates terminals in Mobile, Melvin, and Moundville. JVC America, Inc. operates an optical disc replication and packaging plant in Tuscaloosa. The Goodyear Tire and Rubber Company operates a large plant in Gadsden that employs about 1,400 people. It has been in operation since 1929.

Construction of an Airbus A320 family aircraft assembly plant in Mobile was formally announced by Airbus CEO Fabrice Brégier from the Mobile Convention Center on July 2, 2012. The plans include a $600 million factory at the Brookley Aeroplex for the assembly of the A319, A320 and A321 aircraft.
Construction began in 2013, with plans for it to become operable by 2015 and produce up to 50 aircraft per year by 2017. The assembly plant is the company 's first factory to be built within the United States. It was announced on February 1, 2013, that Airbus had hired Alabama - based Hoar Construction to oversee construction of the facility.

An estimated 20 million tourists visit the state each year. Over 100,000 of these are from other countries, including from Canada, the United Kingdom, Germany and Japan. In 2006, 22.3 million tourists spent $8.3 billion, providing an estimated 162,000 jobs in the state. Some of the most popular areas include the Rocket City of Huntsville, the beaches along the Gulf, and the state capital, Montgomery. UAB Hospital is the only Level I trauma center in Alabama. UAB is the largest state government employer in Alabama, with a workforce of about 18,000.

Alabama has the headquarters of Regions Financial Corporation, BBVA Compass, Superior Bancorp and the former Colonial Bancgroup. Birmingham - based Compass Bancshares was acquired by Spanish - based BBVA in September 2007, although the headquarters of BBVA Compass remains in Birmingham. In November 2006, Regions Financial completed its merger with AmSouth Bancorporation, which was also headquartered in Birmingham. SouthTrust Corporation, another large bank headquartered in Birmingham, was acquired by Wachovia in 2004 for $14.3 billion. The city still has major operations for Wells Fargo, which acquired Wachovia in 2008; these include a regional headquarters, an operations center campus and a $400 million data center. Nearly a dozen smaller banks are also headquartered in Birmingham, such as Superior Bancorp, ServisFirst and New South Federal Savings Bank. Birmingham also serves as the headquarters for several large investment management companies, including Harbert Management Corporation.
Telecommunications provider AT&T, formerly BellSouth, has a major presence in Alabama with several large offices in Birmingham. The company has over 6,000 employees and more than 1,200 contract employees. Many commercial technology companies are headquartered in Huntsville, such as the network access company ADTRAN, computer graphics company Intergraph, design and manufacturer of IT infrastructure Avocent, and telecommunications provider Deltacom. Cinram manufactures and distributes 20th Century Fox DVDs and Blu - ray Discs out of their Huntsville plant. Rust International has grown to include Brasfield & Gorrie, BE&K, Hoar Construction and B.L. Harbert International, all of which are routinely included in the Engineering News - Record lists of top design, international construction, and engineering firms. (Rust International was acquired in 2000 by Washington Group International, which was in turn acquired by San Francisco - based URS Corporation in 2007.)

The foundational document for Alabama 's government is the Alabama Constitution, which was ratified in 1901. At almost 800 amendments and 310,000 words, it is by some accounts the world 's longest constitution and is roughly forty times the length of the United States Constitution. There has been a significant movement to rewrite and modernize Alabama 's constitution. Critics argue that Alabama 's constitution maintains highly centralized power with the state legislature, leaving practically no power in local hands. Most counties do not have home rule. Any policy changes proposed in different areas of the state must be approved by the entire Alabama legislature and, frequently, by state referendum. One criticism of the current constitution claims that its complexity and length intentionally codify segregation and racism. Alabama 's government is divided into three coequal branches.
The legislative branch is the Alabama Legislature, a bicameral assembly composed of the Alabama House of Representatives, with 105 members, and the Alabama Senate, with 35 members. The Legislature is responsible for writing, debating, passing, or defeating state legislation. The Republican Party currently holds a majority in both houses of the Legislature. The Legislature has the power to override a gubernatorial veto by a simple majority (most state Legislatures require a two - thirds majority to override a veto). Until 1964, the state elected state senators on a geographic basis by county, with one per county. It had not redistricted congressional districts since passage of its constitution in 1901; as a result, urbanized areas were grossly underrepresented. It had not changed legislative districts to reflect the decennial censuses, either. In Reynolds v. Sims (1964), the US Supreme Court implemented the principle of "one man, one vote '', ruling that state legislative districts had to be reapportioned based on censuses (a requirement the state constitution already included but had not implemented). Further, the court ruled that both houses of bicameral state legislatures had to be apportioned by population, as there was no constitutional basis for states to have geographically based systems. At that time, Alabama and many other states had to change their legislative districting, as many across the country had systems that underrepresented urban areas and districts. This had caused decades of underinvestment in such areas. For instance, Birmingham and Jefferson County taxes had supplied one - third of the state budget, but Jefferson County received only 1 / 67th of state services in funding. Through the legislative delegations, the Alabama legislature kept control of county governments.

The executive branch is responsible for the execution and oversight of laws. It is headed by the Governor of Alabama.
Other members of the executive branch include the cabinet, the Attorney General of Alabama, the Alabama Secretary of State, the Alabama State Treasurer, and the State Auditor of Alabama. The current governor of the state is Republican Kay Ivey. The office of lieutenant governor is currently vacant. The members of the Legislature take office immediately after the November elections. Statewide officials, such as the governor, lieutenant governor, attorney general, and other constitutional officers, take office the following January.

The judicial branch is responsible for interpreting the Constitution and applying the law in state criminal and civil cases. The state 's highest court is the Supreme Court of Alabama. Alabama uses partisan elections to select judges. Since the 1980s judicial campaigns have become increasingly politicized. The current chief justice of the Alabama Supreme Court is Republican Roy Moore. All sitting justices on the Alabama Supreme Court are members of the Republican Party. There are two intermediate appellate courts, the Court of Civil Appeals and the Court of Criminal Appeals, and four trial courts: the circuit court (trial court of general jurisdiction), and the district, probate, and municipal courts.

Some critics believe that the election of judges has contributed to an exceedingly high rate of executions. Alabama has the highest per capita death penalty rate in the country. In some years, it imposes more death sentences than does Texas, a state with a population five times larger. Some of its cases have been highly controversial; the Supreme Court has overturned 24 convictions in death penalty cases. It is the only state that still allows judges to override jury decisions on whether to impose a death sentence; in 10 cases judges overturned sentences of life imprisonment without parole (LWOP) that had been voted unanimously by juries.
Alabama levies a 2, 4, or 5 percent personal income tax, depending upon the amount earned and filing status. Taxpayers are allowed to deduct their federal income tax from their Alabama state tax, and can do so even if taking the standard deduction. Taxpayers who file itemized deductions are also allowed to deduct the Federal Insurance Contributions Act tax (Social Security and Medicare tax). The state 's general sales tax rate is 4 %. Sales tax rates for cities and counties are added to purchases on top of the state rate. For example, the total sales tax rate in Mobile is 10 %, and there is an additional restaurant tax of 1 %, which means a diner in Mobile would pay an 11 % tax on a meal. As of 1999, sales and excise taxes in Alabama accounted for 51 % of all state and local revenue, compared with an average of about 36 % nationwide. Alabama is one of seven states that levy a tax on food at the same rate as other goods, and one of two states (the other being neighboring Mississippi) which fully taxes groceries without any offsetting relief for low - income families. (Most states exempt groceries from sales tax or apply a lower tax rate.) Alabama 's income tax on poor working families is among the highest in the United States. Alabama is the only state that levies income tax on a family of four with income as low as $4,600, which is barely one - quarter of the federal poverty line. Alabama 's threshold is the lowest among the 41 states and the District of Columbia with income taxes. The corporate income tax rate is currently 6.5 %. The overall federal, state, and local tax burden in Alabama ranks the state as the second least tax - burdened state in the country. Property taxes are the lowest in the U.S. The current state constitution requires a voter referendum to raise property taxes. Since Alabama 's tax structure largely depends on consumer spending, its budget is highly variable; for example, in 2003, Alabama had an annual budget deficit as high as $670 million.
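The Mobile meal example works out as simple rate stacking; the sketch below is illustrative only (the $50 meal price is a made-up figure, and the 6 % "local additions" share is just the gap between the 4 % state rate and Mobile 's 10 % combined rate, not an official breakdown):

```python
# Stacked sales tax on a restaurant meal in Mobile, per the rates quoted above.
STATE_RATE = 0.04        # Alabama's general sales tax rate
LOCAL_RATE = 0.06        # assumed city/county additions bringing the combined rate to 10%
RESTAURANT_RATE = 0.01   # Mobile's additional restaurant tax

def mobile_meal_total(price):
    """Return the total bill: meal price plus the combined 11% tax."""
    combined = STATE_RATE + LOCAL_RATE + RESTAURANT_RATE  # 0.11
    return round(price * (1 + combined), 2)

print(mobile_meal_total(50.00))  # a $50 meal carries $5.50 in tax, for $55.50 total
```

The same stacking applies to any purchase in the state: the 4 % state rate is the floor, and each overlapping jurisdiction's rate is added on top.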
Alabama has 67 counties. Each county has its own elected legislative branch, usually called the county commission, which also has limited executive authority in the county. Because of the constraints of the Alabama Constitution, which centralizes power in the state legislature, only seven counties (Jefferson, Lee, Mobile, Madison, Montgomery, Shelby, and Tuscaloosa) in the state have limited home rule. Instead, most counties in the state must lobby the Local Legislation Committee of the state legislature to get simple local policies approved, ranging from waste disposal to land use zoning. The cumbersome process results in local jurisdictions being unable to manage their problems, and in state legislators being buried in local county issues. The state legislature has retained power over local governments by refusing to pass a constitutional amendment establishing home rule for counties, as recommended by the 1973 Alabama Constitutional Commission. Legislative delegations retain certain powers over each county. The United States Supreme Court decisions in Baker v. Carr (1962) and Reynolds v. Sims (1964) required that both houses have districts established on the basis of population, and redistricted after each census, in order to implement the principle of "one man, one vote ''. Before that, each county was represented by one state senator, leading to under - representation in the state senate for more urbanized, populous counties. The rural bias of the state legislature, which had also failed to redistrict seats in the state house, affected politics well into the 20th century, failing to recognize the rise of industrial cities and urbanized areas. "The lack of home rule for counties in Alabama has resulted in the proliferation of local legislation permitting counties to do things not authorized by the state constitution. Alabama 's constitution has been amended more than 700 times, and almost one - third of the amendments are local in nature, applying to only one county or city.
A significant part of each legislative session is spent on local legislation, taking away time and attention of legislators from issues of statewide importance. ''

Alabama is an alcoholic beverage control state, meaning that the state government holds a monopoly on the sale of alcohol. The Alabama Alcoholic Beverage Control Board controls the sale and distribution of alcoholic beverages in the state. Twenty - five of the 67 counties are "dry counties '' which ban the sale of alcohol, and there are many dry municipalities even in counties which permit alcohol sales.

During Reconstruction following the American Civil War, Alabama was occupied by federal troops of the Third Military District under General John Pope. In 1874, the political coalition of white Democrats known as the Redeemers took control of the state government from the Republicans, in part by suppressing the African - American vote through violence, fraud and intimidation. After 1890, a coalition of White Democratic politicians passed laws to segregate and disenfranchise African American residents, a process completed in provisions of the 1901 constitution. Provisions which disenfranchised African Americans also resulted in excluding many poor Whites. By 1941 more Whites than African Americans had been disenfranchised: 600,000 to 520,000. The total effects were greater on the African - American community, as almost all of its citizens were disenfranchised and relegated to separate and unequal treatment under the law. From 1901 through the 1960s, the state did not redraw election districts as population grew and shifted within the state during urbanization and industrialization of certain areas. As counties were the basis of election districts, the result was a rural minority that dominated state politics through nearly three - quarters of the century, until a series of federal court cases required redistricting in 1972 to meet equal representation.
Alabama state politics gained nationwide and international attention in the 1950s and 1960s during the Civil Rights Movement, when whites bureaucratically and, at times, violently resisted protests for electoral and social reform. Governor George Wallace, the state 's only four - term governor, was a controversial figure who vowed to maintain segregation. Only after passage of the federal Civil Rights Act of 1964 and Voting Rights Act of 1965 did African Americans regain the ability to exercise suffrage, among other civil rights. In many jurisdictions, they continued to be excluded from representation by at - large electoral systems, which allowed the majority of the population to dominate elections. Some changes at the county level have occurred following court challenges to establish single - member districts that enable more diverse representation on county boards. In 2007, the Alabama Legislature passed, and Republican Governor Bob Riley signed, a resolution expressing "profound regret '' over slavery and its lingering impact. In a symbolic ceremony, the bill was signed in the Alabama State Capitol, which had housed the Congress of the Confederate States of America. In 2010, Republicans won control of both houses of the legislature for the first time in 136 years, after a nearly complete realignment of the political parties, which have come to represent different visions in the 21st century.

With the disenfranchisement of African Americans in 1901, the state became part of the "Solid South '', a system in which the Democratic Party operated as effectively the only viable political party in every Southern state. For nearly 100 years, local and state elections in Alabama were decided in the Democratic Party primary, with generally only token Republican challengers running in the General Election. Since the mid to late 20th century, however, there has been a realignment between the two major political parties, and white conservatives have shifted to the Republican Party.
In Alabama, majority - white districts are now expected to regularly elect Republican candidates to federal, state and local office. The nine seats on the Alabama Supreme Court and all ten seats on the state appellate courts are filled by election. Until 1994, no Republicans held any of the court seats. In that general election, the then - incumbent Chief Justice of Alabama, Ernest C. Hornsby, refused to leave office after losing the election by approximately 3,000 votes to Republican Perry O. Hooper, Sr. Hornsby sued Alabama and defiantly remained in office for nearly a year before finally giving up the seat after losing in court. This ultimately led to a collapse of support for Democrats at the ballot box over the next three or four election cycles. The Democrats lost the last of the nineteen court seats in August 2011 with the resignation of the last Democrat on the bench.

In the early 21st century, Republicans hold all seven of the statewide elected executive branch offices. Republicans hold six of the eight elected seats on the Alabama State Board of Education. In 2010, Republicans took large majorities of both chambers of the state legislature, giving them control of that body for the first time in 136 years. The last remaining statewide Democrat, who served on the Alabama Public Service Commission, was defeated in 2012. Only two Republican lieutenant governors have been elected since the end of Reconstruction, when Republicans generally represented Reconstruction government, including the newly emancipated freedmen who had gained the franchise. The two GOP lieutenant governors were Steve Windom (1999 -- 2003) and Kay Ivey, who was elected in 2010 and re-elected in 2014. Many local offices (county commissioners, boards of education, tax assessors, tax collectors, etc.) in the state are still held by Democrats.
Many rural counties have voters who are majority Democrats, resulting in local elections being decided in the Democratic primary. Similarly, many metropolitan and suburban counties are majority - Republican and elections are effectively decided in the Republican primary, although there are exceptions. Alabama 's 67 county sheriffs are elected in partisan, at - large races, and Democrats still retain a narrow majority of those posts. The current split is 35 Democrats, 31 Republicans, and one Independent (Fayette County). However, most of the Democratic sheriffs preside over rural and less populated counties. The majority of Republican sheriffs have been elected in the more urban / suburban and heavily populated counties. As of 2015, the state of Alabama has one female sheriff, in Morgan County, and ten African - American sheriffs.

The state 's two U.S. senators are Luther Strange and Richard C. Shelby, both Republicans. Shelby was originally elected to the Senate as a Democrat in 1986 and re-elected in 1992, but switched parties immediately following the November 1994 general election. In the U.S. House of Representatives, the state is represented by seven members, six of whom are Republicans (Bradley Byrne, Mike D. Rogers, Robert Aderholt, Morris J. Brooks, Martha Roby, and Gary Palmer) and one a Democrat, Terri Sewell, who represents the Black Belt as well as most of the predominantly black portions of Birmingham, Tuscaloosa and Montgomery.

Public primary and secondary education in Alabama is under the purview of the Alabama State Board of Education as well as local oversight by 67 county school boards and 60 city boards of education. Together, 1,496 individual schools provide education for 744,637 elementary and secondary students. Public school funding is appropriated through the Alabama Legislature through the Education Trust Fund. In FY 2006 -- 2007, Alabama appropriated $3,775,163,578 for primary and secondary education.
That represented an increase of $444,736,387 over the previous fiscal year. In 2007, over 82 percent of schools made adequate yearly progress (AYP) toward student proficiency under the national No Child Left Behind law, using measures determined by the state of Alabama. While Alabama 's public education system has improved in recent decades, it lags behind other states in achievement. According to U.S. Census data (2000), Alabama 's high school graduation rate -- 75 % -- is the fourth lowest in the U.S. (after Kentucky, Louisiana and Mississippi). The largest educational gains were among people with some college education but without degrees. Although unusual in the West, school corporal punishment is not uncommon in Alabama, with 27,260 public school students paddled at least once, according to government data for the 2011 -- 2012 school year. The rate of school corporal punishment in Alabama is surpassed only by Mississippi and Arkansas.

Alabama 's programs of higher education include 14 four - year public universities, two - year community colleges, and 17 private, undergraduate and graduate universities. As of fall 2015, the state is home to four medical schools (University of Alabama School of Medicine, University of South Alabama, Alabama College of Osteopathic Medicine, and The Edward Via College of Osteopathic Medicine -- Auburn Campus), two veterinary colleges (Auburn University and Tuskegee University), a dental school (University of Alabama School of Dentistry), an optometry college (University of Alabama at Birmingham), two pharmacy schools (Auburn University and Samford University), and five law schools (University of Alabama School of Law, Birmingham School of Law, Cumberland School of Law, Miles Law School, and the Thomas Goode Jones School of Law). Public, post-secondary education in Alabama is overseen by the Alabama Commission on Higher Education and the Alabama Department of Postsecondary Education.
Colleges and universities in Alabama offer degree programs from two - year associate degrees to a multitude of doctoral level programs. The largest single campus is the University of Alabama, located in Tuscaloosa, with 37,665 enrolled for fall 2016. Troy University was the largest institution in the state in 2010, with an enrollment of 29,689 students across four Alabama campuses (Troy, Dothan, Montgomery, and Phenix City), as well as sixty learning sites in seventeen other states and eleven other countries. The oldest institutions are the public University of North Alabama in Florence and the Catholic Church - affiliated Spring Hill College in Mobile, both founded in 1830. Accreditation of academic programs is through the Southern Association of Colleges and Schools (SACS) as well as other subject - focused national and international accreditation agencies such as the Association for Biblical Higher Education (ABHE), the Council on Occupational Education (COE), and the Accrediting Council for Independent Colleges and Schools (ACICS). According to the 2011 U.S. News & World Report, Alabama had three universities ranked in the top 100 Public Schools in America (University of Alabama at 31, Auburn University at 36, and University of Alabama at Birmingham at 73). According to the 2012 U.S. News & World Report, Alabama had four tier 1 universities (University of Alabama, Auburn University, University of Alabama at Birmingham and University of Alabama in Huntsville). Major newspapers include Birmingham News, Birmingham Post-Herald, Mobile Press - Register, and Montgomery Advertiser. Television news channels in Alabama include: ABC, CBS, Fox, NBC, and PBS / Alabama Public Television. College football is popular in Alabama, particularly the University of Alabama Crimson Tide and Auburn University Tigers, rivals in the Southeastern Conference. 
In the 2013 season, Alabama averaged over 100,000 fans per game and Auburn averaged over 80,000 fans, both numbers among the top 20 in the nation in average attendance. Bryant -- Denny Stadium is the home of the Alabama football team, and has a seating capacity of 101,821, and is the fifth largest stadium in America. Jordan - Hare Stadium is the home field of the Auburn football team and seats up to 87,451. Legion Field is home for the UAB Blazers football program and the Birmingham Bowl. It seats 80,601. Ladd -- Peebles Stadium in Mobile is the home of the University of South Alabama football team, and serves as the home of the NCAA Senior Bowl, Dollar General Bowl (formerly GoDaddy.com Bowl), and Alabama - Mississippi All Star Classic; the stadium seats 40,646. In 2009, Bryant -- Denny Stadium and Jordan - Hare Stadium became the homes of the Alabama High School Athletic Association state football championship games, after previously being held at Legion Field in Birmingham. Alabama has several professional and semi-professional sports teams, including three minor league baseball teams. The Talladega Superspeedway motorsports complex hosts a series of NASCAR events. It has a seating capacity of 143,000 and is the thirteenth largest stadium in the world and sixth largest stadium in America. Also, the Barber Motorsports Park has hosted IndyCar Series and Rolex Sports Car Series races. The ATP Birmingham was a World Championship Tennis tournament held from 1973 to 1980. Alabama has hosted several professional golf tournaments, such as the 1984 and 1990 PGA Championship at Shoal Creek, the Barbasol Championship (PGA Tour), the Mobile LPGA Tournament of Champions, Airbus LPGA Classic and Yokohama Tire LPGA Classic (LPGA Tour), and The Tradition (Champions Tour). 
Major airports with sustained commercial operations in Alabama include Birmingham - Shuttlesworth International Airport (BHM), Huntsville International Airport (HSV), Dothan Regional Airport (DHN), Mobile Regional Airport (MOB), Montgomery Regional Airport (MGM), and Muscle Shoals -- Northwest Alabama Regional Airport (MSL). For rail transport, Amtrak schedules the Crescent, a daily passenger train, running from New York to New Orleans with station stops at Anniston, Birmingham, and Tuscaloosa. Alabama has six major interstate roads that cross the state: Interstate 65 (I - 65) travels north -- south roughly through the middle of the state; I - 20 / I - 59 travel from the central west Mississippi state line to Birmingham, where I - 59 continues to the north - east corner of the state and I - 20 continues east towards Atlanta; I - 85 originates in Montgomery and travels east - northeast to the Georgia state line, providing a main thoroughfare to Atlanta; and I - 10 traverses the southernmost portion of the state, traveling from west to east through Mobile. I - 22 enters the state from Mississippi and connects Birmingham with Memphis, Tennessee. In addition, there are currently five auxiliary interstate routes in the state: I - 165 in Mobile, I - 359 in Tuscaloosa, I - 459 around Birmingham, I - 565 in Decatur and Huntsville, and I - 759 in Gadsden. A sixth route, I - 685, will be formed when I - 85 is rerouted along a new southern bypass of Montgomery. A proposed northern bypass of Birmingham will be designated as I - 422. Since a direct connection from I - 22 to I - 422 will not be possible, I - 222 has been proposed, as well. Several U.S. Highways also pass through the state, such as U.S. Route 11 (US - 11), US - 29, US - 31, US - 43, US - 45, US - 72, US - 78, US - 80, US - 82, US - 84, US - 90, US - 98, US - 231, US - 278, US - 280, US - 331, US - 411, and US - 431. 
There are four toll roads in the state: Montgomery Expressway in Montgomery; Tuscaloosa Bypass in Tuscaloosa; Emerald Mountain Expressway in Wetumpka; and Beach Express in Orange Beach. The Port of Mobile, Alabama 's only saltwater port, is a large seaport on the Gulf of Mexico with inland waterway access to the Midwest by way of the Tennessee - Tombigbee Waterway. The Port of Mobile was ranked 12th by tons of traffic in the United States during 2009. The newly expanded container terminal at the Port of Mobile was ranked as the 25th busiest for container traffic in the nation during 2011. The state 's other ports are on rivers with access to the Gulf of Mexico.
the far side of the moon cannot be seen from earth because
Far side of the Moon - wikipedia The far side of the Moon is the hemisphere of the Moon that always faces away from Earth. The far side 's terrain is rugged, with a multitude of impact craters and relatively few flat lunar maria. It has one of the largest craters in the Solar System, the South Pole -- Aitken basin. Although both sides of the Moon experience two weeks of sunlight followed by two weeks of night, the far side is sometimes called the "dark side of the Moon, '' with "dark '' meaning "unknown '' rather than lack of light. About 18 % of the far side is occasionally visible from Earth due to libration. The remaining 82 % remained unobserved until 1959, when the Soviet Union 's Luna 3 space probe photographed it. The Soviet Academy of Sciences published the first atlas of the far side in 1960. In 1968, the Apollo 8 mission 's astronauts were the first humans to view this region directly when they orbited the Moon. To date, no human being has ever stood on the surface of the far side of the Moon. Astronomers have suggested installing a large radio telescope on the far side, where the Moon would shield it from possible radio interference from Earth. Tidal forces from Earth have slowed down the Moon 's rotation so that the same side is always facing the Earth, a phenomenon called tidal locking. The other face, most of which is never visible from the Earth, is therefore called the "far side of the Moon ''. Over time some parts of the far side can be seen due to libration. In total, 59 percent of the Moon 's surface is visible from Earth at one time or another. Useful observation of the parts of the far side of the Moon occasionally visible from Earth is difficult because of the low viewing angle from Earth (they can not be observed "full on ''). 
The idiomatic phrase "dark side of the Moon '' does not refer to "dark '' as in the absence of light, but rather "dark '' as in unknown: until humans were able to send spacecraft around the Moon, this area had never been seen. While many misconstrue this to think that the "dark side '' receives little to no sunlight, in reality, both the near and far sides receive (on average) almost equal amounts of light directly from the Sun. However, the near side also receives sunlight reflected from the Earth, known as earthshine. Earthshine does not reach the area of the far side which can not be seen from Earth. Only during a full moon (as viewed from Earth) is the whole far side of the Moon dark. The word "dark '' has expanded to also refer to the fact that communication with spacecraft can be blocked while on the far side of the Moon, during Apollo space missions for example. The two hemispheres have distinctly different appearances, with the near side covered in multiple, large maria (Latin for ' seas, ' since the earliest astronomers incorrectly thought that these plains were seas of lunar water). The far side has a battered, densely cratered appearance with few maria. Only 1 % of the surface of the far side is covered by maria, compared to 31.2 % on the near side. One commonly accepted explanation for this difference is related to a higher concentration of heat - producing elements on the near - side hemisphere, as has been demonstrated by geochemical maps obtained from the Lunar Prospector gamma - ray spectrometer. While other factors such as surface elevation and crustal thickness could also affect where basalts erupt, these do not explain why the farside South Pole -- Aitken basin (which contains the lowest elevations of the Moon and possesses a thin crust) was not as volcanically active as Oceanus Procellarum on the near side. 
It has also been proposed that the differences between the two hemispheres may have been caused by a collision with a smaller companion moon that also originated from the Theia collision. In this model the impact led to an accretionary pile rather than a crater, contributing a hemispheric layer of extent and thickness that may be consistent with the dimensions of the farside highlands. The far side has more visible craters. This was thought to be a result of the effects of lunar lava flows, which cover and obscure craters, rather than a shielding effect from the Earth. NASA calculates that the Earth obscures only about 4 square degrees out of 41,000 square degrees of the sky as seen from the Moon. "This makes the Earth negligible as a shield for the Moon... It is likely that each side of the Moon has received equal numbers of impacts, but the resurfacing by lava results in fewer craters visible on the near side than the far side, even though the (sic) both sides have received the same number of impacts. '' Newer research suggests that the reason the side of the moon facing Earth has fewer impact craters is heat from Earth at the time when the Moon was formed. The lunar crust consists primarily of plagioclases formed when aluminium and calcium condensed and combined with silicates in the mantle. The cooler far side experienced condensation of these elements sooner and so formed a thicker crust; meteoroid impacts on the near side would sometimes penetrate the thinner crust here and release basaltic lava that created the maria, but would rarely do so on the far side. Until the late 1950s, little was known about the far side of the Moon. Librations of the Moon periodically allowed limited glimpses of features near the lunar limb on the far side. These features, however, were seen from a low angle, hindering useful observation. (It proved difficult to distinguish a crater from a mountain range.) 
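The shielding estimate quoted above can be checked with a few lines of Python. The Earth radius and mean Earth -- Moon distance below are standard reference values rather than figures from the text, and treating Earth's disk as flat on the sky is a simplification:

```python
import math

EARTH_RADIUS_KM = 6371          # mean Earth radius
EARTH_MOON_DIST_KM = 384_400    # mean Earth-Moon distance

# Angular radius of Earth's disk as seen from the Moon, in degrees.
ang_radius_deg = math.degrees(math.asin(EARTH_RADIUS_KM / EARTH_MOON_DIST_KM))

# Flat-disk approximation of the sky area Earth covers, in square degrees.
disk_sq_deg = math.pi * ang_radius_deg ** 2

# The whole sky spans 4*pi steradians, about 41,253 square degrees.
sky_sq_deg = 4 * math.pi * math.degrees(1) ** 2

fraction = disk_sq_deg / sky_sq_deg
print(f"Earth covers ~{disk_sq_deg:.1f} of {sky_sq_deg:,.0f} square degrees "
      f"({fraction:.4%} of the lunar sky)")
```

This simple geometry gives roughly 3 square degrees, the same order as NASA's figure of about 4; either way, the Earth blocks well under 0.01 % of the lunar sky, supporting the conclusion that it is negligible as a shield.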
The remaining 82 % of the surface on the far side remained unknown, and its properties were subject to much speculation. An example of a far side feature that can be seen through libration is the Mare Orientale, which is a prominent impact basin spanning almost 1,000 kilometres (600 mi), yet this was not even named as a feature until 1906, by Julius Franz in Der Mond. The true nature of the basin was discovered in the 1960s when rectified images were projected onto a globe. The basin was photographed in fine detail by Lunar Orbiter 4 in 1967. On October 7, 1959, the Soviet probe Luna 3 took the first photographs of the lunar far side, eighteen of them resolvable, covering one - third of the surface invisible from the Earth. The images were analysed, and the first atlas of the far side of the Moon was published by the USSR Academy of Sciences on November 6, 1960. It included a catalog of 500 distinguished features of the landscape. A year later the first globe (1: 13 600 000 scale) containing lunar features invisible from the Earth was released in the USSR, based on images from Luna 3. On July 20, 1965 another Soviet probe, Zond 3, transmitted 25 pictures of very good quality of the lunar far side, with much better resolution than those from Luna 3. In particular, they revealed chains of craters, hundreds of kilometers in length. In 1967 the second part of the "Atlas of the Far Side of the Moon '' was published in Moscow, based on data from Zond 3, with the catalog now including 4,000 newly discovered features of the lunar far side landscape. In the same year the first "Complete Map of the Moon '' (1: 5 000 000 scale) and updated complete globe (1: 10 000 000 scale), featuring 95 percent of the lunar surface, were released in the Soviet Union. As many prominent landscape features of the far side were discovered by Soviet space probes, Soviet scientists selected names for them. 
This caused some controversy, and the International Astronomical Union, leaving many of those names intact, later assumed the role of naming lunar features on this hemisphere. On April 26, 1962, NASA 's Ranger 4 space probe became the first spacecraft to impact the far side of the Moon, although it failed to return any scientific data before impact. The first truly comprehensive and detailed mapping survey of the far side was undertaken by the American unmanned Lunar Orbiter program launched by NASA from 1966 to 1967. Most of the coverage of the far side was provided by the final probe in the series, Lunar Orbiter 5. The far side was first seen directly by human eyes during the Apollo 8 mission in 1968. Astronaut William Anders described the view: The backside looks like a sand pile my kids have played in for some time. It 's all beat up, no definition, just a lot of bumps and holes. It has been seen by all crew members of the Apollo 8 and Apollo 10 through Apollo 17 missions since that time, and photographed by multiple lunar probes. Spacecraft passing behind the Moon were out of direct radio communication with the Earth, and had to wait until the orbit allowed transmission. During the Apollo missions, the main engine of the Service Module was fired when the vessel was behind the Moon, producing some tense moments in Mission Control before the craft reappeared. Geologist - astronaut Harrison Schmitt, who became the last to step onto the Moon, had aggressively lobbied for his landing site to be on the far side of the Moon, targeting the lava - filled crater Tsiolkovskiy. Schmitt 's ambitious proposal included a special communications satellite based on the existing TIROS satellites to be launched into a Farquhar -- Lissajous halo orbit around the L2 point so as to maintain line - of - sight contact with the astronauts during their powered descent and lunar surface operations. NASA administrators rejected these plans on the grounds of added risk and lack of funding. 
The China National Space Administration plans for its Chang'e 4 mission to achieve the first landing on the lunar far side. In May 2015, chief lunar exploration engineer Wu Weiren told China Central Television, "We are currently discussing the next moon landing site for Chang'e 4 (...) Our next move probably will see some spacecraft land on the far side of the moon. '' In September 2015, the Xinhua news agency confirmed that Chang'e 4 would attempt a far side landing before 2020, equipped with a low frequency radio spectrograph and geological research tools. Meanwhile, the plans have assumed concrete form, with a planned launch in late 2018 and several landing sites proposed. Because the far side of the Moon is shielded from radio transmissions from the Earth, it is considered a good location for placing radio telescopes for use by astronomers. Small, bowl - shaped craters provide a natural formation for a stationary telescope similar to Arecibo in Puerto Rico. For much larger - scale telescopes, the 100 - kilometer (62 mi) diameter crater Daedalus is situated near the center of the far side, and the 3 km (2 mi) - high rim would help to block stray communications from orbiting satellites. Another potential candidate for a radio telescope is the Saha crater. Before deploying radio telescopes to the far side, several problems must be overcome. The fine lunar dust can contaminate equipment, vehicles, and space suits. The conducting materials used for the radio dishes must also be carefully shielded against the effects of solar flares. Finally, the area around the telescopes must be protected against contamination by other radio sources. The L2 Lagrange point of the Earth -- Moon system is located about 62,800 km (39,000 mi) above the far side, which has also been proposed as a location for a future radio telescope which would perform a Lissajous orbit about the Lagrangian point. 
One of the NASA missions to the Moon under study would send a sample - return lander to the South Pole -- Aitken basin, the location of a major impact event that created a formation nearly 2,400 kilometers (1,491 mi) across. The force of this impact has created a deep penetration into the lunar surface, and a sample returned from this site could be analyzed for information concerning the interior of the Moon. Because the near side is partly shielded from the solar wind by the Earth, the far side maria are expected to have the highest concentration of helium - 3 on the surface of the Moon. This isotope is relatively rare on the Earth, but has good potential for use as a fuel in fusion reactors. Proponents of lunar settlement have cited the presence of this material as a reason for developing a Moon base. Some conspiracy theorists, notably Milton William Cooper, have alleged that some Apollo astronauts had seen UFOs on the far side of the Moon but were told to keep quiet about them. Some have allegedly reported seeing an alien base (code named "Luna '') and even encountered aliens who told them to stay off the Moon. Some photographs circulated on the Internet purport to show a large "castle '' on the Moon. NASA states that these claims are hoaxes. The late Secretary of Defense Robert McNamara stated that several unspecified officials ("Chiefs '') within The Pentagon were opposed to a Nuclear Test Ban Treaty between the United States and the Soviet Union, on the premise that the Soviets would continue nuclear weapons testing on the far side of the Moon, far from the observations of American observers. McNamara considered this premise "... absurd '' and that "... (they were) out of (their) minds '', but he believed that it was an example of the state of mind of some Pentagon officials during the Cold War. Ironically, it was later revealed that the Pentagon had their own plan to detonate a nuclear weapon on the moon as part of the experiment Project A119. 
The project was created not only to help in answering some of the mysteries in planetary astronomy and astrogeology, but also as a show of force intended to boost domestic confidence in the astro - capabilities of the United States, a boost that was needed after the Soviet Union took an early lead in the Space Race and who were thought by some to be working on a similar project.
when was the first pair of headphones invented
Headphones - wikipedia Headphones (or head - phones in the early days of telephony and radio) are a pair of small loudspeaker drivers worn on or around the head over a user 's ears. They are electroacoustic transducers, which convert an electrical signal to a corresponding sound. Headphones let a single user listen to an audio source privately, in contrast to a loudspeaker, which emits sound into the open air for anyone nearby to hear. Headphones are also known as earspeakers, earphones or, colloquially, cans. Circumaural and supra - aural headphones use a band over the top of the head to hold the speakers in place. The other type, known as earbuds or earpieces, consists of individual units that plug into the user 's ear canal. In the context of telecommunication, a headset is a combination of headphone and microphone. Headphones connect to a signal source such as an audio amplifier, radio, CD player, portable media player, mobile phone, video game console, or electronic musical instrument, either directly using a cord, or using wireless technology such as Bluetooth, DECT or FM radio. The first headphones were developed in the late 19th century for use by telephone operators, to keep their hands free. Initially the audio quality was mediocre; a step forward was the invention of high fidelity headphones. Headphones are made in a range of different audio reproduction quality capabilities. Headsets designed for telephone use typically can not reproduce sound with the high fidelity of expensive units designed for music listening by audiophiles. Headphones that use cables typically have either a 1 / 4 inch (6.35 mm) or 1 / 8 inch (3.5 mm) phone jack for plugging the headphones into the audio source. Some stereo earbuds are wireless, using Bluetooth connectivity to transmit the audio signal by radio waves from source devices like cellphones and digital players. 
Due to the spread of wireless devices in recent years, headphones are increasingly used by people in public places such as sidewalks, grocery stores, and public transit. Headphones are also used by people in various professional contexts, such as audio engineers mixing sound for live concerts or sound recordings; DJs, who use headphones to cue up the next song without the audience hearing; aircraft pilots; and call center employees. The latter two types of employees use headphones with an integrated microphone. Headphones originated from the telephone receiver earpiece, and were the only way to listen to electrical audio signals before amplifiers were developed. The first truly successful set was developed in 1910 by Nathaniel Baldwin, who made them by hand in his kitchen and sold them to the United States Navy. These early headphones used moving iron drivers, with either single - ended or balanced armatures. The common single - ended type used voice coils wound around the poles of a permanent magnet, which were positioned close to a flexible steel diaphragm. The audio current through the coils varied the magnetic field of the magnet, exerting a varying force on the diaphragm, causing it to vibrate, creating sound waves. The requirement for high sensitivity meant that no damping was used, so the frequency response of the diaphragm had large peaks due to resonance, resulting in poor sound quality. These early models lacked padding, and were often uncomfortable to wear for long periods. Their impedance varied; headphones used in telegraph and telephone work had an impedance of 75 ohms. Those used with early wireless radio had more turns of finer wire to increase sensitivity. Impedance of 1000 to 2000 ohms was common, which suited both crystal sets and triode receivers. Some very sensitive headphones, such as those manufactured by Brandes around 1919, were commonly used for early radio work. 
In early powered radios, the headphone was part of the vacuum tube 's plate circuit and carried dangerous voltages. It was normally connected directly to the positive high voltage battery terminal, and the other battery terminal was securely grounded. The use of bare electrical connections meant that users could be shocked if they touched the bare headphone connections while adjusting an uncomfortable headset. In 1958, John C. Koss, an audiophile and jazz musician from Milwaukee, produced the first stereo headphones. Previously, headphones were used only by the US navy, telephone and radio operators, and individuals in similar industries. Smaller earbud type earpieces, which plugged into the user 's ear canal, were first developed for hearing aids. They became widely used with transistor radios, which commercially appeared in 1954 with the introduction of the Regency TR - 1. The most popular audio device in history, the transistor radio changed listening habits, allowing people to listen to radio anywhere. The earbud uses either a moving iron driver or a piezoelectric crystal to produce sound. The 3.5 mm radio and phone connector, which is the most commonly used in portable application today, has been used at least since the Sony EFM - 117J transistor radio, which was released in 1964. Its popularity was reinforced with its use on the Walkman portable tape player in 1979. Headphones may be used with stationary CD and DVD players, home theater, personal computers, or portable devices (e.g., digital audio player / MP3 player, mobile phone). Cordless headphones are not connected to their source by a cable. Instead, they receive a radio or infrared signal encoded using a radio or infrared transmission link, such as FM, Bluetooth or Wi - Fi. These are powered receiver systems, of which the headphone is only a component. Cordless headphones are used with events such as a Silent disco or Silent Gig. 
In the professional audio sector, headphones are used in live situations by disc jockeys with a DJ mixer, and sound engineers for monitoring signal sources. In radio studios, DJs use a pair of headphones while talking into the microphone with the speakers turned off, to eliminate acoustic feedback while monitoring their own voice. In studio recordings, musicians and singers use headphones to play or sing along to a backing track or band. In military applications, audio signals of many varieties are monitored using headphones. Wired headphones are attached to an audio source by a cable. The most common connectors are 6.35 mm (1⁄4 '') and 3.5 mm phone connectors. The larger 6.35 mm connector is more common on fixed location home or professional equipment. The 3.5 mm connector remains the most widely used connector for portable application today. Adapters are available for converting between 6.35 mm and 3.5 mm devices. Electrical characteristics of dynamic loudspeakers may be readily applied to headphones, because most headphones are small dynamic loudspeakers. Headphones are available with low or high impedance (typically measured at 1 kHz). Low - impedance headphones are in the range 16 to 32 ohms and high - impedance headphones are about 100 -- 600 ohms. As the impedance of a pair of headphones increases, more voltage (at a given current) is required to drive it, and the loudness of the headphones for a given voltage decreases. In recent years, impedance of newer headphones has generally decreased to accommodate lower voltages available on battery powered CMOS - based portable electronics. This has resulted in headphones that can be more efficiently driven by battery - powered electronics. Consequently, newer amplifiers are based on designs with relatively low output impedance. The impedance of headphones is of concern because of the output limitations of amplifiers. 
A modern pair of headphones is driven by an amplifier, with lower impedance headphones presenting a larger load. Amplifiers are not ideal; they also have some output impedance that limits the amount of power they can provide. To ensure an even frequency response, adequate damping factor, and undistorted sound, an amplifier should have an output impedance less than 1 / 8 that of the headphones it is driving (and ideally, as low as possible). If output impedance is large compared to the impedance of the headphones, significantly higher distortion is present. Therefore, lower impedance headphones tend to be louder and more efficient, but also demand a more capable amplifier. Higher impedance headphones are more tolerant of amplifier limitations, but produce less volume for a given output level. Historically, many headphones had relatively high impedance, often over 500 ohms, so they could operate well with high - impedance tube amplifiers. In contrast, modern transistor amplifiers can have very low output impedance, enabling lower - impedance headphones. Unfortunately, this means that older audio amplifiers or stereos often produce poor - quality output on some modern, low - impedance headphones. In this case, an external headphone amplifier may be beneficial. Sensitivity is a measure of how effectively an earpiece converts an incoming electrical signal into an audible sound. It thus indicates how loud the headphones are for a given electrical drive level. It can be measured in decibels of sound pressure level per milliwatt (dB (SPL) / mW) or decibels of sound pressure level per volt (dB (SPL) / V). Unfortunately, both definitions are widely used, often interchangeably. As the output voltage (but not power) of a headphone amplifier is essentially constant for most common headphones, dB / mW is often more useful if converted into dB / V using Ohm 's law: dB (SPL) / V = dB (SPL) / mW + 10 · log10 (1000 / Z), where Z is the headphone impedance in ohms. Alternatively, online calculators can be used. 
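The sensitivity conversion and the "1 / 8" output-impedance guideline described above can be sketched in a few lines of Python. The function names `db_mw_to_db_v` and `damping_ok` are illustrative, not from any standard; the conversion follows from Ohm's law, since 1 V across Z ohms dissipates 1000 / Z milliwatts:

```python
import math

def db_mw_to_db_v(sens_db_mw: float, impedance_ohms: float) -> float:
    """Convert sensitivity from dB(SPL)/mW to dB(SPL)/V.

    One volt across Z ohms delivers 1000 / Z milliwatts, so the figure
    gains 10 * log10(1000 / Z) dB when referenced to 1 V instead of 1 mW.
    """
    return sens_db_mw + 10 * math.log10(1000 / impedance_ohms)

def damping_ok(amp_out_ohms: float, headphone_ohms: float) -> bool:
    """Check the guideline that amplifier output impedance should be
    less than 1/8 of the headphone impedance."""
    return amp_out_ohms < headphone_ohms / 8

# A 32-ohm earbud rated 100 dB(SPL)/mW is ~114.9 dB(SPL)/V ...
print(db_mw_to_db_v(100, 32))
# ... while a 300-ohm headphone with the same dB/mW rating is much
# quieter per volt, ~105.2 dB(SPL)/V.
print(db_mw_to_db_v(100, 300))

print(damping_ok(2, 32))   # a 2-ohm output driving 32-ohm headphones: True
```

The two prints illustrate the point in the text: per milliwatt the two headphones look identical, but per volt (what a typical amplifier actually delivers) the low-impedance pair is nearly 10 dB louder.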
Once the sensitivity per volt is known, the maximum volume for a pair of headphones can be easily calculated from the maximum amplifier output voltage. For example, for a headphone with a sensitivity of 100 dB (SPL) / V, an amplifier with an output of 1 root mean square (RMS) voltage produces a maximum volume of 100 dB. Pairing high sensitivity headphones with power amplifiers can produce dangerously high volumes and damage headphones. The maximum sound pressure level is a matter of preference, with some sources recommending no higher than 110 to 120 dB. In contrast, the American Occupational Safety and Health Administration recommends an average SPL of no more than 85 dB (A) to avoid long - term hearing loss, while the European Union standard EN 50332 - 1: 2013 recommends that volumes above 85 dB (A) include a warning, with an absolute maximum volume (defined using 40 -- 4000 Hz noise) of no more than 100 dB to avoid accidental hearing damage. Using this standard, headphones with sensitivities of 90, 100 and 110 dB (SPL) / V should be driven by an amplifier capable of no more than 3.162, 1.0 and 0.3162 RMS volts at maximum volume setting, respectively to reduce the risk of hearing damage. The sensitivity of headphones is usually between about 80 and 125 dB / mW and usually measured at 1 kHz. Headphone size can affect the balance between fidelity and portability. Generally, headphone form factors can be divided into four separate categories: circumaural (over-ear), supra - aural (on - ear), earbud and in - ear. Circumaural headphones (sometimes called full size headphones or over-ear headphones) have circular or ellipsoid earpads that encompass the ears. Because these headphones completely surround the ear, circumaural headphones can be designed to fully seal against the head to attenuate external noise. Because of their size, circumaural headphones can be heavy and there are some sets that weigh over 500 grams (1 lb). 
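The worked figures in this paragraph follow directly from the definition of the decibel: the RMS voltage needed to reach a target SPL is 10 ** ((target − sensitivity) / 20). A minimal sketch reproducing the numbers above (`max_safe_volts` is an illustrative name, not a term from EN 50332):

```python
def max_safe_volts(sens_db_v: float, max_spl_db: float = 100.0) -> float:
    """RMS voltage at which headphones of the given dB(SPL)/V sensitivity
    reach the target SPL (default: the 100 dB EN 50332 cap cited above)."""
    return 10 ** ((max_spl_db - sens_db_v) / 20)

for sens in (90, 100, 110):
    print(f"{sens} dB(SPL)/V -> {max_safe_volts(sens):.4f} V RMS")
# 90 -> 3.1623 V, 100 -> 1.0000 V, 110 -> 0.3162 V,
# matching the figures quoted in the text.
```

The same formula inverted gives the example earlier in the paragraph: a 100 dB(SPL)/V headphone driven at 1 V RMS produces 100 dB SPL.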
Ergonomic headband and earpad design is required to reduce discomfort resulting from weight. These are commonly used by drummers in recording. Supra - aural headphones or on - ear headphones have pads that press against the ears, rather than around them. They were commonly bundled with personal stereos during the 1980s. This type of headphone generally tends to be smaller and lighter than circumaural headphones, resulting in less attenuation of outside noise. Supra - aural headphones can also lead to discomfort due to the pressure on the ear as compared to circumaural headphones that sit around the ear. Comfort may vary due to the earcup material. Both circumaural and supra - aural headphones can be further differentiated by the type of earcups: Open - back headphones have the back of the earcups open. This leaks more sound out of the headphone and also lets more ambient sounds into the headphone, but gives a more natural or speaker - like sound, due to including sounds from the environment. Closed - back (or sealed) styles have the back of the earcups closed. They usually block some of the ambient noise. Closed - back headphones usually can produce stronger low frequencies than open - back headphones. Semi-open headphones have a design that can be considered a compromise between open - back headphones and closed - back headphones. Some believe the term "semi-open '' is purely there for marketing purposes. There is no exact definition for the term semi-open headphone. Where the open - back approach has hardly any measure to block sound at the outer side of the diaphragm and the closed - back approach really has a closed chamber at the outer side of the diaphragm, a semi-open headphone can have a chamber to partially block sound while letting some sound through via openings or vents. Earphones are very small headphones that are fitted directly in the outer ear, facing but not inserted in the ear canal. 
Earphones are portable and convenient, but many people consider them uncomfortable. They provide hardly any acoustic isolation and leave room for ambient noise to seep in; users may turn up the volume dangerously high to compensate, at the risk of causing hearing loss. On the other hand, they let the user be better aware of their surroundings. Since the early days of the transistor radio, earphones have commonly been bundled with personal music devices. They are sold at times with foam pads for comfort. (The use of the term earbuds, which has been around since at least 1984, did not hit its peak until after 2001, with the success of Apple 's MP3 player.) In - ear headphones, also known as in - ear monitors (IEMs) or canalphones, are small headphones with similar portability to earbuds that are inserted in the ear canal itself. IEMs are higher - quality in - ear headphones and are used by audio engineers and musicians as well as audiophiles. The outer shells of in - ear headphones are made up of a variety of materials, such as plastic, aluminum, ceramic and other metal alloys. Because in - ear headphones engage the ear canal, they can be prone to sliding out, and they block out much environmental noise. Lack of sound from the environment can be a problem when sound is a necessary cue for safety or other reasons, as when walking, driving, or riding near or in vehicular traffic. Generic or custom - fitting ear canal plugs are made from silicone rubber, elastomer, or foam. Custom in - ear headphones use castings of the ear canal to create custom - molded plugs that provide added comfort and noise isolation. This type combines advantages of earbuds and in - ear headphones -- depending on the environment and requirements of the user, they provide passive noise reduction for quality mode (conversation or active music listening) or they give control over the sound environment around user in comfort mode (stand by or background voice / music listening). 
A headset is a headphone combined with a microphone. Headsets provide the equivalent functionality of a telephone handset with hands - free operation. Among applications for headsets, besides telephone use, are aviation, theatre or television studio intercom systems, and console or PC gaming. Headsets are made with either a single - earpiece (mono) or a double - earpiece (mono to both ears or stereo). The microphone arm of headsets is either an external microphone type where the microphone is held in front of the user 's mouth, or a voicetube type where the microphone is housed in the earpiece and speech reaches it by means of a hollow tube. Telephone headsets connect to a fixed - line telephone system. A telephone headset functions by replacing the handset of a telephone. Headsets for standard corded telephones are fitted with a standard 4P4C commonly called an RJ - 9 connector. Headsets are also available with 2.5 mm jack sockets for many DECT phones and other applications. Cordless bluetooth headsets are available, and often used with mobile telephones. Headsets are widely used for telephone - intensive jobs, in particular by call centre workers. They are also used by anyone wishing to hold telephone conversations with both hands free. For older models of telephones, the headset microphone impedance is different from that of the original handset, requiring a telephone amplifier for the telephone headset. A telephone amplifier provides basic pin - alignment similar to a telephone headset adaptor, but it also offers sound amplification for the microphone as well as the loudspeakers. Most models of telephone amplifiers offer volume control for loudspeaker as well as microphone, mute function and switching between headset and handset. Telephone amplifiers are powered by batteries or AC adaptors. 
Unwanted sound from the environment can be reduced by excluding sound from the ear by passive noise isolation, or, often in conjunction with isolation, by active noise cancellation. Passive noise isolation is essentially using the body of the earphone, either over or in the ear, as a passive earplug that simply blocks out sound. The headphone types that provide most attenuation are in - ear canal headphones and closed - back headphones, both circumaural and supra aural. Open - back and earbud headphones provide some passive noise isolation, but much less than the others. Typical closed - back headphones block 8 to 12 dB, and in - ears anywhere from 10 to 15 dB. Some models have been specifically designed for drummers to facilitate the drummer monitoring the recorded sound while reducing sound directly from the drums as much as possible. Such headphones claim to reduce ambient noise by around 25 dB. Active noise - cancelling headphones use a microphone, amplifier, and speaker to pick up, amplify, and play ambient noise in phase - reversed form; this to some extent cancels out unwanted noise from the environment without affecting the desired sound source, which is not picked up and reversed by the microphone. They require a power source, usually a battery, to drive their circuitry. Active noise cancelling headphones can attenuate ambient noise by 20 dB or more, but the active circuitry is mainly effective on constant sounds and at lower frequencies, rather than sharp sounds and voices. Some noise cancelling headphones are designed mainly to reduce low - frequency engine and travel noise in aircraft, trains, and automobiles, and are less effective in environments with other types of noise. Headphones use various types of transducer to convert electrical signals to sound. The moving coil driver, more commonly referred to as a "dynamic '' driver is the most common type used in headphones. 
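Returning to active noise cancellation: the phase-reversal principle is easy to demonstrate numerically. In this sketch (signal values are illustrative, not from any real product) a steady low-frequency tone is cancelled perfectly by its phase-reversed copy, while a few samples of latency in the anti-noise path leave a residual, which is one reason ANC copes poorly with sharp transient sounds:

```python
import numpy as np

fs = 48_000                                  # sample rate, Hz
t = np.arange(fs) / fs                       # one second of time stamps
noise = 0.5 * np.sin(2 * np.pi * 120 * t)    # steady 120 Hz "engine" noise
anti_noise = -noise                          # picked up, amplified, phase-reversed

residual = noise + anti_noise                # what reaches the ear from the noise
print(np.max(np.abs(residual)))              # 0.0: ideal cancellation

# With a small processing delay (assumed 5 samples) cancellation degrades:
delay = 5
residual_delayed = noise[delay:] + anti_noise[:-delay]
print(np.max(np.abs(residual_delayed)) > 0.01)   # True: imperfect cancellation
```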
It consists of a stationary magnet element affixed to the frame of the headphone, which sets up a static magnetic field. The magnet in headphones is typically composed of ferrite or neodymium. A voice coil, a light coil of wire, is suspended in the magnetic field of the magnet, attached to a diaphragm, typically fabricated from lightweight, high - stiffness - to - mass - ratio cellulose, polymer, carbon material, paper or the like. When the varying current of an audio signal is passed through the coil, it creates a varying magnetic field that reacts against the static magnetic field, exerting a varying force on the coil causing it and the attached diaphragm to vibrate. The vibrating diaphragm pushes on the air to produce sound waves. Electrostatic drivers consist of a thin, electrically charged diaphragm, typically a coated PET film membrane, suspended between two perforated metal plates (electrodes). The electrical sound signal is applied to the electrodes creating an electrical field; depending on the polarity of this field, the diaphragm is drawn towards one of the plates. Air is forced through the perforations; combined with a continuously changing electrical signal driving the membrane, a sound wave is generated. Electrostatic headphones are usually more expensive than moving - coil ones, and are comparatively uncommon. In addition, a special amplifier is required to amplify the signal to deflect the membrane, which often requires electrical potentials in the range of 100 to 1000 volts. Due to the extremely thin and light diaphragm membrane, often only a few micrometers thick, and the complete absence of moving metalwork, the frequency response of electrostatic headphones usually extends well above the audible limit of approximately 20 kHz. The high frequency response means that the low midband distortion level is maintained to the top of the audible frequency band, which is generally not the case with moving coil drivers. 
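The moving-coil mechanism described above comes down to the Lorentz force on the voice coil, F = B·L·I. A toy calculation with assumed values (none taken from the text):

```python
# Illustrative numbers for a small dynamic driver (all three are assumptions):
B = 1.0    # magnetic flux density in the gap, tesla
L = 3.0    # effective length of coil wire inside the field, metres
I = 0.01   # instantaneous signal current, amperes

force = B * L * I       # Lorentz force on the coil and diaphragm, newtons
print(round(force, 4))  # 0.03 N
```

As the signal current alternates, so does the force, and the attached diaphragm vibrates accordingly.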
Also, the frequency response peakiness regularly seen in the high-frequency region with moving-coil drivers is absent. Well-designed electrostatic headphones can produce significantly better sound quality than other types. Electrostatic headphones require a voltage source generating 100 V to over 1 kV, and sit on the user's head; with adequate insulation, however, there is no actual danger. The supply does not need to deliver significant electric current, which further limits the electrical hazard to the wearer in case of fault. An electret driver functions along the same electromechanical principles as an electrostatic driver; however, the electret driver has a permanent charge built into it, whereas electrostatics have the charge applied to the driver by an external generator. Electret and electrostatic headphones are relatively uncommon. Original electrets were also typically cheaper and lower in technical capability and fidelity than electrostatics. Patent applications approved between 2009 and 2013 show that, by using different materials (e.g. a "fluorinated cyclic olefin electret film"), frequency response chart readings can reach 50 kHz at 100 dB. When these improved electrets are combined with a traditional dome headphone driver, headphones can be produced that are recognised by the Japan Audio Society as worthy of joining the Hi-Res Audio program (US patents 8,559,660 B2; 7,732,547 B2; 7,879,446 B2; 7,498,699 B2). Orthodynamic (also known as planar magnetic) headphones use technology similar to electrostatic headphones, with some fundamental differences. They operate similarly to planar magnetic loudspeakers. An orthodynamic driver consists of a relatively large membrane that contains an embedded wire pattern. This membrane is suspended between two sets of permanent, oppositely aligned magnets.
A current passed through the wires embedded in the membrane produces a magnetic field that reacts with the field of the permanent magnets to induce movement in the membrane, which produces sound. A balanced armature is a sound transducer design primarily intended to increase the electrical efficiency of the element by eliminating the stress on the diaphragm characteristic of many other magnetic transducer systems. As shown schematically in the first diagram, it consists of a moving magnetic armature that is pivoted so it can move in the field of the permanent magnet. When precisely centered in the magnetic field there is no net force on the armature, hence the term ' balanced. ' As illustrated in the second diagram, when there is electric current through the coil, it magnetizes the armature one way or the other, causing it to rotate slightly one way or the other about the pivot thus moving the diaphragm to make sound. The design is not mechanically stable; a slight imbalance makes the armature stick to one pole of the magnet. A fairly stiff restoring force is required to hold the armature in the ' balance ' position. Although this reduces its efficiency, this design can still produce more sound from less power than any other. Popularized in the 1920s as Baldwin Mica Diaphragm radio headphones, balanced armature transducers were refined during World War II for use in military sound powered telephones. Some of these achieved astonishing electro - acoustic conversion efficiencies, in the range of 20 % to 40 %, for narrow bandwidth voice signals. Today they are typically used only in in - ear headphones and hearing aids, where their high efficiency and diminutive size is a major advantage. They generally are limited at the extremes of the hearing spectrum (e.g. below 20 Hz and above 16 kHz) and require a better seal than other types of drivers to deliver their full potential. 
Higher - end models may employ multiple armature drivers, dividing the frequency ranges between them using a passive crossover network. A few combine an armature driver with a small moving - coil driver for increased bass output. The earliest loudspeakers for radio receivers used balanced armature drivers for their cones. The thermoacoustic effect generates sound from the audio frequency Joule heating of the conductor, an effect that is not magnetic and does not vibrate the speaker. In 2013 a carbon nanotube thin - yarn earphone based on the thermoacoustic mechanism was demonstrated by a research group in Tsinghua University. The as - produced CNT thin yarn earphone has a working element called CNT thin yarn thermoacoustic chip. Such a chip is composed of a layer of CNT thin yarn array supported by the silicon wafer, and periodic grooves with certain depth are made on the wafer by micro-fabrication methods to suppress the heat leakage from the CNT yarn to the substrate. Transducer technologies employed much less commonly for headphones include the Heil Air Motion Transformer (AMT); Piezoelectric film; Ribbon planar magnetic; Magnetostriction and Plasma - ionisation. The first Heil AMT headphone was marketed by ESS Laboratories and was essentially an ESS AMT tweeter from one of the company 's speakers being driven at full range. Since the turn of the century, only Precide of Switzerland have manufactured an AMT headphone. Piezoelectric film headphones were first developed by Pioneer, their two models used a flat sheet of film that limited the maximum volume of air movement. Currently, TakeT produces a piezoelectric film headphone shaped similarly to an AMT transducer but, which like the Precide driver, has a variation in the size of transducer folds over the diaphragm. It additionally incorporates a two way design by its inclusion of a dedicated tweeter / supertweeter panel. 
The folded shape of a diaphragm allows a transducer with a larger surface area to fit within smaller space constraints. This increases the total volume of air that can be moved on each excursion of the transducer, given that radiating area. Magnetostriction headphones, sometimes sold under the label Bonephones, work by vibrating against the side of the head, transmitting sound via bone conduction. This is particularly helpful in situations where the ears must be unobstructed, or for people who are deaf for reasons that don't affect the nervous apparatus of hearing. Magnetostriction headphones, though, are limited in their fidelity compared to conventional headphones that rely on the normal workings of the ear. Additionally, in the early 1990s, a French company called Plasmasonics tried to market a plasma-ionisation headphone. There are no known functioning examples left. Headphones can prevent other people from hearing the sound, either for privacy or to prevent disturbing others, as when listening in a public library. They can also provide a level of sound fidelity greater than loudspeakers of similar cost. Part of their ability to do so comes from the lack of any need to perform room-correction treatments with headphones. High-quality headphones can have an extremely flat low-frequency response down to 20 Hz within 3 dB. While a loudspeaker must use a relatively large (often 15- or 18-inch) speaker driver to reproduce low frequencies, headphones can accurately reproduce bass and sub-bass frequencies with speaker drivers only 40-50 millimeters wide (or much smaller, as is the case with in-ear monitor headphones). Headphones' impressive low-frequency performance is possible because they are so much closer to the ear that they only need to move relatively small volumes of air. Marketed claims such as "frequency response 4 Hz to 20 kHz" are usually overstatements; the product's response at frequencies lower than 20 Hz is typically very small.
Headphones are also useful for video games that use 3D positional audio processing algorithms, as they allow players to better judge the position of an off - screen sound source (such as the footsteps of an opponent or their gun fire). Although modern headphones have been particularly widely sold and used for listening to stereo recordings since the release of the Walkman, there is subjective debate regarding the nature of their reproduction of stereo sound. Stereo recordings represent the position of horizontal depth cues (stereo separation) via volume and phase differences of the sound in question between the two channels. When the sounds from two speakers mix, they create the phase difference the brain uses to locate direction. Through most headphones, because the right and left channels do not combine in this manner, the illusion of the phantom center can be perceived as lost. Hard panned sounds are also heard only in one ear rather than from one side. Binaural recordings use a different microphone technique to encode direction directly as phase, with very little amplitude difference below 2 kHz, often using a dummy head. They can produce a surprisingly lifelike spatial impression through headphones. Commercial recordings almost always use stereo recording, rather than binaural, because loudspeaker listening is more common than headphone listening. It is possible to change the spatial effects of stereo sound on headphones, to better approximate the presentation of speaker reproduction, by using frequency - dependent cross-feed between the channels. Headsets can have ergonomic benefits over traditional telephone handsets. They allow call center agents to maintain better posture without needing to hand - hold a handset or tilt their head sideways to cradle it. Using headphones at a sufficiently high volume level may cause temporary or permanent hearing impairment or deafness. 
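The frequency-dependent cross-feed mentioned above can be sketched with a one-pole low-pass: each channel is band-limited and mixed at reduced level into the opposite channel, mimicking how a speaker's low frequencies diffract around the head to reach both ears. The cutoff and mix level here are illustrative assumptions; real crossfeed filters modelled on head acoustics are more elaborate:

```python
import numpy as np

def one_pole_lowpass(x, fs, cutoff_hz):
    """Simple one-pole IIR low-pass used to band-limit the cross-fed signal."""
    a = np.exp(-2 * np.pi * cutoff_hz / fs)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1 - a) * s + a * acc
        y[i] = acc
    return y

def crossfeed(left, right, fs=44_100, cutoff_hz=700, level=0.3):
    """Mix a low-passed copy of each channel into the other (values assumed)."""
    out_l = left + level * one_pole_lowpass(right, fs, cutoff_hz)
    out_r = right + level * one_pole_lowpass(left, fs, cutoff_hz)
    return out_l, out_r

# A hard-panned sound (all in the left channel) now also appears, attenuated
# and low-passed, in the right ear rather than being heard on one side only.
left, right = np.ones(100), np.zeros(100)
out_l, out_r = crossfeed(left, right)
print(out_r[-1] > 0)   # True: the right ear hears some of the left channel
```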
The headphone volume often has to compete with the background noise, especially in loud places such as subway stations, aircraft, and large crowds. Extended periods of exposure to high sound pressure levels created by headphones at high volume settings may be damaging; however, one hearing expert found that "fewer than 5 % of users select volume levels and listen frequently enough to risk hearing loss. '' Some manufacturers of portable music devices have attempted to introduce safety circuitry that limited output volume or warned the user when dangerous volume was being used, but the concept has been rejected by most of the buying public, which favors the personal choice of high volume. Koss introduced the "Safelite '' line of cassette players in 1983 with such a warning light. The line was discontinued two years later for lack of interest. The government of France has imposed a limit on all music players sold in the country: they must not be capable of producing more than 100dBA (the threshold of hearing damage during extended listening is 80 dB, and the threshold of pain, or theoretically of immediate hearing loss, is 130 dB). Motorcycle and other power - sport riders benefit by wearing foam earplugs when legal to do so to avoid excessive road, engine, and wind noise, but their ability to hear music and intercom speech is actually enhanced when doing so. The ear can normally detect 1 - billionth of an atmosphere of sound pressure level, hence it is incredibly sensitive. At very high sound pressure levels, muscles in the ear tighten the tympanic membrane and this leads to a small change in the geometry of the ossicles and stirrup that results in lower transfer of force to the oval window of the inner ear (the acoustic reflex). The risk of hearing damage also depends on the exposure time. The higher the volume, the faster hearing loss occurs. 
According to OSHA work safety guidelines, workers should be exposed to 90 dBA noise for a maximum of 8 hours to avoid hearing loss; an increase to just 95 dBA cuts the safe exposure to only 4 hours. But there is little specific consensus on the safe level of exposure. NIOSH has an 8-hour recommended exposure limit of 85 dBA and uses a stricter 3 dB exchange rate, so an increase of just 3 dB (rather than OSHA's 5 dB) halves the safe exposure time to 4 hours. Some studies have found that people are more likely to raise volumes to unsafe levels while performing strenuous exercise. A Finnish study recommended that exercisers should set their headphone volumes to half of their normal loudness and only use them for half an hour. Noise-cancelling headphones can be considered dangerous because of the reduced awareness the listener may have of their environment: they are so effective that a person may not be able to hear oncoming traffic or pay attention to people around them. There is also a general danger that music in headphones can distract the listener and lead to dangerous situations. The usual way of limiting sound volume on devices driving headphones is by limiting output power. This has the additional undesirable effect of being dependent on the efficiency of the headphones; a device producing the maximum allowed power may not produce adequate volume when paired with low-efficiency, high-impedance equipment, while the same amount of power can reach dangerous levels with very efficient earphones.
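The OSHA and NIOSH limits quoted above both follow an exchange-rate rule: the allowed daily exposure halves for every fixed step in level above the criterion. A small sketch (the helper name is my own):

```python
def permissible_hours(level_dba, criterion_dba=85.0, exchange_db=3.0):
    """Allowed daily exposure in hours before the noise dose is exceeded.
    Defaults follow NIOSH (85 dBA criterion, 3 dB exchange rate);
    OSHA instead uses a 90 dBA criterion with a 5 dB exchange rate."""
    return 8.0 / 2 ** ((level_dba - criterion_dba) / exchange_db)

print(permissible_hours(85))                                   # 8.0 h (NIOSH)
print(permissible_hours(88))                                   # 4.0 h: +3 dB halves it
print(permissible_hours(90, criterion_dba=90, exchange_db=5))  # 8.0 h (OSHA)
print(permissible_hours(95, criterion_dba=90, exchange_db=5))  # 4.0 h
```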
who plays cane on the young and the restless
Daniel Goddard (actor) - Wikipedia Daniel Richard Goddard (born 28 August 1971) is an Australian model and actor. He is known for his starring role as Dar on the syndicated action drama BeastMaster, based on the 1982 film The Beastmaster, and for playing Cane Ashby on the CBS daytime soap opera The Young and the Restless since 2007. He had nearly completed a degree in finance, but he transferred to the Ensemble Actors Studio without finishing the degree. After several theatrical appearances, Goddard landed his first television role in the Australian soap opera Home and Away, as Eric Phillips. Goddard then departed for Hollywood, modelling for Calvin Klein and Dolce & Gabbana. He landed the lead role of Dar on the series BeastMaster from 1999 to 2002, returning to his native country, Australia, to film the series. In January 2007 he joined the cast of the American soap The Young and the Restless, playing the role of Cane Ashby. Owing to his popularity with the female audience, on occasion Goddard has modelled on another CBS Daytime television series, The Price Is Right. Goddard was born in Sydney, New South Wales, Australia. Goddard began dating interior designer Rachael Marcus in August 1998, and they married on 3 February 2002. They have two sons, born February 2006 and December 2008.
stesen putra lrt wangsa maju (kj35) kuala lumpur federal territory of kuala lumpur
Wangsa Maju LRT station - wikipedia Wangsa Maju LRT station is an elevated rapid transit station in Wangsa Maju, Kuala Lumpur, Malaysia, forming part of the Kelana Jaya Line (formerly known as PUTRA). The station was opened on June 1, 1999, as part of the line's second segment encompassing 12 stations between Kelana Jaya station and Terminal PUTRA and an underground line. Wangsa Maju station is the third-last station northwards to Gombak. The station is situated directly within the northern Kuala Lumpur suburb of Wangsa Maju. The station, located along the main thoroughfare of Jalan 1/27A (Malay; English: 1/27A Road) running from the northwest to the southeast, is wedged between two residential estates: Section 1 of Wangsa Maju to the southwest and Desa Setapak (Setapak Countryside) to the northwest, with Section 2 of Wangsa Maju located further northwest. In addition to accessibility from Jalan 1/27A, the station is also connected via Jalan 16/27B (16/27B Road), Desa Setapak's residential road. Due to its proximity to various shopping centres (Alpha Angle and Aeon Big) and two higher education institutions, TARC and UTAR, the station is usually busy during the weekdays. On June 3, 2007, two men wearing full-face motorcycle helmets and wielding parangs robbed the Wangsa Maju station at 10:10 pm (MST) and took RM7,000 from its ticket counter. No injuries were reported. The robbery, having taken place at a mass transit station, is the first of its kind in the country.
when did brazil win the soccer world cup
Brazil at the FIFA World Cup - wikipedia This is a record of Brazil 's results at the FIFA World Cup. The tournament consists of two parts, the qualification phase and the final phase (officially called the World Cup Finals). The qualification phase, which currently take place over the three years preceding the Finals, is used to determine which teams qualify for the Finals. The current format of the Finals involves 32 teams competing for the title, at venues within the host nation (or nations) over a period of about a month. The World Cup Finals is the most widely viewed sporting event in the world, with an estimated 715.1 million people watching the 2006 tournament final. Brazil is the most successful national team in the history of the World Cup, having won five titles, earning second - place, third - place and fourth - place finishes twice each. Brazil is one of the countries besides Argentina, Spain and Germany to win a FIFA World Cup away from its continent (Sweden 1958, Mexico 1970, USA 1994 and South Korea / Japan 2002). Brazil is the only national team to have played in all FIFA World Cup editions without any absence nor need for playoffs. Brazil has also the best overall performance in World Cup history in both proportional and absolute terms with a record of 70 victories in 104 matches played, 119 goal difference, 227 points and only 17 losses. Traditionally, Brazil 's greatest rival is Argentina. The two countries have met each other four times in the history of the FIFA World Cup, with two wins for Brazil (West Germany 1974 and Spain 1982), one for Argentina (Italy 1990) and a draw (Argentina 1978). The country that played most against Brazil in the finals is Sweden: 7 times, with five wins for Brazil and two draws. 
Three other historical rivals are Italy, which lost two World Cup finals against Brazil and eliminated the Brazilians in two tournaments (France 1938 and Spain 1982); France, which has eliminated Brazil on three occasions (Mexico 1986, France 1998 and Germany 2006); and the Netherlands, which has eliminated Brazil at two of their five meetings (West Germany 1974 and South Africa 2010) and won the third-place match in Brazil 2014. * Draws include knockout matches decided on penalty kicks. Brazil's title-winning campaigns included: 1970 (coach Mário Zagallo, captain Carlos Alberto; Pelé, Gérson, Jairzinho, Carlos Alberto), 1994 (coach Carlos Alberto Parreira, captain Dunga) and 2002 (coach Luiz Felipe Scolari, captain Cafu; Ronaldo).
who are the judges on america's got talent this season
America 's Got Talent (season 13) - wikipedia Season thirteen of the reality competition series America 's Got Talent premiered on May 29, 2018, on NBC. Howie Mandel, Mel B, Heidi Klum and Simon Cowell returned as judges for their respective ninth, sixth, sixth, and third seasons. Meanwhile, Tyra Banks returned for her second season as host. The live shows return to the Dolby Theatre, beginning August 14, 2018. AGT is co-produced by FremantleMedia North America and Syco Entertainment, Cowell 's company. Dunkin Donuts is a sponsor for a fourth consecutive season. The season had preliminary open call auditions in Orlando, Cincinnati, Savannah, Milwaukee, Houston, Las Vegas, New York City, Nashville, and Los Angeles. As in years past, prospective contestants could also submit online auditions. Judges ' auditions were taped from March 6 - 23 at the Pasadena Civic Auditorium in Los Angeles. The golden buzzer will return for its fifth consecutive season. Any act that receives the golden buzzer during the judges ' auditions will be sent directly to the live shows and will not compete in the Judge Cuts round. In the first episode, Tyra Banks was the first to press the golden buzzer for acrobatic and dance group Zurcaroh. Simon Cowell was second to press it for 40 - year - old singer Michael Ketterer. Howie Mandel was third to press it for 13 year old singer Courtney Hadwin. This season is being aired in Asia within 48 hours of being broadcast in the US. It airs on AXN starting May 31, 2018 (UTC + 8). In the United Kingdom and Ireland the first episode was brought to Netflix on May 31. In Singapore, the show was broadcast on MediaCorp Channel 5 every Thursdays starting May 31, 2018.
how many ships and planes were destroyed at pearl harbor
Attack on Pearl Harbor - wikipedia Coordinates: 21°22′N 157°57′W. The result was a major Japanese tactical victory that precipitated the entrance of the United States into World War II, a conflict fought across Southeast Asia, the Southwest Pacific, North America, Japan and Manchuria. The attack on Pearl Harbor was a surprise military strike by the Imperial Japanese Navy Air Service against the United States naval base at Pearl Harbor, Hawaii Territory, on the morning of December 7, 1941. The attack, also known as the Battle of Pearl Harbor, led to the United States' entry into World War II. The Japanese military leadership referred to the attack as the Hawaii Operation and Operation AI, and as Operation Z during its planning. Japan intended the attack as a preventive action to keep the U.S. Pacific Fleet from interfering with military actions it planned in Southeast Asia against overseas territories of the United Kingdom, the Netherlands, and the United States. Over the next seven hours there were coordinated Japanese attacks on the U.S.-held Philippines, Guam and Wake Island and on the British Empire in Malaya, Singapore, and Hong Kong. The attack commenced at 7:48 a.m. Hawaiian Time (18:18 UTC). The base was attacked by 353 Imperial Japanese aircraft (including fighters, level and dive bombers, and torpedo bombers) in two waves, launched from six aircraft carriers. All eight U.S. Navy battleships were damaged, with four sunk. All but the USS Arizona were later raised, and six were returned to service and went on to fight in the war. The Japanese also sank or damaged three cruisers, three destroyers, an anti-aircraft training ship, and one minelayer. 188 U.S. aircraft were destroyed; 2,403 Americans were killed and 1,178 others were wounded.
Important base installations such as the power station, dry dock, shipyard, maintenance, and fuel and torpedo storage facilities, as well as the submarine piers and headquarters building (also home of the intelligence section), were not attacked. Japanese losses were light: 29 aircraft and five midget submarines lost, and 64 servicemen killed. One Japanese sailor, Kazuo Sakamaki, was captured. The surprise attack came as a profound shock to the American people and led directly to the American entry into World War II in both the Pacific and European theaters. The following day, December 8, the United States declared war on Japan, and several days later, on December 11, Germany and Italy declared war on the U.S. The U.S. responded with a declaration of war against Germany and Italy. Domestic support for non-interventionism, which had been fading since the Fall of France in 1940, disappeared. There were numerous historical precedents for unannounced military action by Japan, but the lack of any formal warning, particularly while negotiations were still apparently ongoing, led President Franklin D. Roosevelt to proclaim December 7, 1941, "a date which will live in infamy ''. Because the attack happened without a declaration of war and without explicit warning, the attack on Pearl Harbor was later judged in the Tokyo Trials to be a war crime. War between Japan and the United States had been a possibility that each nation had been aware of (and developed contingency plans for) since the 1920s, though tensions did not begin to grow seriously until Japan 's 1931 invasion of Manchuria. Over the next decade, Japan continued to expand into China, leading to all - out war between those countries in 1937. Japan spent considerable effort trying to isolate China and achieve sufficient resource independence to attain victory on the mainland; the "Southern Operation '' was designed to assist these efforts. 
From December 1937, events such as the Japanese attack on USS Panay, the Allison incident, and the Nanking Massacre (the International Military Tribunal of the Far East concluded that more than 200,000 Chinese non-combatants were killed in indiscriminate massacres, though other estimates have ranged from 40,000 to more than 300,000) swung public opinion in the West sharply against Japan. Fearing Japanese expansion, the United States, the United Kingdom, and France provided loan assistance for war supply contracts to China. In 1940, Japan invaded French Indochina in an effort to control supplies reaching China. The United States halted shipments of airplanes, parts, machine tools, and aviation gasoline to Japan, which was perceived by Japan as an unfriendly act. The U.S. did not stop oil exports to Japan at that time in part because prevailing sentiment in Washington was that such an action would be an extreme step that Japan would likely consider a provocation, given Japanese dependence on U.S. oil. Early in 1941, President Franklin D. Roosevelt moved the Pacific Fleet to Hawaii from its previous base in San Diego and ordered a military buildup in the Philippines in the hope of discouraging Japanese aggression in the Far East. Because the Japanese high command was (mistakenly) certain that any attack on the UK 's Southeast Asian colonies, including Singapore, would bring the U.S. into war, a devastating preventive strike appeared to be the only way to avoid U.S. naval interference. An invasion of the Philippines was also considered necessary by Japanese war planners. The U.S. War Plan Orange had envisioned defending the Philippines with a 40,000 - man elite force. This was opposed by Douglas MacArthur, who felt that he would need a force ten times that size, and was never implemented. By 1941, U.S. 
planners anticipated abandonment of the Philippines at the outbreak of war and orders to that effect were given in late 1941 to Admiral Thomas Hart, commander of the Asiatic Fleet. The U.S. ceased oil exports to Japan in July 1941, following Japanese expansion into French Indochina after the Fall of France, in part because of new American restrictions on domestic oil consumption. This in turn caused the Japanese to proceed with plans to take the Dutch East Indies, an oil - rich territory. On August 17, Roosevelt warned Japan that the U.S. was prepared to take steps against Japan if it attacked "neighboring countries ''. The Japanese were faced with the option of either withdrawing from China and losing face or seizing and securing new sources of raw materials in the resource - rich, European - controlled colonies of Southeast Asia. Japan and the U.S. engaged in negotiations during the course of 1941 in an effort to improve relations. During these negotiations, Japan offered to withdraw from most of China and Indochina when peace was made with the Nationalist government, adopt an independent interpretation of the Tripartite Pact, and not to discriminate in trade provided all other countries reciprocated. Washington rejected these proposals. Japanese Prime Minister Konoye then offered to meet with Roosevelt, but Roosevelt insisted on coming to an agreement before any meeting. The U.S. ambassador to Japan repeatedly urged Roosevelt to accept the meeting, warning that it was the only way to preserve the conciliatory Konoye government and peace in the Pacific. His recommendation was not acted upon. The Konoye government collapsed the following month when the Japanese military refused to agree to the withdrawal of all troops from China. 
Japan 's final proposal, on November 20, offered to withdraw its forces from southern Indochina and not to launch any attacks in Southeast Asia provided that the U.S., the UK, and the Netherlands ceased aiding China and lifted their sanctions against Japan. The American counter-proposal of November 26 (November 27 in Japan) (the Hull note) required Japan to evacuate all of China without conditions and conclude non-aggression pacts with Pacific powers. However, the day before the Hull note was delivered, on November 26 in Japan, the main Japanese attack fleet had already left port for Pearl Harbor. Preliminary planning for an attack on Pearl Harbor to protect the move into the "Southern Resource Area '' (the Japanese term for the Dutch East Indies and Southeast Asia generally) had begun very early in 1941 under the auspices of Admiral Isoroku Yamamoto, then commanding Japan 's Combined Fleet. He won assent to formal planning and training for an attack from the Imperial Japanese Navy General Staff only after much contention with Naval Headquarters, including a threat to resign his command. Full - scale planning was underway by early spring 1941, primarily by Rear Admiral Ryūnosuke Kusaka, with assistance from Captain Minoru Genda and Yamamoto 's Deputy Chief of Staff, Captain Kameto Kuroshima. The planners studied the 1940 British air attack on the Italian fleet at Taranto intensively. Over the next several months, pilots were trained, equipment was adapted, and intelligence was collected. Despite these preparations, Emperor Hirohito did not approve the attack plan until November 5, after the third of four Imperial Conferences called to consider the matter. Final authorization was not given by the emperor until December 1, after a majority of Japanese leaders advised him the "Hull Note '' would "destroy the fruits of the China incident, endanger Manchukuo and undermine Japanese control of Korea. '' By late 1941, many observers believed that hostilities between the U.S. 
and Japan were imminent. A Gallup poll just before the attack on Pearl Harbor found that 52 % of Americans expected war with Japan, 27 % did not, and 21 % had no opinion. While U.S. Pacific bases and facilities had been placed on alert on many occasions, U.S. officials doubted Pearl Harbor would be the first target; instead, they expected the Philippines would be attacked first. This presumption was due to the threat that the air bases throughout the country and the naval base at Manila posed to sea lanes, as well as to the shipment of supplies to Japan from territory to the south. They also incorrectly believed that Japan was not capable of mounting more than one major naval operation at a time. The Japanese attack had several major aims. First, it intended to destroy important American fleet units, thereby preventing the Pacific Fleet from interfering with Japanese conquest of the Dutch East Indies and Malaya and to enable Japan to conquer Southeast Asia without interference. Second, it was hoped to buy time for Japan to consolidate its position and increase its naval strength before shipbuilding authorized by the 1940 Vinson - Walsh Act erased any chance of victory. Third, to deliver a blow to America 's ability to mobilize its forces in the Pacific, battleships were chosen as the main targets, since they were the prestige ships of any navy at the time. Finally, it was hoped that the attack would undermine American morale such that the U.S. government would drop its demands contrary to Japanese interests, and would seek a compromise peace with Japan. Striking the Pacific Fleet at anchor in Pearl Harbor carried two distinct disadvantages: the targeted ships would be in very shallow water, so it would be relatively easy to salvage and possibly repair them; and most of the crews would survive the attack, since many would be on shore leave or would be rescued from the harbor. 
A further important disadvantage -- this of timing, and known to the Japanese -- was the absence from Pearl Harbor of all three of the U.S. Pacific Fleet 's aircraft carriers (Enterprise, Lexington, and Saratoga). IJN top command was so imbued with Admiral Mahan 's "Decisive battle '' doctrine -- especially that of destroying the maximum number of battleships -- that, despite these concerns, Yamamoto decided to press ahead. Japanese confidence in their ability to achieve a short, victorious war also meant other targets in the harbor, especially the navy yard, oil tank farms, and submarine base, were ignored, since -- by their thinking -- the war would be over before the influence of these facilities would be felt. On November 26, 1941, a Japanese task force (the Striking Force) of six aircraft carriers -- Akagi, Kaga, Sōryū, Hiryū, Shōkaku, and Zuikaku -- departed northern Japan en route to a position northwest of Hawaii, intending to launch its 408 aircraft to attack Pearl Harbor: 360 for the two attack waves and 48 on defensive combat air patrol (CAP), including nine fighters from the first wave. The first wave was to be the primary attack, while the second wave was to attack carriers as its first objective and cruisers as its second, with battleships as the third target. The first wave carried most of the weapons to attack capital ships, mainly specially adapted Type 91 aerial torpedoes which were designed with an anti-roll mechanism and a rudder extension that let them operate in shallow water. The aircrews were ordered to select the highest value targets (battleships and aircraft carriers) or, if these were not present, any other high value ships (cruisers and destroyers). First wave dive bombers were to attack ground targets. Fighters were ordered to strafe and destroy as many parked aircraft as possible to ensure they did not get into the air to intercept the bombers, especially in the first wave. 
When the fighters ' fuel got low they were to refuel at the aircraft carriers and return to combat. Fighters were to serve CAP duties where needed, especially over U.S. airfields. Before the attack commenced, two reconnaissance aircraft launched from cruisers Chikuma and Tone were sent to scout over Oahu and Maui and report on U.S. fleet composition and location. Reconnaissance aircraft flights risked alerting the U.S., and were not necessary. U.S. fleet composition and preparedness information in Pearl Harbor was already known due to the reports of the Japanese spy Takeo Yoshikawa. A report of the absence of the U.S. fleet in Lahaina anchorage off Maui was received from the fleet submarine I - 72. Another four scout planes patrolled the area between the Japanese carrier force (the Kidō Butai) and Niihau, to detect any counterattack. Fleet submarines I - 16, I - 18, I - 20, I - 22, and I - 24 each embarked a Type A midget submarine for transport to the waters off Oahu. The five I - boats left Kure Naval District on November 25, 1941. On December 6, they came to within 10 nmi (19 km; 12 mi) of the mouth of Pearl Harbor and launched their midget subs at about 01: 00 on December 7. At 03: 42 Hawaiian Time, the minesweeper Condor spotted a midget submarine periscope southwest of the Pearl Harbor entrance buoy and alerted the destroyer Ward. The midget may have entered Pearl Harbor. However, Ward sank another midget submarine at 06: 37 in the first American shots in the Pacific Theater. A midget submarine on the north side of Ford Island missed the seaplane tender Curtiss with her first torpedo and missed the attacking destroyer Monaghan with her other one before being sunk by Monaghan at 08: 43. A third midget submarine, Ha - 19, grounded twice, once outside the harbor entrance and again on the east side of Oahu, where it was captured on December 8. 
Ensign Kazuo Sakamaki swam ashore and was captured by Hawaii National Guard Corporal David Akui, becoming the first Japanese prisoner of war. A fourth had been damaged by a depth charge attack and was abandoned by its crew before it could fire its torpedoes. Japanese forces received a radio message from a midget submarine at 00: 41 on December 8 claiming damage to one or more large warships inside Pearl Harbor. In 1992, 2000, and 2001, Hawaii Undersea Research Laboratory 's submersibles found the wreck of the fifth midget submarine lying in three parts outside Pearl Harbor. The wreck was in the debris field where much surplus U.S. equipment was dumped after the war, including vehicles and landing craft. Both of its torpedoes were missing. This correlates with reports of two torpedoes fired at the light cruiser St. Louis at 10: 04 at the entrance of Pearl Harbor, and a possible torpedo fired at destroyer Helm at 08: 21. The attack took place before any formal declaration of war was made by Japan, but this was not Admiral Yamamoto 's intention. He originally stipulated that the attack should not commence until thirty minutes after Japan had informed the United States that peace negotiations were at an end. However, the attack began before the notice could be delivered. Tokyo transmitted the 5000 - word notification (commonly called the "14 - Part Message '') in two blocks to the Japanese Embassy in Washington. Transcribing the message took too long for the Japanese ambassador to deliver it on schedule; in the event, it was not presented until more than an hour after the attack began. (In fact, U.S. code breakers had already deciphered and translated most of the message hours before he was scheduled to deliver it.) The final part is sometimes described as a declaration of war. 
While it was viewed by a number of senior U.S. government and military officials as a very strong indicator that negotiations were likely to be terminated and that war might break out at any moment, it neither declared war nor severed diplomatic relations. A declaration of war was printed on the front page of Japan 's newspapers in the evening edition of December 8, but not delivered to the U.S. government until the day after the attack. For decades, conventional wisdom held that Japan attacked without first formally breaking diplomatic relations only because of accidents and bumbling that delayed the delivery of a document hinting at war to Washington. In 1999, however, Takeo Iguchi, a professor of law and international relations at International Christian University in Tokyo, discovered documents that pointed to a vigorous debate inside the government over how, and indeed whether, to notify Washington of Japan 's intention to break off negotiations and start a war, including a December 7 entry in the war diary saying, "(O)ur deceptive diplomacy is steadily proceeding toward success. '' Of this, Iguchi said, "The diary shows that the army and navy did not want to give any proper declaration of war, or indeed prior notice even of the termination of negotiations... and they clearly prevailed. '' In any event, even if the Japanese had decoded and delivered the 14 - Part Message before the beginning of the attack, it would not have constituted either a formal break of diplomatic relations or a declaration of war. The closing passage of the message read: Thus the earnest hope of the Japanese Government to adjust Japanese - American relations and to preserve and promote the peace of the Pacific through cooperation with the American Government has finally been lost. The first attack wave of 183 planes was launched north of Oahu, led by Commander Mitsuo Fuchida. Six planes failed to launch due to technical difficulties. 
As the first wave approached Oahu, it was detected by the U.S. Army SCR - 270 radar at Opana Point near the island 's northern tip. This post had been in training mode for months, but was not yet operational. The operators, Privates George Elliott Jr. and Joseph Lockard, reported a target, but Lieutenant Kermit A. Tyler, a newly assigned officer at the thinly manned Intercept Center, presumed it was the scheduled arrival of six B - 17 bombers from California. The Japanese planes were approaching from a direction very close (only a few degrees difference) to that of the bombers, and while the operators had never seen a formation as large on radar, they neglected to tell Tyler of its size. Tyler, for security reasons, could not tell the operators of the six B - 17s that were due (even though this was widely known). As the first wave planes approached Oahu, they encountered and shot down several U.S. aircraft. At least one of these radioed a somewhat incoherent warning. Other warnings from ships off the harbor entrance were still being processed or awaiting confirmation when the attacking planes began bombing and strafing. Nevertheless, it is not clear any warnings would have had much effect even if they had been interpreted correctly and much more promptly. The results the Japanese achieved in the Philippines were essentially the same as at Pearl Harbor, though MacArthur had almost nine hours ' warning that the Japanese had already attacked Pearl Harbor. The air portion of the attack began at 7: 48 a.m. Hawaiian Time (3: 18 a.m. December 8 Japanese Standard Time, as kept by ships of the Kido Butai), with the attack on Kaneohe. A total of 353 Japanese planes in two waves reached Oahu. Slow, vulnerable torpedo bombers led the first wave, exploiting the first moments of surprise to attack the most important ships present (the battleships), while dive bombers attacked U.S. air bases across Oahu, starting with Hickam Field, the largest, and Wheeler Field, the main U.S. 
Army Air Forces fighter base. The 171 planes in the second wave attacked the Army Air Forces ' Bellows Field near Kaneohe on the windward side of the island, and Ford Island. The only aerial opposition came from a handful of P - 36 Hawks, P - 40 Warhawks, and some SBD Dauntless dive bombers from the carrier Enterprise. In the first wave attack, about eight of the forty - nine 800 kg (1760 lb) armor - piercing bombs dropped hit their intended battleship targets. At least two of those bombs broke up on impact, another detonated before penetrating an unarmored deck, and one was a dud. Thirteen of the forty torpedoes hit battleships, and four torpedoes hit other ships. Men aboard U.S. ships awoke to the sounds of alarms, bombs exploding, and gunfire; bleary - eyed, they dressed as they ran to General Quarters stations. (The famous message, "Air raid Pearl Harbor. This is not drill. '', was sent from the headquarters of Patrol Wing Two, the first senior Hawaiian command to respond.) The defenders were largely unprepared. Ammunition lockers were locked, aircraft were parked wingtip to wingtip in the open to prevent sabotage, and guns were unmanned (none of the Navy 's 5 "/ 38s, only a quarter of its machine guns, and only four of 31 Army batteries got into action). Despite this low alert status, many American military personnel responded effectively during the attack. Ensign Joe Taussig Jr., aboard Nevada, commanded the ship 's antiaircraft guns and, though severely wounded, remained at his post. Lt. Commander F.J. Thomas commanded Nevada in the captain 's absence and got her under way until the ship was grounded at 9: 10 a.m. One of the destroyers, Aylwin, got underway with only four officers aboard, all ensigns, none with more than a year 's sea duty; she operated at sea for 36 hours before her commanding officer managed to get back aboard. 
Captain Mervyn Bennion, commanding West Virginia, led his men until he was cut down by fragments from a bomb which hit Tennessee, moored alongside. The second planned wave consisted of 171 planes: 54 B5Ns, 81 D3As, and 36 A6Ms, commanded by Lieutenant - Commander Shigekazu Shimazaki. Four planes failed to launch because of technical difficulties. The second wave was divided into three groups. One was tasked to attack Kāne ʻohe, the rest Pearl Harbor proper. The separate sections arrived at the attack point almost simultaneously from several directions. Ninety minutes after it began, the attack was over. 2,008 sailors were killed and 710 others wounded; 218 soldiers and airmen (who were part of the Army until the independent U.S. Air Force was formed in 1947) were killed and 364 wounded; 109 marines were killed and 69 wounded; and 68 civilians were killed and 35 wounded. In total, 2,403 Americans died and 1,178 were wounded. Eighteen ships were sunk or run aground, including five battleships. All of the Americans killed or wounded during the attack were legally non-combatants, since there was no state of war when the attack occurred. Of the American fatalities, nearly half were due to the explosion of Arizona 's forward magazine after it was hit by a modified 16 - inch (410 mm) shell. Already damaged by a torpedo and on fire amidships, Nevada attempted to exit the harbor. She was targeted by many Japanese bombers as she got under way and sustained more hits from 250 lb (113 kg) bombs, which started further fires. She was deliberately beached to avoid blocking the harbor entrance. California was hit by two bombs and two torpedoes. The crew might have kept her afloat, but were ordered to abandon ship just as they were raising power for the pumps. Burning oil from Arizona and West Virginia drifted down on her, and probably made the situation look worse than it was. The disarmed target ship Utah was holed twice by torpedoes. 
West Virginia was hit by seven torpedoes, the seventh tearing away her rudder. Oklahoma was hit by four torpedoes, the last two above her belt armor, which caused her to capsize. Maryland was hit by two of the converted 16 '' shells, but neither caused serious damage. Although the Japanese concentrated on battleships (the largest vessels present), they did not ignore other targets. The light cruiser Helena was torpedoed, and the concussion from the blast capsized the neighboring minelayer Oglala. Two destroyers in dry dock, Cassin and Downes were destroyed when bombs penetrated their fuel bunkers. The leaking fuel caught fire; flooding the dry dock in an effort to fight fire made the burning oil rise, and both were burned out. Cassin slipped from her keel blocks and rolled against Downes. The light cruiser Raleigh was holed by a torpedo. The light cruiser Honolulu was damaged, but remained in service. The repair vessel Vestal, moored alongside Arizona, was heavily damaged and beached. The seaplane tender Curtiss was also damaged. The destroyer Shaw was badly damaged when two bombs penetrated her forward magazine. Of the 402 American aircraft in Hawaii, 188 were destroyed and 159 damaged, 155 of them on the ground. Almost none were actually ready to take off to defend the base. Eight Army Air Forces pilots managed to get airborne during the attack and six were credited with downing at least one Japanese aircraft during the attack: 1st Lt. Lewis M. Sanders, 2nd Lt. Philip M. Rasmussen, 2nd Lt. Kenneth M. Taylor, 2nd Lt. George S. Welch, 2nd Lt. Harry W. Brown, and 2nd Lt. Gordon H. Sterling Jr. Sterling was shot down by Lt. Fujita over Kaneohe Bay and is listed as Body Not Recovered (not Missing In Action). Lt. John L. Dains was killed by friendly fire returning from a victory over Kaawa. Of 33 PBYs in Hawaii, 24 were destroyed, and six others damaged beyond repair. (The three on patrol returned undamaged.) Friendly fire brought down some U.S. 
planes as well, including five from an inbound flight from Enterprise. Japanese attacks on barracks killed additional personnel. At the time of the attack, nine civilian aircraft were flying in the vicinity of Pearl Harbor. Of these, three were shot down. Fifty - five Japanese airmen and nine submariners were killed in the attack, and one was captured. Of Japan 's 414 available planes, 29 were lost during the battle (nine in the first attack wave, 20 in the second), with another 74 damaged by antiaircraft fire from the ground. Several Japanese junior officers, including Fuchida and Genda, urged Nagumo to carry out a third strike in order to destroy as much of Pearl Harbor 's fuel and torpedo storage, maintenance, and dry dock facilities as possible. Genda, who had unsuccessfully advocated for invading Hawaii after the air attack, believed that without an invasion three strikes were necessary to disable the base as much as possible. The captains of the other five carriers in the task force reported they were willing and ready to carry out a third strike. Military historians have suggested the destruction of these shore facilities would have hampered the U.S. Pacific Fleet far more seriously than the loss of its battleships. If they had been wiped out, "serious (American) operations in the Pacific would have been postponed for more than a year ''; according to Admiral Chester W. Nimitz, later Commander in Chief of the Pacific Fleet, "it would have prolonged the war another two years. '' Nagumo, however, decided to withdraw for several reasons. At a conference aboard Yamato the following morning, Yamamoto initially supported Nagumo. In retrospect, sparing the vital dockyards, maintenance shops, and oil depots meant the U.S. could respond relatively quickly to Japanese activities in the Pacific. Yamamoto later regretted Nagumo 's decision to withdraw and categorically stated it had been a great mistake not to order a third strike. 
Seventeen ships were damaged or lost in the attack, of which fourteen were repaired and returned to service. After a systematic search for survivors, formal salvage operations began. Captain Homer N. Wallin, Material Officer for Commander, Battle Force, U.S. Pacific Fleet, was immediately ordered to lead salvage operations. "Within a short time I was relieved of all other duties and ordered to full time work as Fleet Salvage Officer. '' Around Pearl Harbor, divers from the Navy (shore and tenders), the Naval Shipyard, and civilian contractors (Pacific Bridge and others) began work on the ships that could be refloated. They patched holes, cleared debris, and pumped water out of ships. Navy divers worked inside the damaged ships. Within six months, five battleships and two cruisers were patched or refloated so they could be sent to shipyards in Pearl Harbor and on the mainland for extensive repair. Intensive salvage operations continued for another year, a total of some 20,000 man - hours under water. Oklahoma, while successfully raised, was never repaired, and capsized while under tow to the mainland in 1947. Arizona and the target ship Utah were too heavily damaged for salvage, though much of their armament and equipment was removed and put to use aboard other vessels. Today, the two hulks remain where they were sunk, with Arizona becoming a war memorial. In the wake of the attack, 15 Medals of Honor, 51 Navy Crosses, 53 Silver Stars, four Navy and Marine Corps Medals, one Distinguished Flying Cross, four Distinguished Service Crosses, one Distinguished Service Medal, and three Bronze Star Medals were awarded to the American servicemen who distinguished themselves in combat at Pearl Harbor. Additionally, a special military award, the Pearl Harbor Commemorative Medal, was later authorized for all military veterans of the attack. 
The day after the attack, Roosevelt delivered his famous Infamy Speech to a Joint Session of Congress, calling for a formal declaration of war on the Empire of Japan. Congress obliged his request less than an hour later. On December 11, Germany and Italy declared war on the United States, even though the Tripartite Pact did not require it. Congress issued a declaration of war against Germany and Italy later that same day. The UK actually declared war on Japan nine hours before the U.S. did, partially due to Japanese attacks on Malaya, Singapore and Hong Kong, and partially due to Winston Churchill 's promise to declare war "within the hour '' of a Japanese attack on the United States. The attack was an initial shock to all the Allies in the Pacific Theater. Further losses compounded the alarming setback. Japan attacked the Philippines hours later (because of the time difference, it was December 8 in the Philippines). Only three days after the attack on Pearl Harbor, the battleships Prince of Wales and Repulse were sunk off the coast of Malaya, causing British Prime Minister Winston Churchill later to recollect "In all the war I never received a more direct shock. As I turned and twisted in bed the full horror of the news sank in upon me. There were no British or American capital ships in the Indian Ocean or the Pacific except the American survivors of Pearl Harbor who were hastening back to California. Over this vast expanse of waters Japan was supreme and we everywhere were weak and naked ''. Throughout the war, Pearl Harbor was frequently used in American propaganda. One further consequence of the attack on Pearl Harbor and its aftermath (notably the Niihau incident) was that Japanese American residents and citizens were relocated to nearby Japanese - American internment camps. 
Within hours of the attack, hundreds of Japanese American leaders were rounded up and brought to high - security camps such as Sand Island at the mouth of Honolulu harbor and Kilauea Military Camp on the island of Hawaii. Eventually, more than 110,000 Japanese Americans, nearly all of whom lived on the West Coast, were forced into interior camps, but in Hawaii, where the 150,000 - plus Japanese Americans composed over one - third of the population, only 1,200 to 1,800 were interned. The attack also had international consequences. The Canadian province of British Columbia, bordering the Pacific Ocean, had long had a large population of Japanese immigrants and their Japanese Canadian descendants. Pre-war tensions were exacerbated by the Pearl Harbor attack, leading to a reaction from the Government of Canada. On February 24, 1942, Order - in - Council P.C. no. 1486 was passed under the War Measures Act allowing for the forced removal of any and all Canadians of Japanese descent from British Columbia, as well as prohibiting them from returning to the province. On March 4, regulations under the Act were adopted to evacuate Japanese - Canadians. As a result, 12,000 were interned in interior camps, 2,000 were sent to road camps, and another 2,000 were forced to work in the prairies at sugar beet farms. The Japanese planners had determined that some means was required for rescuing fliers whose aircraft were too badly damaged to return to the carriers. The island of Niihau, only 30 minutes ' flying time from Pearl Harbor, was designated as the rescue point. The Zero flown by Petty Officer Shigenori Nishikaichi of Hiryu was damaged in the attack on Wheeler, so he flew to the rescue point on Niihau. The aircraft was further damaged on landing. Nishikaichi was helped from the wreckage by one of the native Hawaiians, who, aware of the tension between the United States and Japan, took the pilot 's maps and other documents. 
The island 's residents had no telephones or radio and were completely unaware of the attack on Pearl Harbor. Nishikaichi enlisted the support of three Japanese - American residents in an attempt to recover the documents. During the ensuing struggles, Nishikaichi was killed and a Hawaiian civilian was wounded; one collaborator committed suicide, and his wife and the third collaborator were sent to prison. The ease with which the local ethnic Japanese residents had apparently gone to the assistance of Nishikaichi was a source of concern for many, and tended to support those who believed that local Japanese could not be trusted. Admiral Hara Tadaichi summed up the Japanese result by saying, "We won a great tactical victory at Pearl Harbor and thereby lost the war. '' To a similar effect, see Isoroku Yamamoto 's alleged "sleeping giant '' quote. While the attack accomplished its intended objective, it turned out to be largely unnecessary. Unbeknownst to Yamamoto, who conceived the original plan, the U.S. Navy had decided as far back as 1935 to abandon ' charging ' across the Pacific towards the Philippines in response to an outbreak of war (in keeping with the evolution of Plan Orange). The U.S. instead adopted "Plan Dog '' in 1940, which emphasized keeping the IJN out of the eastern Pacific and away from the shipping lanes to Australia, while the U.S. concentrated on defeating Nazi Germany. Fortunately for the United States, the American aircraft carriers were untouched by the Japanese attack; otherwise the Pacific Fleet 's ability to conduct offensive operations would have been crippled for a year or more (given no diversions from the Atlantic Fleet). As it was, the elimination of the battleships left the U.S. Navy with no choice but to rely on its aircraft carriers and submarines -- the very weapons with which the U.S. Navy halted and eventually reversed the Japanese advance. 
While six of the eight battleships were repaired and returned to service, their relatively low speed and high fuel consumption limited their deployment, and they served mainly in shore bombardment roles (their only major action being the Battle of Surigao Strait in October 1944). A major flaw of Japanese strategic thinking was a belief that the ultimate Pacific battle would be fought by battleships, in keeping with the doctrine of Captain Alfred Thayer Mahan. As a result, Yamamoto (and his successors) hoarded battleships for a "decisive battle '' that never happened. The Japanese confidence in their ability to achieve a short, victorious war meant that they neglected Pearl Harbor 's navy repair yards, oil tank farms, submarine base, and old headquarters building. All of these targets were omitted from Genda 's list, yet they proved more important than any battleship to the American war efforts in the Pacific. The survival of the repair shops and fuel depots allowed Pearl Harbor to maintain logistical support to the U.S. Navy 's operations, such as the Battles of Coral Sea and Midway. It was submarines that immobilized the Imperial Japanese Navy 's heavy ships and brought Japan 's economy to a virtual standstill by crippling the transportation of oil and raw materials: by the end of 1942, import of raw materials was cut to half of what it had been, "to a disastrous ten million tons '', while oil import "was almost completely stopped ''. Lastly, the basement of the Old Administration Building was the home of the cryptanalytic unit which contributed significantly to the Midway ambush and the Submarine Force 's success. Ever since the Japanese attack, there has been debate as to how and why the United States had been caught unaware, and how much and when American officials knew of Japanese plans and related topics. Military officers including Gen. Billy Mitchell had pointed out the vulnerability of Pearl to air attack. 
At least two Naval War games, one in 1932 and another in 1936, proved that Pearl was vulnerable to such an attack. Admiral James Richardson was removed from command shortly after protesting President Roosevelt 's decision to move the bulk of the Pacific fleet to Pearl Harbor. The decisions of military and political leadership to ignore these warnings have contributed to conspiracy theories. Several writers, including journalist Robert Stinnett and former United States rear admiral Robert Alfred Theobald, have argued that various parties high in the U.S. and British governments knew of the attack in advance and may even have let it happen or encouraged it in order to force the U.S. into war via the so - called "back door ''. However, this conspiracy theory is rejected by mainstream historians.