when did the ball drop in new york start
Times Square ball - wikipedia The Times Square Ball is a time ball located in New York City 's Times Square. Located on the roof of One Times Square, the ball is a prominent part of a New Year 's Eve celebration in Times Square commonly referred to as the ball drop, where the ball descends 141 feet (43 m) in 60 seconds down a specially designed flagpole, beginning at 11: 59: 00 p.m. ET, and resting at midnight to signal the start of the new year. In recent years, the festivities have been preceded by live entertainment, including performances by musicians. The event was first organized by Adolph Ochs, owner of The New York Times newspaper, as a successor to a series of New Year 's Eve fireworks displays he held at the building to promote its status as the new headquarters of the Times, while the ball itself was designed by Artkraft Strauss. First held on December 31, 1907, to welcome 1908, the ball drop has been held annually since, except in 1942 and 1943 in observance of wartime blackouts. The ball 's design has been updated over the years to reflect improvements in lighting technology; the ball was initially constructed from wood and iron, and lit with 100 incandescent light bulbs. The current incarnation, designed by Harlem - based architectural lighting firm Focus Lighting, features a computerized LED lighting system and an outer surface consisting of triangular crystal panels. These panels are produced by Waterford Crystal, and contain inscriptions representing a yearly theme. Since 2009, the current ball has been displayed atop One Times Square year - round, while the original, smaller version of the current ball that was used in 2008 has been on display inside the Times Square visitor 's center. The event is organized by the Times Square Alliance and Countdown Entertainment, a company led by Jeff Strauss, and is among the most notable New Year 's celebrations internationally: it is attended by at least 1 million spectators yearly, and is nationally televised as part of New Year 's Eve specials broadcast by a number of networks and cable channels. The prevalence of the Times Square ball drop has inspired similar "drops '' at other local New Year 's Eve events across the country; while some use balls, some instead drop objects that represent local culture or history. To facilitate the arrival of attendees, Times Square is closed to traffic beginning in the late afternoon on New Year 's Eve. The square is then divided into different viewing sections referred to as "pens '', into which attendees are directed sequentially upon arrival. Security is strictly enforced by the New York City Police Department (NYPD), even more so since the 2001 -- 02 edition in the wake of the September 11 attacks. Attendees are required to pass through security checkpoints before they are assigned a pen, and are prohibited from bringing backpacks or alcohol to the event. Security was increased further for 2017 -- 18 edition due to recent incidents such as the truck attack in New York on October 31, and the Route 91 Harvest festival shootings in Las Vegas; these included additional patrols of Times Square hotels, rooftop patrol squads and counter-snipers, and the installation of reflective markers on buildings to help officers identify the location of elevated shooters. Festivities formally begin in the early evening with the raising of the ball at around 6: 00 p.m. ET. 
Party favors are distributed to attendees, which have historically included large balloons, hats, and other items branded with the event 's corporate sponsors. The hours before the drop are preceded by hourly countdowns for the arrival of the new year in other countries, along with live music performances by popular musicians. Some of these performances are organized by, and aired on New Year 's Eve television specials which are broadcast from Times Square. The drop itself occurs at 11: 59 p.m. -- the last minute of the year, and is ceremonially "activated '' by a dignitary or celebrity joined on - stage by the current Mayor of New York City. The conclusion of the drop is followed by fireworks shot from the roof of One Times Square, along with the playing of "Auld Lang Syne '' by Guy Lombardo, "Theme from New York, New York '' by Frank Sinatra, "America the Beautiful '' by Ray Charles, "What a Wonderful World '' by Louis Armstrong, and "Over the Rainbow '' by IZ. Since the 1996 New Year 's Eve celebration, the current Mayor of New York City has been joined by a special guest, selected yearly to recognize their community involvement or significance, in ceremonially "activating '' the ball drop by pressing a button, resembling a smaller version of the ball itself, at exactly one minute to midnight. The button itself does not actually trigger the drop; that is done from a control room, synchronized using a government time signal. Special guests who have activated the ball drop have included: Since the 2005 -- 06 edition of the event, the drop has been directly preceded by the playing of John Lennon 's song "Imagine ''. Until 2009 -- 2010, the original recording was used; since 2010 -- 2011, the song has been performed by the headlining act; From 2001 - 02 until 2004 - 05, the song played immediately before the drop was Lee Greenwood 's God Bless the U.S.A.. It was sung originally in remembrance of 9 / 11, but the practice was retained for another three years. After the conclusion of the festivities and the dispersal of attendees, cleanup is performed overnight to remove confetti and other debris from Times Square before it is re-opened to the public the following morning. Few traces of the previous night 's celebration remain after the cleanup process is completed: following the 2013 -- 14 drop, the New York City Department of Sanitation estimated that by 8: 00 a.m., it had cleared over 50 tons of refuse from Times Square, using 190 workers from their own crews and the Times Square Alliance. The first New Year 's Eve celebration in Times Square was held on December 31, 1904; The New York Times ' owner, Adolph Ochs, decided to celebrate the opening of the newspaper 's new headquarters, One Times Square, with a New Year 's fireworks show on the roof of the building to welcome 1905. Close to 200,000 people attended the event, displacing traditional celebrations that had normally been held at Trinity Church. However, following several years of fireworks shows, Ochs wanted a bigger spectacle at the building to draw more attention to the area. The newspaper 's chief electrician, Walter F. Palmer, suggested using a time ball, after seeing one used on the nearby Western Union Building. Ochs hired sign designer Artkraft Strauss to construct a ball for the celebration; it was built from iron and wood, electrically lit with one hundred incandescent light bulbs, weighed 700 pounds (320 kg), and measured 5 feet (1.5 m) in diameter. 
The ball was hoisted on the building 's flagpole with rope by a team of six men. Once it hit the roof of the building, the ball was designed to complete an electric circuit to light a 5 - foot tall sign indicating the new year, and trigger a fireworks show. The first ever "ball drop '' was held on December 31, 1907, welcoming the year 1908. In 1913, only eight years after it moved to One Times Square, the Times moved its corporate headquarters to 229 West 43rd Street. The Times still maintained ownership of the tower, however, and Strauss continued to organize future editions of the drop. The original ball was retired in 1920 in favor of a new design; the second ball remained 5 feet (1.5 m) in diameter, but was now constructed from iron, weighing 400 pounds (180 kg). The ball drop was placed on hiatus for New Year 's Eve 1942 and 1943 due to wartime lighting restrictions during World War II. Instead, a moment of silence was observed at midnight in Times Square, accompanied by the sound of church bells and chimes played from sound trucks. The second ball was retired in favor of a third design in 1955; again, it maintained the same diameter of its predecessors, but was now constructed from aluminium, and weighed 150 pounds (68 kg). In 1981, the third ball was revamped in honor of the I Love New York campaign, with red lightbulbs and a green stem to give it the appearance of an apple. For 1988, organizers acknowledged the addition of a leap second earlier that day (leap seconds are appended at midnight UTC, which is five hours before midnight in New York) by extending the drop to 61 seconds, and by including a special one - second light show during the extra second. The original white lightbulbs returned to the ball for 1989, but were replaced by red, white, and blue bulbs in 1991 to salute the troops of Operation Desert Shield. The third ball was revamped again in 1995 for 1996, adding a computerized lighting system with 180 halogen bulbs and 144 strobe lights, and over 12,000 rhinestones. Lighting designer Barry Arnold stated that the changes were "something (that) had to be done to make this event more spectacular as we approach the millennium. '' The drop itself became computerized through the use of an electric winch synced with the National Institute of Standards and Technology 's time signal; the new system was not without issues, however, as a glitch caused the ball to pause for a short moment halfway through its descent. After its 44th use in 1999, the third ball was retired and placed on display at the Atlanta headquarters of Jamestown Group, owners of One Times Square. On December 28, 1998, during a press conference attended by New York City mayor Rudy Giuliani, organizers announced that the third ball would be retired for the arrival of the new millennium, and replaced by a new design constructed by Waterford Crystal. The year 2000 celebrations introduced more prominent sponsorship to the drop; companies such as Discover Card, Korbel Champagne, and Panasonic were announced as official sponsors of the festivities in Times Square. The city also announced that Ron Silver would lead a committee known as "NYC 2000 '', which was in charge of organizing events across the city for year 2000 celebrations. A full day of festivities was held at Times Square to celebrate the arrival of the year 2000, which included concerts and hourly cultural presentations with parades of puppets designed by Michael Curry, representing countries entering the new year at that hour. 
Organizers expected a total attendance exceeding 2 million spectators. The fourth ball, measuring 6 feet (1.8 m) in diameter and weighing 1,070 pounds (490 kg), incorporated a total of over 600 halogen bulbs, 504 triangle-shaped crystal panels provided by Waterford, 96 strobe lights, and spinning, pyramid-shaped mirrors. The ball was constructed at Waterford's factory in Ireland, and was then shipped to New York City, where the lighting system and motorized mirrors were installed. Many of the triangles were inscribed with "Hope"-themed designs changing yearly, such as "Hope for Fellowship", "Hope for Wisdom", "Hope for Unity", "Hope for Courage", and "Hope for Abundance". For 2002, as part of the theme "Hope for Healing", 195 of the ball's panels were engraved with the names of nations and organizations who were affected by or were involved in the aftermath of the September 11 attacks. In December 2011, the "Hope for Healing" panels were accepted into the collection of the National September 11 Museum. In honor of the ball drop's centennial anniversary, a brand new fifth design debuted for New Year's Eve 2008. Once again manufactured by Waterford Crystal with a diameter of 6 feet (1.8 m), but weighing 1,212 pounds (550 kg), it used LED lamps provided by Philips (which can produce 16,777,216, or 2^24, colors), with computerized lighting patterns developed by the New York City-based lighting firm Focus Lighting. The ball featured 9,576 energy-efficient bulbs that consumed the same amount of electricity as only 10 toasters. The 2008 ball was only used once, and was placed on display at the Times Square Visitors Center following the event. For 2009, a larger version of the fifth ball was introduced -- an icosahedral geodesic sphere lit by 32,256 LED lamps. Its diameter is twice that of the 2008 ball, at 12 feet (3.7 m), and it contains 2,688 Waterford Crystal panels, with a weight of 11,875 pounds (5,386 kg). It was designed to be weatherproof, as the ball would now be displayed atop One Times Square nearly year-round following the celebrations. Yearly themes for the ball's crystal panels continued; from 2008 to 2013, the ball contained crystal patterns that were part of a Waterford series known as "World of Celebration", including themes such as "Let There Be Light" and "Let There Be Peace". For 2014, all of the ball's panels were replaced, marking a new theme series known as "Greatest Gifts", beginning with "Gift of Imagination". The numerical sign indicating the year (which remains atop the tower along with the ball itself) uses Philips LED lamps. For 2014, the final two digits of the sign used bulbs from the company's "Hue" line of multi-color LED lamps, allowing them to have computerized lighting cues. According to the National Weather Service, from 1907 to 2016 the average temperature at midnight in Central Park was 34 °F (1 °C). The coldest event was in 1917, when the temperature was 1 °F (−17 °C) and the wind chill was −18 °F (−28 °C). The warmest temperature was 58 °F (14 °C), in 1965 and 1972. It has snowed during the ball drop just seven times out of 111 events (one being light snow) -- 1926, 1934, 1948, 1952, 1961, 1967, and 2009 -- and it has rained multiple times. Festivities in 2018 were the second-coldest on record due to an arctic air mass, forecast at 11 °F (−12 °C) before wind chill. As a public event, the festivities and ball drop are often broadcast on television.
As of 2016 - 17, a host feed of 21 cameras across Times Square is provided to broadcasters to incorporate into their coverage. The event is covered as part of New Year 's Eve television specials on several major U.S. television networks. By far the most notable of these is Dick Clark 's New Year 's Rockin ' Eve; created, produced, and originally hosted by the entertainer Dick Clark until his death in 2012, and currently hosted by Ryan Seacrest and Jenny McCarthy, the program first aired on NBC in 1972 before moving to ABC, where it has been broadcast ever since. New Year 's Rockin ' Eve has consistently been the most - watched New Year 's Eve special in the U.S. annually, peaking at 25.6 million viewers for its 2018 edition. Following the death of Dick Clark in April 2012, a crystal engraved with his name was added to the 2013 ball in tribute. Across the remaining networks, NBC broadcasts New Year 's Eve with Carson Daly, hosted from Times Square by Carson Daly of The Voice and Last Call while Fox has aired New Year 's specials covering Times Square with rotating hosts and themes, which were broadcast primarily under the title New Year 's Eve Live until 2014. From 2015 to 2017, Fox broadcast Pitbull 's New Year 's Revolution from Miami instead, but returned to New York - oriented coverage hosted by Steve Harvey for 2018. Spanish - language network Univision broadcasts ¡ Feliz!, hosted by Raúl de Molina of El Gordo y La Flaca, while Telemundo has the Bienvedido series with Daniel Sarcos from Un Nuevo Dia for 2017 and Jorge Bernal from ¡ Suelta La Sopa! for 2018. On cable, CNN carries coverage of the festivities, known as New Year 's Eve Live, which was historically hosted by Anderson Cooper and Kathy Griffin from Times Square. Griffin was removed from her role in 2017 after she published a controversial political photo; she was replaced by Andy Cohen for 2018. Fox News carries All - American New Year, which was most recently hosted by Elisabeth Hasselbeck and Bill Hemmer from Times Square. Since 2009, an official webcast of the ball drop and its associated festivities has been produced, streamed via Livestream.com. Beginning in the 1940s, NBC broadcast coverage from Times Square anchored by Ben Grauer on both radio and television. Its coverage was later incorporated into special episodes of The Tonight Show, continuing through Johnny Carson and Jay Leno 's tenures on the program. NBC would introduce a dedicated special, New Year 's Eve with Carson Daly, beginning in 2003. From 1956 to 1976, CBS was well known for its television coverage of the festivities hosted by bandleader Guy Lombardo from the ballroom of the Waldorf - Astoria Hotel in New York City, featuring his band 's famous rendition of "Auld Lang Syne '' at midnight. After Lombardo 's death in 1977, CBS and the Royal Canadians, now led by Victor Lombardo, attempted to continue the special. However, Guy 's absence and the growing popularity of ABC 's New Year 's Rockin ' Eve prompted CBS to eventually drop the band entirely. The Royal Canadians were replaced by a new special, Happy New Year, America, which ran in various formats with different hosts (such as Paul Anka, Donny Osmond, Andy Williams, Paul Shaffer, and Montel Williams) until it was discontinued after 1996. CBS, barring a special America 's Millennium broadcast for 2000, has no longer broadcast any national New Year 's programming since. For 2000, in lieu of New Year 's Rockin ' Eve, ABC News covered the festivities as part of its day - long telecast, ABC 2000 Today. 
Hosted by Peter Jennings, the broadcast featured coverage of millennium festivities from around the world, including those in New York City. Jennings was joined by Dick Clark as a special correspondent for coverage from Times Square. MTV had broadcast coverage originating from the network's Times Square studios at One Astor Plaza. For 2011, MTV also held its own ball drop in Seaside Heights, New Jersey, the setting of its popular reality series Jersey Shore, featuring cast member Snooki lowered inside a giant "hamster ball". Originally, MTV planned to hold the drop within its studio in Times Square, but the network was asked by city officials to conduct the drop elsewhere.
who is opening for mariah carey and lionel richie
All the Hits tour (Lionel Richie and Mariah Carey) - wikipedia The All the Hits Tour was a concert tour by American singer-songwriter Lionel Richie, with special guest Mariah Carey as the opening act. The tour was originally scheduled to begin on March 15, 2017, at the Royal Farms Arena in Baltimore, but was delayed due to Richie's longer-than-expected recovery from a knee procedure and instead began in summer 2017. Carey stated that she would arrange her setlist differently each night for fans who had seen her shows before.
who was hit hardest during the great depression
Cities in the Great Depression - wikipedia Throughout the industrial world, cities were hit hard during the Great Depression, beginning in 1929 and lasting through most of the 1930s. Worst hit were port cities (as world trade fell) and cities that depended on heavy industry, such as steel and automobiles. Service-oriented cities were hurt less severely. Political centers such as Washington, London and Berlin flourished during the Great Depression, as the expanded role of government added many new jobs. Canada's economy at the time was just starting to shift from primary industries (agriculture, fishing, mining and logging) to manufacturing. Exports and prices of raw materials plunged, and employment, prices and profits fell in every sector. Canada was the worst-hit country (after the United States) because of its economic position. It was further affected as its main trading partners were the U.S. and Britain. The hardest-hit cities were the heavy industry centers of Southern Ontario. They included Hamilton (Canada's largest steel center), Toronto, Tilbury, and Windsor, an automotive manufacturing center linked to its larger neighbour, Detroit. In Ontario, unemployment skyrocketed to roughly 45%. The Prairie Provinces and Western Canada were among the hardest-hit; they fully recovered only after 1939. The fall of wheat prices drove many farmers to the towns and cities, such as Calgary, Alberta; Regina, Saskatchewan; and Brandon, Manitoba. Women held 25-30% of the jobs in the cities. Few women were employed in heavy industry, railways or construction. Many were household workers or were employed in restaurants and family-owned shops. Women factory workers typically handled clothing and food. Educated women had a narrow range of jobs, such as clerical work and teaching. It was expected that a woman give up a good job when she married. Srigley emphasizes the wide range of background factors and family circumstances, arguing that "gender" itself was typically less important than race, ethnicity, or class. Singapore, at the time a British colony, was integrated into the world economy and suffered economic declines like other trading cities. However, the people of Singapore were resilient in coping. Those who remained in the city used complex relationships among Chinese kinfolk; they shared food, housing, and clothing, and minimized the negative impacts. They spread work around, and provided an intelligence network to assist relatives in finding temporary employment. The safety valve of emigration to rural areas reduced the overall negative impact. Although the impact of the Great Depression on Great Britain was less severe than elsewhere, the industrial cities of the Midlands, the North, and Scotland were very hard-hit. Liverpool and Manchester, with years of high unemployment, had already acquired a reputation as highly depressed areas. City leaders fought back, promoting a series of reforms and innovations in infrastructure that made them leaders in urban redevelopment. Grandiose projects included the Wythenshawe Estate, the Mersey Tunnel and the Manchester Central Library. They created jobs and boosted local economies and morale. Promoters strongly emphasized how the redevelopment projects presented new images of Liverpool and Manchester. One goal was to integrate the newly enfranchised voters, a strategy also employed by the Conservative party to engage with the mass electorate.
The worldwide Great Depression had a moderate impact on the French economy, which proved resilient. Conditions worsened in 1931 bringing hardships and a more somber mood. Unemployment rose, and hours of work were cut; however the price of food sharply declined, offsetting some of the hardship. The population of Paris declined slightly from its all - time peak of 2.9 million in 1921 to 2.8 million in 1936. The arrondissements in the center lost as much as twenty percent of their population, while the outer neighborhoods, gained ten percent. The low birth rate of Parisians was compensated by a new wave of immigration from Russia, Poland, Germany, eastern and central Europe, Italy, Portugal and Spain. Political tensions mounted in Paris with strikes, demonstrations and confrontations between the Communists and Front populaire on the extreme left and the Action Française on the extreme right. In Germany, the depression had reached its worst in 1932, with 6 million unemployed, spread throughout every city. From 1928 to 1932 unemployment in Berlin soared from 133,000 to 600,000. In Hamburg, a port city, the numbers went from 32,000 to 135,000. In Dortmund, in the Ruhr industrial region, it went from 12,000 to 65,000. Berlin verged on political chaos as Communist and Nazi paramilitary forces fought for control of the streets. Overall the Nazis were weakest in the largest cities, which were controlled by Socialist and Communist parties (and in Catholic areas, the Center party). After 1933, the Nazi government greatly expanded arms production, which reduced unemployment. Berlin, and the other cultural centers, were especially hard - hit. The publicly subsidized city and state theaters that were the center of cultural life took heavy cuts. After 1933, the Nazis imposed a new, heavily subsidized cultural order that glorified Nazi ideals and ridiculed the artistic achievements of the Weimar era. The Soviet Union was largely cut off from world affairs, and during the 1930s was engaged in a very large scale maneuver to force peasants off the land (using starvation as a weapon) and relocating them in industrial cities. Many factories, electrical plants, and transportation facilities were built along these lines. The most dramatic showpiece was the Moscow subway. The Moscow Metro opened in 1935 and immediately became the centerpiece of the transportation system. More than that it was a Stalinist device to awe and control the populace, and give them an appreciation of Soviet realist art. It became the prototype for future Soviet large - scale technologies. Lazar Kaganovich was in charge; he designed the subway so that citizens would absorb the values and ethos of Stalinist civilization as they rode. The artwork of the 13 original stations became nationally and internationally famous. For example, the Sverdlov Square subway station featured porcelain bas - reliefs depicting the daily life of the Soviet peoples, and the bas - reliefs at the Dynamo Stadium sports complex glorified sports and the physical prowess of the powerful new "Homo Sovieticus. '' (Soviet man). The metro was touted as the symbol of the new social order -- a sort of Communist cathedral of engineering modernity. Soviet workers did the labor and the art work, but the main engineering designs, routes, and construction plans were handled by specialists recruited from the London Underground. 
The Britons called for tunneling instead of the "cut - and - cover '' technique, the use of escalators instead of lifts, and designed the routes and the rolling stock. The paranoia of Stalin and the NKVD was evident when the secret police arrested numerous British engineers for espionage -- that is for gaining an in - depth knowledge of the city 's physical layout. Engineers for the Metropolitan Vickers Electrical Company were given a show trial and deported in 1933, ending the role of British business in the USSR. America 's larger cities in the 1920s enjoyed strong growth. With the end of large - scale immigration, populations stabilized and the plentiful jobs in the cities pulled families upwards in terms of social mobility. Investment in office buildings, stores, factories, utilities, streets, and, especially, apartments and single - family homes, added substantially to the infrastructure, and contributed to the notion that better times lay ahead. However, after 1929, the optimism ebbed away, overwhelmed by a deepening pessimism that made long - term private investment seem inadvisable. The Depression 's damage to large cities, suburbs, towns and rural areas varied according to the economic base. Most serious in larger cities was the collapse of the construction industry with new starts falling to less than 10 % of the norm of the late 1920s. Although much needed work was deferred, maintenance and repair of existing structures comprised over a third of the private sector construction budget in the 1930s. Devastating was the disappearance of 2 million high paying jobs in the construction trades, plus the loss of profits and rents that humbled many thousands of landlords and real estate investors. Second came the general downturn in industry, especially heavy manufacturing. Steel in Pittsburgh, Pennsylvania, and Gary, Indiana, and automobiles in Detroit took the heaviest hits, along with railroads and coal mining. In these sectors, the largest cities suffered somewhat less than smaller mill towns, mining camps and railroad centers. Unemployment was a problem everywhere, but it was less severe among women than men, among workers in non-durable industries (such as food and clothing), in services and sales, and in government jobs. A sharp educational gradient meant that the less skilled inner city men had much higher unemployment rates than the high - school and college educated men who lived in outer zones and suburbs. Although suburbia stopped growing, it did not suffer nearly as much as the central cities. While some unemployed came to the cities looking for relief, it appears that even larger numbers of unemployed returned to family farms. For the first time ever, the movement of native population was away from cities and toward rural America. The fiscal soundness of city and county governments was challenged by the rise in relief expenditures and the sharp fall in tax collections. The Hoover Administration had encouraged state and local governments to expand public works projects, which they did in 1930 and 1931. While this expansion may have slowed the rise in unemployment, the spending was a luxury that could not be borne in the face of falling tax revenues and the unwillingness of investors to put more money into municipal bonds. After 1933, new sales taxes and infusions of federal money helped relieve the fiscal distress. 
While local relief before 1932 focused on providing small sums of cash or baskets of food and coal for the neediest, the federal programs launched by Hoover and greatly expanded by the New Deal tried to use massive construction projects with prevailing wages to jumpstart the economy and solve the unemployment crisis. ERA, FERA, WPA and PWA built and repaired the public infrastructure in dramatic fashion but did little to foster the recovery of the private sector. In sharp contrast to Britain, where private housing construction pulled the country out of depression, American cities saw little private construction or investment, and so they languished in the economic doldrums even as their parks, sewers, airports and municipal buildings were enhanced. The problem in retrospect was that the New Deal 's investment in the public infrastructure had only a small "multiplier '' effect, in contrast to the high multiplier for jobs that private investment might have created. Franklin Delano Roosevelt had a magnetic appeal to the city dwellers -- he brought relief and recognition of their ethnic leaders and ward bosses, as well as labor unions. Taxpayers, small business and the middle class voted for Roosevelt in 1936 but turned sharply against him after the recession of 1937 - 38 seemed to belie his promises of recovery. Roosevelt 's New Deal Coalition discovered an entirely new use for city machines in his three reelection campaigns of the New Deal and the Second World War. Traditionally, local bosses minimized turnout so as to guarantee reliable control of their wards and legislative districts. To carry the electoral college, however, Roosevelt needed to carry the entire state, and thus needed massive majorities in the largest cities to overcome the hostility of suburbs and towns. With Harry Hopkins his majordomo, Roosevelt used the WPA as a national political machine. Men on relief could get WPA jobs regardless of their politics, but hundreds of thousands of well - paid supervisory jobs were given to the local Democratic machines. The 3.5 million voters on relief payrolls during the 1936 election cast 82 % of their ballots for Roosevelt. The vibrant labor unions, heavily based in the cities, likewise did their utmost for their benefactor, voting 80 % for him, as did Irish, Italian and Jewish voters. In all, the nation 's 106 cities over 100,000 population voted 70 % for FDR in 1936, compared to his 59 % elsewhere. Roosevelt won reelection in 1940 thanks to the cities. In the North, the cities over 100,000 gave Roosevelt 60 % of their votes, while the rest of the North favored Willkie 52 % - 48 %. It was just enough to provide the critical electoral college margin. With the start of full - scale war mobilization in the summer of 1940, the cities revived. The new war economy pumped massive investments into new factories and funded round - the - clock munitions production, guaranteeing a job to anyone who showed up at the factory gate. The American mafia and some other organized crime syndicates, which had emerged during Prohibition, usually retained power despite heavy pressure from the FBI and federal authorities. Those mob figures that had not been shut down by the authorities often already ran powerful business empires and, though the declining economy severely challenged them, the desperation of the unemployed and underemployed working class often increased their power and influence. Gambling, prostitution, and loansharking provided substitutes for illegal liquor. 
Some cities managed to thrive during the Depression because of the economic activity generated by criminal enterprises. Atlantic City, long an established resort town, struggled during the Depression but managed to maintain a strong economy in large part due to illegal gambling activities with an unlimited supply of customers from Philadelphia and New York City. Galveston, Texas, was one of the most successful examples; the island city, known as the Free State of Galveston and run by the Maceo syndicate, became a major resort town due to its lavish, illegal casino districts, enabled by a corrupt law enforcement environment. The small desert town of Las Vegas, Nevada, began to develop based on vice businesses during this period, with the added advantage that the laws there were much less strict. In the American colony of the Philippines, the main political dispute over independence was settled by 1934 with the decision that the Philippines would become independent in 1946. The Great Depression was much less severe there than in the United States, primarily because the sharp drop in the cost of food worked to the benefit of the working class in the cities. Washington provided much of the funding for a large middle-class bureaucracy and for major construction projects.
when did the train service start in india which places did it connect find out
History of rail transport in India - wikipedia Rail transport in India began during the early nineteenth century. India 's first railway proposals were made in Madras in 1832. The Red Hill Railway, the country 's first train, ran from Red Hills to Chintadripet bridge in Madras in 1837. It was hauled by a rotary steam - engine locomotive manufactured by William Avery. Built by Arthur Cotton, the railway was primarily used to transport granite stone for road - building work in Madras. In 1845, the Godavari Dam Construction Railway was built at Dowleswaram in Rajahmundry. Also built by Cotton, it supplied stone for the construction of a dam over the Godavari River. On 8 May 1845, the Madras Railway was incorporated, followed that year by the East India Railway. On 1 August 1849, the Great Indian Peninsular Railway was incorporated by an act of parliament. The "guarantee system '', providing free land and a guaranteed five - percent rate of return to private British companies willing to build railways, was finalized on 17 August 1849. In 1851, the Solani Aqueduct Railway was built in Roorkee. It was hauled by the Thomason steam locomotive, named after a British officer - in - charge of that name. The railway transported construction materials for an aqueduct over the Solani River. In 1852, the Madras Guaranteed Railway Company was incorporated. The country 's first passenger train, which ran between Bombay 's Bori Bunder station and Thane on 16 April 1853, was dedicated by Lord Dalhousie. The 14 - carriage train was hauled by three steam locomotives: the Sahib, Sindh, and Sultan. Travelling 34 kilometres (21 mi), the train carried 400 people. The passenger line was built and operated by the Great Indian Peninsula Railway (GIPR). It was built in 1,676 mm (5 ft 6 in) broad gauge, which became the country 's standard for railways. The first passenger train in eastern India ran from Howrah (near Calcutta) to Hoogly, a distance of 24 miles (39 km), on 15 August 1854. The line was built and operated by the East Indian Railway Company (EIR). In May 1854, the Bombay -- Thane line was extended to Kalyan with the Dapoorie viaduct over the Ulhas River (India 's first railway bridge). That year, the GIPR opened its first workshops in Byculla. In 1855, the BB&CI Railway was incorporated. That August, the EIR Express and Fairy Queen steam locomotives were introduced. South India 's first passenger train ran from Royapuram -- Veyasarapady (Madras) to Wallajah Road in Arcot, a distance of 60 miles (97 km), on 1 July 1856. It was built and operated by the Madras Railway. The Madras Railway 's first workshop opened in Perambur (near Madras) that year, and the Bombay - Thane line was extended to Khopoli. In 1858, the Eastern Bengal Railway was incorporated. India 's first tramway (a horse - drawn tramway) opened in Calcutta between Sealdah and Armenian Ghat Street, a distance of 3.8 kilometres (2.4 mi), on 24 February 1873. The following year, the Great South Indian and Carnatic Railways merged to form the South Indian Railway Company. On 9 May 1874, a horse - drawn tramway began operation in Bombay between Colaba and Parel. The Calcutta Tramways Company was incorporated in 1880, followed a decade later by the East Coast State Railway. Lighting in passenger coaches was introduced by many railway companies in 1897. In 1902, the Jodhpur Railway was the first to introduce electric lighting as standard fixtures. Electric signal lighting was introduced between Dadar and Currey Road in Bombay in 1920. 
The first railway budget was presented in 1925. On 3 February 1925, the first electric passenger train in India ran between Victoria Terminus (VT) and Kurla on 1,500 V DC overhead traction. Cammell Laird and Uerdingenwagonfabrik manufactured the locomotives for this train. The VT-Bandra section was electrified (with an elevated platform at Sandhurst Road) and the Oudh and Rohilkhund Railway was merged with the EIR in the same year. In 1926, the Kurla-Kalyan section was electrified with 1,500 V DC. Electrification to Poona and Igatpuri (both 1,500 V DC) over the Bhore and Thal Ghats was also completed, and the Charbagh railway station in Lucknow was built that year. The Bandra-Virar section was electrified with 1,500 V DC in January 1928. The Frontier Mail made its inaugural run between Bombay VT and Peshawar in 1928. The country's first automatic color-light signals became operational on the GIPR's lines between Bombay VT and Byculla. In 1928, the Kanpur Central and Lucknow stations opened. The Grand Trunk Express began running between Peshawar and Mangalore, the Punjab Limited Express began running between Mumbai and Lahore, and automatic color-light signaling was extended to the Byculla-Kurla section the following year. On 1 June 1930, the Deccan Queen began service (hauled by a WCP-1 -- No. 20024, old number EA/1 4006) with seven coaches on the GIPR's electrified route from Bombay VT to Poona (Pune). The Hyderabad Godavari Valley Railway was merged into Nizam's State Railway and the route of the Grand Trunk Express was changed to Delhi-Madras that year. The re-organisation of railways in India into regional zones began in 1951. On 14 April of that year, the Southern Railway zone was created. On 5 November, the Central and Western Railway zones were created. That year, the post of Chief Commissioner of Railways was abolished and the Railway Board adopted the practice of making its senior-most member the chairman of the board. The government of West Bengal also entered into an agreement with the Calcutta Tramways Company to take over its administrative functions that year. The Northern, Eastern and North Eastern Railway zones were created on 14 April 1952. Fans and lights were mandated for all compartments in all classes of passenger accommodations in 1952, and sleeping accommodations were introduced in coaches. On 1 August 1955, the South-Eastern zone was split off from the Eastern Railway zone. A divisional system of administration was established for the zones in 1956, and the first fully air-conditioned train was introduced (between Howrah and Delhi). In 1957, after successful trials in France, SNCF proposed 25 kV AC electrification for India's railways. Indian Railways decided to adopt 25 kV AC electrification, choosing SNCF as a technical consultant. The Main Line Electrification Project (which later became the Railway Electrification Project and, still later, the Central Organisation for Railway Electrification) was established that year. In 1958, the Northeast Frontier Railway zone split off from the North Eastern zone. In 1959, Raj Kharswan to Dongoposi was the first section electrified with 25 kV AC traction. The first scheduled train using 25 kV AC traction ran on the Raj Kharswan-Dongoposi section on 11 August 1960.
The first containerized freight service began between Bombay and Ahmedabad in 1966, and 25 kV AC electrification of several suburban tracks around Delhi, Madras and Calcutta was completed. In 1979, the Main Line Electrification Project became the Central Organization for Railway Electrification (CORE). India 's first metro train ran from Esplanade to Bhowanipur (now the Netaji Bhawan station) in Calcutta on 24 October 1984, and the Calcutta Metro was the country 's first rapid - transit line. In 1986, computerized ticketing and reservations were introduced in New Delhi. The Shatabdi Express, India 's fastest train, was introduced between New Delhi and Jhansi in 1988; the line was later extended to Bhopal. In 1990, the first self - printing ticket machine (SPTM) was introduced in New Delhi. Air - conditioned, three - tier coaches and a sleeper class (separate from Second Class) were introduced in 1993. On 16 January 1995, the first regularly - scheduled service with 2 x 25 kV traction began on the Bina - Katni line. In September 1996, the CONCERT system of computerized reservations began in New Delhi, Mumbai and Chennai. In 1998, coupon - validating machines (CVMs) were introduced at Mumbai CST. The CONCERT system became operational nationwide on 18 April 1999; the South East Central Railway zone was established and credit cards were accepted for tickets and reservations at some stations that year. In February 2000, the Indian Railways website went online. On 6 July 2002, the East Coast, South Western, South East Central, North Central and West Central Railway zones were created. Indian Railways (IR) began online train reservations and ticketing on 3 August of that year, with Internet ticketing extended to many cities on 1 December. On 5 February 2012, The Western Railway zone (WR) ended its use of 1,500 V DC traction, switching to 25 kV AC traction. The Tatkal system of ticketing was extended to all trains on 26 September 2013. Gatimaan Express, India 's fastest train with a maximum speed of 160 km / h, made its maiden journey from Delhi to Agra on 5 April 2016. The Central Railway zone (CR) ended its use of DC traction in the Mumbai area and on the country 's main - line rail network, switching to 25 kV AC traction on 11 April of that year. On 31 March 2017, IR announced that India 's entire rail network would be electrified by 2022.
when are the golden globe awards in 2018
75th Golden Globe Awards - wikipedia The 75th Golden Globe Awards honored film and American television of 2017 and was broadcast live on January 7, 2018, from The Beverly Hilton in Beverly Hills, California, beginning at 5:00 p.m. PST / 8:00 p.m. EST on NBC. The ceremony was produced by Dick Clark Productions in association with the Hollywood Foreign Press Association. Talk-show host Seth Meyers hosted the ceremony for the first time. Oprah Winfrey was announced as the Cecil B. DeMille Lifetime Achievement Award honoree on December 13, 2017. The nominees were announced on December 11, 2017, by Sharon Stone, Alfre Woodard, Kristen Bell and Garrett Hedlund. Three Billboards Outside Ebbing, Missouri won the most awards for the evening with four, including Best Motion Picture -- Drama. The Shape of Water and Lady Bird won two awards each. Big Little Lies, The Handmaid's Tale and The Marvelous Mrs. Maisel were among the television shows that received multiple awards. Seventeen films and fourteen television programs received multiple nominations, and three programs received multiple wins. During a pre-show event, the award for "Best Podcast" was announced; the event was streamed live on YouTube. In support of the #MeToo and Time's Up movements, nearly the entire audience wore black. Many of the acceptance speeches specifically mentioned these causes, including that of Oprah Winfrey. Previously known as Miss or Mr. Golden Globe, the title was changed for this ceremony to Golden Globe Ambassador to better reflect inclusiveness. The inaugural ambassador was Simone Garcia Johnson, daughter of Dwayne Johnson and Dany Garcia. Due to the Weinstein effect, many attendees wore black in support of the Time's Up movement, and wore corresponding #MeToo pins. Tarana Burke, who created the "Me Too" movement in 2006, attended the awards as a guest of Michelle Williams. Other activists attended the ceremony as guests as well: Rosa Clemente as a guest of Susan Sarandon, Saru Jayaraman as a guest of Amy Poehler, Billie Jean King as a guest of Emma Stone, Marai Larasi as a guest of Emma Watson, Calina Lawrence as a guest of Shailene Woodley, Ai-jen Poo as a guest of Meryl Streep, and Mónica Ramírez as a guest of Laura Dern. The ceremony averaged a Nielsen 5.0 rating / 18 share, and was watched by 19.0 million viewers. This was a five percent decline from the previous ceremony's viewership of 20.02 million, making it the least-watched ceremony since 2012. No "In Memoriam" segment was broadcast on television during the ceremony, so the HFPA instead included a slideshow of the names on its website.
who plays lani price on days of our lives
Lani Price - wikipedia Lani Price is a fictional character from the original NBC daytime soap opera Days of Our Lives, portrayed by Sal Stowers. Lani is introduced as a new police cadet at the Salem Police department and later reveals herself to be the illegitimate daughter of Mayor Abe Carver (James Reynolds). Lani is very driven and focused on her career. Lani forms a close bond with her younger half-brother Theo Carver (Kyler Pettis) and also falls for Shawn-Douglas Brady (Brandon Beemer), who has recently separated from his wife. She is also instrumental in bringing down Ben Weston (Robert Scott Wilson), the "Necktie Killer." Lani leaves town when Shawn-Douglas reunites with his ex-wife. The character returns to town in December 2016 to visit her father Abe, who has recently been shot. A paranoid Theo ropes her into investigating their father's new love interest, Valerie Grant (Vanessa A. Williams). She is the love interest of JJ Deveraux (Casey Moss). Lani is introduced as a rookie police officer at the Salem Police Department. She takes an interest in the mayor -- Abe Carver. She also takes a liking to veteran detective Rafe Hernandez (Galen Gering). Abe sees pictures on Lani's computer and realizes she is the daughter of singer Tamara Price (Marilyn McCoo), his ex-flame. Lani and Rafe attend Salem's bicentennial celebration together, where Abe confronts Lani over his suspicions about her paternity and she confirms that she is his daughter. Abe makes plans to introduce his autistic son Theo (Kyler Pettis) to Lani but wants to ease into it. Lani forms a connection with Theo when she helps save his friend Joey Johnson (James Lastovic) when he collapses during the party. Lani uncovers that Ben Weston (Robert Scott Wilson) is the serial killer that has been terrorizing Salem. Lani, along with fellow officer JJ Deveraux (Casey Moss), tracks Ben to a nearby town, where she successfully arrests him. Theo struggles to accept the revelation that Lani is his sister, but when he finally does, Theo becomes very territorial -- even around their father. Theo confides in Lani that he is being bullied but he swears her to secrecy. Lani continues flirting with Rafe but she soon recognizes that he is in love with Hope Brady (Kristian Alfonso). Lani gets a new partner in Hope's son, Shawn-Douglas Brady (Brandon Beemer), which leads to romance. Lani falls hard for Shawn, but before they can consummate their relationship, Shawn reunites with his estranged wife Belle Black (Martha Madison) and Lani moves back to Miami. Lani, now a detective, returns to Salem in December 2016 to check up on Abe, who is recovering from being shot. Lani is very grateful to Doctor Valerie Grant (Vanessa A. Williams) for saving her father's life. However, Theo suspects Valerie is keeping a secret and Lani reluctantly agrees to help him uncover it. On New Year's Eve, Lani confronts JJ Deveraux and reveals that she is the woman with whom he had a one-night stand. JJ explains that he was blackout drunk and does not remember, and they agree to just be friends. In 2017, Lani and JJ are paired together on the Orwell case. They soon start dating despite Abe's disapproval. The series released the casting call for the contract role, then known as Maya, in September 2014. The character was due to start filming in late 2014 and expected to air by Spring 2015.
On June 30, 2015, it was announced that model and actress Sal Stowers, known to daytime audiences for her portrayal of Cassandra Foster on the short-lived online reboot of All My Children, had been cast in the role. Stowers was the winner of season 9 of the reality television series America's Next Top Model. The actress was slated to make her first appearance in September 2015. "So beyond excited and honored to announce that I have joined the Emmy-winning cast of Days Of Our Lives!" Stowers said in celebration of her new gig. Stowers filmed her first scenes in April 2015. On September 2, 2015, Stowers confirmed through Twitter that she would debut in the role of Lani on September 25. Stowers auditioned opposite Galen Gering and beat out 8 other actresses for the coveted role. Stowers described the casting process in an interview with Soap Opera Digest. "It was 6 a.m. and there was this huge adrenaline rush right before the test." She was instructed by the casting director Marnie Saitta to do push-ups to help calm her nerves. Stowers booked the role the same day. She was still a bit nervous, but then the actress told herself "There was a reason why I got the job." She taped three episodes on her first day. The casting reunited Stowers with her All My Children co-star Robert Scott Wilson. To celebrate her new gig, Stowers planned a brunch to coincide with her first airshow. "I love playing Abe's daughter and being a part of his family," Stowers said of the role. "I'm so passionate about Lani," she continued. The original casting notice described the character as a "smart and drop-dead gorgeous" African-American woman between the ages of 25 and 30 who was a "confident, dedicated professional" with a "sensitive side." According to Stowers, "Lani is very smart." She is also "very driven and really good at her job." Stowers described Lani as having "a power in her." Of her character, Stowers said Lani "came with a goal." Lani is "very determined. When she finds out what she wants, she goes after it and won't let anything stop her." When Stowers was invited back to play Lani in 2016, she actively decided to change the way she portrayed Lani. "I wanted to approach her differently and bring a boldness to Lani, showing her as strong." In addition, the writers wanted to explore the character's personality more. Lani is more of a risk taker when she comes back. According to Stowers, Lani has grown so much and "has more of an opinion now." Stowers further stated that "Lani owns her power and is committed to her job." The actress later explained that "(Lani) always tries to be Superwoman." There was initial speculation that Lani would share a connection with Billy Flynn's character, Chad DiMera. Upon her introduction, Lani is still settling into her new home, "so we are seeing Lani on her best behavior," Stowers remarked. She is also trying to build a relationship with her father Abe, whom her mother kept "hidden from Lani her whole life." Lani is determined to ingratiate herself into the Salem Police department and "she's putting everything into her work." In the beginning, Stowers said, "there was a mystery to Lani." As the mayor's daughter, "Lani kind of walked on eggshells." In March 2016, Daytime Confidential reported that Stowers was one of several cast members released from their contracts when the series once again replaced the writing team. Stowers had actually filmed her departure scenes at the end of 2015, only a few months after her onscreen debut.
However, the series kept quiet about her impending departure. When the news finally broke, Stowers took to Twitter to thank fans for their love and support. Of Lani's unexpected departure in June 2016, Stowers said, "I just went with what was happening." In September 2016, rumors circulated that Stowers would reprise her role, as the actress had appeared in the cast photo for the 13,000th episode, which had been filmed back in July 2016. However, Stowers' return was not officially confirmed by the network until the week before her first re-appearance on December 19. Stowers was hesitant when she was suddenly contacted by the casting director about returning to the role of Lani. "I was excited but at first, I didn't believe it," the actress explained. However, Stowers revealed that she expected Lani to come back at some point because of her connection to Abe. Her return was kept secret, so it was just as much a surprise to her co-stars as it had been to the actress herself. However, Stowers was excited because she missed working with everyone. Stowers said her hiatus helped to develop her as an actress and as a person. "When I walk onto the set now I'm very confident in what I've created with Lani." She continued, "I've definitely grown, not only as an artist, but also as a human... Giving Lani a voice, I also gave myself a voice. That's not easy to do, especially (with) a new job." Stowers was grateful that Lani was much more integrated into the canvas upon her return. "Everyone respects her. She's making great friendships. I think in the beginning (...), that wasn't there." Lani comes back to town with a "secret." Though Lani's main reason for returning is to visit her ailing father, it is revealed that she is the woman with whom JJ Deveraux (Casey Moss) had a drunken one-night stand that he could not remember. Initially, Lani shows an interest in Rafe Hernandez (Galen Gering), a fellow member of the Salem Police Department. When she realizes he is interested in Hope Brady (Kristian Alfonso), Lani hooks up with Hope's son Shawn-Douglas Brady (Brandon Beemer) when he separates from his wife. When Shawn reconciles with his wife, Lani skips town. According to Sal Stowers, Lani "loves love" and she wants it just as much as she wants a career. However, she needs a partner she can have that with. Upon her reintroduction, the writers pair Lani with JJ -- another one of her colleagues. Stowers liked the pairing "because they're fun," she stated. "They have this very playful chemistry but they also really get each other." However, Lani feels the rookie JJ will let their relationship interfere with their work, so she tries to cut him out of things. This backfires when Lani is abducted by a drug dealer and forced to take drugs. She experiences withdrawal and at the same time questions the future of her relationship with JJ. "She wonders if this is what he really wants or if it's just something to get him by." Stowers relished the opportunity to show her acting chops as Lani suffered from withdrawals after ingesting halo. "I spent a lot of time researching what addicts go through during withdrawal -- I didn't know there was fear, loneliness, and hot and cold sensations, feeling like your skin is on fire." Stowers admitted that she had to go to a "dark place" but she was up for the challenge. "I'm a very positive, happy person, so to put myself in a dark place was scary." Lani is adamant about keeping JJ out of her struggles because "she had something to prove to herself" and JJ.
"She wanted to show JJ that she could hold it together. '' Stowers said it was easy work with Casey Moss and credited her costar with making her feel comfortable. At the same time, Lani bonds with Eli Grant (Lamon Archey) who helps her hide her withdrawals from everyone including her father and JJ. While their is a connection between the two, Stowers felt it would be a bit "awkward '' due to Abe 's relationship with Eli 's mom Valerie Grant (Vanessa A. Williams). Eli and Lani initially clash but she admires how good he is at his job. Stowers was excited to work with Archey again, as they had worked together in their modeling days. "We 're super-comfortable around each other because we 've known each other for so long. '' Trey from Rickey.org said of Stowers casting "Looks like Days of our Lives is about to get some well - needed diversity on their canvas! '' Jamey Giddens said Stowers ' firing "proves once again that while discussion of diversity are of growing important in primetime and film, daytime is completely regressing in this area. '' Ryan White - Nobles from TVSource Magazine said the lack of development for the character left him unable to feel anything for Lani. However, he continued, "I will say it 's disappointing that once again; an actress of color remains an afterthought in a genre that once placed a value on showcasing diversity. '' Michael Goldberg from Serial Scoop said "here is hoping that Lani works better as a character this time around then she did last time '' and believed Lani 's interactions with JJ Deveraux would benefit the character.
who sang what the world needs now is love
What the World Needs Now is Love - wikipedia "What the World Needs Now Is Love '' is a 1965 popular song with lyrics by Hal David and music composed by Burt Bacharach. First recorded and made popular by Jackie DeShannon, it was released on April 15, 1965, on the Imperial label after a release on sister label Liberty Records the previous month was canceled. It peaked at number seven on the US Hot 100 in July of that year. In Canada, the song reached number one. The song was originally offered to Dionne Warwick, who turned it down at the time, though she later recorded it for her album Here Where There Is Love. (Warwick also recorded a second version in 1996, which scraped the lower reaches of the US Hot 100.) Bacharach initially did not believe in the song, and was reluctant to play it for DeShannon. DeShannon 's version was recorded on March 23, 1965, at New York 's Bell Sound studios. Bacharach arranged, conducted and produced the session. In 1966, The Supremes recorded the song for their album Reflections. In 2011, Ronan Keating recorded the song for his album When Ronan Met Burt. In 2013, Kree Harrison covered the song in the 12th season of American Idol; the studio version was recorded by Idol Studio Recordings. On June 15, 2016, the song was recorded by Broadway for Orlando, with all proceeds going to the victims of the Orlando nightclub shooting. On Dec. 22, 2016, the Mighty Mighty Bosstones announced the upcoming release of their version of the song. On Mar. 24, 2017, it was re-recorded by Hans Zimmer, Steve Mazzaro & Missi Hale for the movie The Boss Baby. "What the World Needs Now Is Love '' has been recorded or performed live by over 100 artists, including: In addition to the DeShannon hit recording and the numerous cover versions, "What the World Needs Now is Love '' served as the basis for a distinctive 1971 remix. Disc jockey Tom Clay was working at radio station KGBS in Los Angeles, California, when he created the single "What the World Needs Now is Love / Abraham, Martin and John '', a social commentary that became a surprise hit record that summer. The song begins with a man asking a young boy to define such words as bigotry, segregation, and hatred (to which the boy says he does n't know); he says that prejudice is "when someone 's sick ''. Following that is a soundbite of a drill sergeant leading a platoon into training, along with gunfire sound effects, after which are snippets of the two songs -- both as recorded by The Blackberries, a session recording group. Interspersed are excerpts of speeches by John F. Kennedy, Robert F. Kennedy, and Martin Luther King, Jr., the eulogy given by Ted Kennedy after Robert 's assassination, and soundbites of news coverage of each one 's assassination. The ending of the song is a reprise of the introduction. "What the World Needs Now is Love / Abraham, Martin and John '' rose to No. 8 on the Billboard Hot 100 in August 1971, and was Clay 's only Top 40 hit. "What the World Needs Now is Love '' has been used in many film soundtracks, notably Bob & Carol & Ted & Alice and For the Love of Fred (used as the closing theme song in both), Austin Powers: International Man of Mystery, My Best Friend 's Wedding, Bridget Jones: The Edge of Reason, Hot Shots!, Happy Gilmore, and Forrest Gump. In the Danish zodiac porn comedy I Jomfruens tegn (1973), an extended version is used for the hardcore underwater orgy that ends the film.
The song builds upon the theme of "Stowaway in the Sky '', composed in 1960 by Jean Prodromidès for the film of the same title.
according to the rokeach value survey which of these values is not considered an instrumental value
Rokeach Value Survey - wikipedia The Rokeach Value Survey (RVS) is a values classification instrument. Developed by social psychologist Milton Rokeach, the instrument is designed for rank - order scaling of 36 values, comprising 18 terminal and 18 instrumental values. The task for participants in the survey is to arrange the 18 terminal values, followed by the 18 instrumental values, into an order "of importance to YOU, as guiding principles in YOUR life ''. The RVS has been studied in the context of personality psychology, behavior, marketing, social structure and cross-cultural studies. There have been a number of attempts to reduce the 18 instrumental values and 18 terminal values into a set of underlying factors, but without consistent success. Attempts have included those by Feather and Peay in 1975 and by Charles Johnston in 1995. Rokeach 's RVS is based on a 1968 volume (Beliefs, Attitudes, and Values) which presented the philosophical basis for the association of fundamental values with beliefs and attitudes. His value system was instrumentalised into the Rokeach Value Survey in his 1973 book The Nature of Human Values. Terminal Values refer to desirable end - states of existence. These are the goals that a person would like to achieve during his or her lifetime, and they vary among different groups of people in different cultures. The terminal values in the RVS are: Instrumental Values refer to preferable modes of behavior, or means of achieving the terminal values. The Instrumental Values are: Keith Gibbons and Iain Walker question whether the values included in the RVS are the ones that are critical. They argue that Rokeach, who started with several hundred values suggested by 130 individuals and a literature review, used inadequate criteria for reducing the values. They also questioned the validity of Rokeach 's measures, suggesting that when people rank the values they may not even be ranking the same referents.
list of high school subjects in south africa
National Senior certificate - wikipedia The National Senior Certificate or NSC is a high school diploma and is the main school - leaving certificate in South Africa. This certificate is commonly known as the matriculation (matric) certificate, as grade 12 is the matriculation grade. The NSC, previously known as the Further Education and Training Certificate or FETC, replaced the Senior Certificate with effect from 2008 and was phased in starting with grade 10 in 2006. Pupils study at least seven subjects, including two compulsory official South African languages, either Mathematics or Mathematical Literacy, Life Orientation and three elective subjects. Subjects are all taken on the same level - there is no higher or standard grade as in the past. The official pass grade is 30 %. The mean mark in any subject is usually about 55. Only a small proportion of candidates score an ' A ' in any subject (from as little as 2 % to a maximum of about 10 % in subjects taken by highly select groups.) A further 8 -- 15 % are likely to gain a ' B ' and about 20 -- 25 % achieve a ' C ' grade. The National Senior Certificate is a group certificate and records an aggregate mark. The Department of Basic Education has responsibility for general educational policy to be implemented by nine provincial education departments and private providers such as the Independent Examinations Board (IEB). There are nine provincial examination boards and three independent boards, of which the IEB is the biggest. The IEB operates on a national level catering primarily for independent schools. Learners study at least 7 subjects - 4 compulsory and at least 3 electives. All subjects are written on one grade only and are no longer written on Higher or Standard Grade. Not all schools offer the full range of Elective subjects listed here. Each school may offer subjects specific to its academic orientation. For example, Agriculture Schools offer the agriculture - orientated subjects whereas technical Schools offer the practical and mechanical - orientated subjects. At least 3 subjects from the following: Life Orientation (colloquially abbreviated as "LO '') has been introduced into the senior high school phase as an examination subject. Life Orientation is a broad - learning subject that covers non-academic skills needed in everyday life. Life Orientation is examined, marked and moderated internally and comprises the following sections: There are three types of subjects: Continuous Assessment (CASS) includes all the tests, examinations, tasks, activities, orals and projects done throughout the year. Matric results are usually out of 400 marks. Language compensation is described by several sources: "To the final mark is added the language compensation, which is 5 % of the mark attained by the candidate for all non-language subjects, for candidates whose mother tongue is not English or Afrikaans. The 5 % compensates learners for the disadvantage suffered by these candidates being instructed in a language that is not their mother tongue. '' (Written reply to parliamentary question in 2011) "The compensation applies to learners whose first language is neither English nor Afrikaans and who offer an African language as their first language. They receive an additional 5 % on their non-language subjects. The measure was first introduced in 1999 by the South African Certification Council. 
'' (Written reply to parliamentary question in 2007) Dr Sizwe Mabizela, Chairperson of Council, Umalusi, has provided further explanation: "This is the most misunderstood concept in this country. In terms of the policy on language compensation, learners who offer an African language as Home Language and do not offer Afrikaans or English as Home Language qualify for a 5 % language compensation on the mark they have obtained in a non-language subject. For example a learner who obtains a mark of zero (0) out of 300 will obtain 5 % of zero (which is zero) for language compensation; a learner who obtains 10 out of 300 will receive 5 % of 10, which is 0.5 marks, for language compensation; a learner who obtains 100 out of 300 marks will obtain an additional 5 marks for language compensation. '' This kind of compensation has some impact on pass rates, but its most significant impact is at the upper end of the scale, affecting those applying for admission to university. For example, a qualifying learner obtaining 95 % would receive 95 x 1.05 = 99.75 % (which rounds to 100 %), while a learner obtaining 40 % would receive 40 x 1.05 = 42.0 %. In addition to minimum grades required in each subject, universities set their own entrance tests and / or use the National Benchmark Tests (NBT). To study for a bachelor 's degree at a South African university requires that the applicant has at least an NSC endorsed by Umalusi, with a pass of 30 % in the chosen university 's language of learning and teaching, as well as a level 4 or higher in the following list of designated, 19 - credit subjects:
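To make the language compensation arithmetic described above concrete, the short Python sketch below applies the 5 % adjustment to a single non-language subject. It is an illustration only, under the assumption that the compensation is applied multiplicatively to the mark already obtained and capped at the subject maximum; the function name and the marks used are hypothetical, not an official formula.

# Sketch of the 5% language compensation described above (assumptions noted above).
def compensated_percentage(raw_mark, max_mark, qualifies):
    # qualifies: the learner offers an African Home Language and neither
    # English nor Afrikaans as Home Language (per the Umalusi explanation).
    mark = raw_mark * 1.05 if qualifies else raw_mark  # add 5% of the mark itself
    mark = min(mark, max_mark)                         # assumption: never exceed the subject maximum
    return round(100.0 * mark / max_mark, 2)           # percentage, rounded to two decimals

# Examples matching the text: 95% becomes 99.75% (which rounds to 100%), 40% becomes 42%.
print(compensated_percentage(285, 300, True))  # 99.75
print(compensated_percentage(120, 300, True))  # 42.0
print(compensated_percentage(0, 300, True))    # 0.0, since 5% of zero is zero

Written this way, the compensation scales with the mark already earned, which is why its absolute effect is largest for high-scoring candidates.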
control your emotions or be consumed by them meaning in tamil
Self - control - wikipedia Self - control, an aspect of inhibitory control, is the ability to regulate one 's emotions, thoughts, and behavior in the face of temptations and impulses. As an executive function, self - control is a cognitive process that is necessary for regulating one 's behavior in order to achieve specific goals. A related concept in psychology is emotional self - regulation. Self - control is often likened to a muscle: according to some studies, self - regulation, whether emotional or behavioral, functions like a limited energy resource. In the short term, overuse of self - control will lead to depletion; in the long term, however, the use of self - control can strengthen and improve over time. Self - control is also a key concept in the general theory of crime, a major theory in criminology. The theory was developed by Michael Gottfredson and Travis Hirschi in their book titled A General Theory of Crime, published in 1990. Gottfredson and Hirschi define self - control as the differential tendency of individuals to avoid criminal acts independent of the situations in which they find themselves. Individuals with low self - control tend to be impulsive, insensitive towards others, risk takers, short - sighted, and nonverbal. The general theory of crime holds that self - control is established in early childhood through several parenting factors: the strength of the parent - to - child emotional bond, adequate supervision by parents, parents ' ability to recognize punishable behavior, and appropriate discipline by parents. Desire is an affectively charged motivation toward a certain object, person, or activity, including but not limited to that associated with pleasure or relief from displeasure. Desires vary in strength and duration. A desire becomes a temptation when it impacts or enters the individual 's area of self - control, if the behavior resulting from the desire conflicts with an individual 's values or other self - regulatory goals. A limitation to research on desire is the issue of individuals desiring different things. New research looked at what people desire in real world settings. Over one week, 7,827 self - reports of desires were collected and indicated significant differences in desire frequency and strength, degree of conflict between desires and other goals, and the likelihood of resisting desire and success of the resistance. The most common and strongly experienced desires are those related to bodily needs like eating, drinking, and sleeping. This study has many implications related to self - control and the everyday things that interfere with people 's ability to stay on task. This is a big reason why self - control is considered to be a public speaker 's worst nightmare. Desires that conflict with overarching goals or values are known as temptations. Self - control dilemmas occur when long - term goals and values clash with short - term temptations. Counteractive Self - Control Theory states that when presented with such a dilemma, we lessen the significance of the instant rewards while momentarily increasing the importance of our overall values. When asked to rate the perceived appeal of different snacks before making a decision, people valued health bars over chocolate bars. However, when asked to do the rankings after having chosen a snack, there was no significant difference in appeal.
Further, when college students completed a questionnaire prior to their course registration deadline, they ranked leisure activities as less important and enjoyable than when they filled out the survey after the deadline passed. The stronger and more available the temptation is, the harsher the devaluation will be. One of the most common self - control dilemmas involves the desire for unhealthy or unneeded food consumption versus the desire to maintain long - term health. An indication of unneeded food consumption could also be over-expenditure on certain types of consumption, such as eating away from home; not knowing how much to spend, or overspending one 's budget on eating out, can be a symptom of a lack of self - control. Experiment participants rated a new snack as significantly less healthy when it was described as very tasty compared to when they heard it was just slightly tasty. Without knowing anything else about a food, the mere suggestion of good taste triggered counteractive self - control and prompted them to devalue the temptation in the name of health. Further, when presented with the strong temptation of one large bowl of chips, participants both perceived the chips to be higher in calories and ate less of them than did participants who faced the weak temptation of three smaller chip bowls, even though both conditions represented the same amount of chips overall. Weak temptations are falsely perceived to be less unhealthy, so self - control is not triggered and desirable actions are more often engaged in, supporting the counteractive self - control theory. Weak temptations present more of a challenge to overcome than strong temptations, because they appear less likely to compromise long - term values. The decrease in an individual 's liking of and desire for a substance following repeated consumption of that substance is known as satiation. Satiation rates when eating depend on interactions of trait self - control and the healthiness of the food. After eating equal amounts of either clearly healthy (raisins and peanuts) or unhealthy (M&Ms and Skittles) snack foods, people who scored higher on trait self - control tests reported feeling significantly less desire to eat more of the unhealthy foods than they did the healthy foods. Those with low trait self - control satiated at the same pace regardless of health value. Further, when reading a description emphasizing the sweet flavor of their snack, participants with higher trait self - control reported a decrease in desire faster than they did after hearing a description of the healthy benefits of their snack. Once again, those with low self - control satiated at the same rate regardless of health condition. Perceived unhealthiness of the food alone, regardless of actual health level, relates to faster satiation, but only for people with high trait self - control. Thinking characterized by high - level construals, in which individuals "are obliged to infer additional details of content, context, or meaning in the actions and outcomes that unfold around them '', views goals and values in a global, abstract sense, whereas low - level construals emphasize concrete, definitive ideas and categorizations. Different construal levels determine our activation of self - control in response to temptations. One technique for inducing high - level construals is asking an individual a series of "why? '' questions that will lead to increasingly abstracted responses, whereas low - level construals are induced by "how?
'' questions leading to increasingly concrete answers. When taking an Implicit Association Test, people with induced high - level construals are significantly faster at associating temptations (such as candy bars) with "bad, '' and healthy choices (such as apples) with "good '' than those in the low - level condition. Further, higher - level construals also show a significantly increased likelihood of choosing an apple as a snack over a candy bar. Without any conscious or active self - control efforts, temptations can be dampened by merely inducing high - level construals. It is suggested that the abstraction of high - level construals reminds people of their overall, lifelong values, such as a healthy lifestyle, which deemphasizes the current tempting situation. A positive correlation between linguistic capability and self - control has been inferred from experiments with common chimpanzees. Human self - control research is typically modeled by using a token economy system. A token economy system is a behavioral program in which individuals in a group can earn tokens for a variety of desirable behaviors and can cash in the tokens for various backup, positive reinforcers. The difference in research methodologies -- humans using tokens or conditioned reinforcers versus non-humans using sub-primary forces -- suggested procedural artifacts as a possible suspect. One aspect of these procedural differences was the delay to the exchange period (Hyten et al. 1994). Non-human subjects can, and most likely would, access their reinforcement immediately. The human subjects had to wait for an "exchange period '' in which they could exchange their tokens for money, usually at the end of the experiment. When this was done with non-human subjects, in the form of pigeons, they responded much like humans, in that males showed much less control than females (Jackson & Hackenberg 1996). Logue (1995), who is discussed more below, points out that in her study done on self - control it was male children who responded with less self - control than female children. She then states that in adulthood, for the most part, the sexes equalize in their ability to exhibit self - control. This could imply a human 's ability to exert more self - control as they mature and become aware of the consequences associated with impulsivity. This suggestion is further examined below. Most of the research in the field of self - control assumes that self - control is in general better than impulsiveness. As a result, almost all research done on this topic is from this standpoint, and very rarely is impulsiveness the more adaptive response in experimental design. Self - control is a measurable variable in humans. In the worst circumstances, people with high self - control and resilience have the best chance of defying the odds they face, which could be poverty, bad schooling, unsafe communities, and so on. Those at a disadvantage with high self - control go on to higher education and professional jobs, but this seems to have a negative effect on their health. When looking at people who come from advantaged backgrounds with high self - control, we see a different phenomenon happening. Those who come from an advantaged background tend to be high - achieving, and with their achievement comes good health.
The psychological phenomenon known as "John Henryism '' posits that when goal - oriented, success - minded people strive ceaselessly in the absence of adequate support and resources, they can -- like the mighty 19th - century folk legend who fell dead of an aneurysm after besting a steam - powered drill in a railroad - spike - driving competition -- work themselves to death, or at least toward it. In the 1980s, Sherman James, a socio - epidemiologist from North Carolina, found that black Americans in the state suffered disproportionately from heart disease and strokes. He too landed on "John Henryism '' as the cause of this phenomenon. More recently, some in the field of developmental psychology have begun to think of self - control in a more complicated way that takes into account that sometimes impulsiveness is the more adaptive response. In their view, a normal individual should have the capacity to be either impulsive or controlled depending on which is the most adaptive. However, this is a recent shift in paradigm and there is little research conducted along these lines. B.F. Skinner 's Science and Human Behavior provides a survey of nine categories of self - control methods. One such method is the manipulation of the environment to make some responses easier to physically execute and others more difficult. This can be referred to as physical guidance, which is the application of physical contact to induce an individual to go through the motions of a desired behavior. This concept can also be referred to as a physical prompt. Clapping one 's hand over one 's own mouth, placing one 's hand in one 's pocket to prevent fidgeting, and using a ' bridge ' hand position to steady a pool shot all represent physical methods to affect behavior. Manipulating the occasion for behavior may change behavior as well. Removing distractions that induce undesired actions or adding a prompt to induce desired ones are examples; hiding temptation and reminders are two more. The need to hide temptation is a result of its effect on the mind. A common theme among studies of desire is an investigation of the underlying cognitive processes of a craving for an addictive substance, such as nicotine or alcohol. In order to better understand the cognitive processes involved, the Elaborated Intrusion (EI) theory of craving was developed. According to the EI theory, craving persists because individuals develop mental images of the coveted substance that are instantly pleasurable, but which also increase their awareness of deficit. The result is a vicious circle of desire, imagery, and preparation to satisfy the desire. This quickly escalates into greater expression of the imagery that incorporates working memory, interferes with performance on simultaneous cognitive tasks, and strengthens the emotional response. Essentially, the mind is consumed by the craving for a desired substance, and this craving in turn interrupts any concurrent cognitive tasks. A craving for nicotine or alcohol is an extreme case, but the EI theory nevertheless holds true for more ordinary motivations and desires. Deprivation is the time in which an individual does not receive a reinforcer, while satiation occurs when an individual has received a reinforcer to such a degree that it will temporarily have no reinforcing power over them. If we deprive ourselves of a stimulus, the value of that reinforcement increases.
For example, if an individual has been deprived of food, they may go to extreme measures to get that food, such as stealing. On the other hand, when we have an exceeding amount of a reinforcer, that reinforcement loses its value; if an individual eats a large meal, they may no longer be enticed by the reinforcement of dessert. One may manipulate one 's own behavior by affecting states of deprivation or satiation. By skipping a meal before a free dinner, one may more effectively capitalize on the free meal. By eating a healthy snack beforehand, the temptation to eat free "junk food '' is reduced. Also noteworthy is the importance of imagery in desire cognition during a state of deprivation. A study conducted on this topic involved smokers divided into two groups. The control group was instructed to continue smoking as usual until they arrived at the laboratory, where they were then asked to read a multisensory neutral script, meaning it was not related to a craving for nicotine. The experimental group, however, was asked to abstain from smoking before coming to the laboratory in order to induce craving and upon their arrival were told to read a multisensory urge - induction script intended to intensify their nicotine craving. Once the participants finished reading the script they rated their craving for cigarettes. Next they formulated visual or auditory images when prompted with verbal cues such as "a game of tennis '' or "a telephone ringing. '' After this task the participants again rated their craving for cigarettes. The study found that the craving experienced by the abstaining smokers was decreased to the control group 's level by visual imagery but not by auditory imagery alone. That mental imagery served to reduce the level of craving in smokers illustrates that it can be used as a method of self - control during times of deprivation. We manipulate emotional conditions in order to induce certain ways of responding. One example of this can be seen in theatre: actors often elicit tears from painful memories if it is necessary for the character they are playing. The idea is similar to reading a letter or a book, listening to music, or watching a movie in order to get in the "mood '' and be in the proper state of mind for a certain event or function. Additionally, treating an activity as "work '' or "fun '' can have an effect on the difficulty of self - control. In order to analyze the possible effects of the cognitive transformation of an object on desire, a study was conducted based on a well - known German chocolate product. The study involved 71 undergraduate students, all of whom were familiar with the chocolate product. The participants were randomly assigned to one of three groups: the control condition, the consummatory condition, and the nonconsummatory transformation condition. Each group was then given three minutes to complete their assigned task. The participants in the control condition were told to read a neutral article about a location in South America that was devoid of any words associated with food consumption. Those in the consummatory condition were instructed to imagine as clearly as possible how consuming the chocolate would taste and feel. The participants in the nonconsummatory transformation condition were told to imagine as clearly as possible odd settings or uses for the chocolate. Next, all the participants underwent a manipulation task that required them to rate their mood on a five - point scale in response to ten items they viewed.
Following the manipulation task, participants completed automatic evaluations that measured their reaction time to six different images of the chocolate, each of which was paired with a positive or a negative stimulus. The results showed that the participants instructed to imagine the consumption of the chocolate demonstrated higher automatic evaluations toward the chocolate than did the participants told to imagine odd settings or uses for the chocolate, and participants in the control condition fell in - between the two experimental conditions. This indicates that the manner in which one considers an item influences how much it is desired. Aversive stimulation is used as a means of increasing or decreasing the likelihood of target behavior. Similar to all methods of self - management, there is a controlling response and a controlled response. An aversive stimulus is sometimes referred to as a punisher or simply an aversive. Closely related to the idea of a punisher is the concept of punishment. Punishment is the idea that if, in a given situation, someone does something that is immediately followed by a punisher, then that person is less likely to do the same thing again when she or he next encounters a similar situation. An example of this can be seen when a teenager stays out past curfew. After staying out past curfew, the teenager 's parents ground the teenager. Because the teenager has been punished for his or her behavior, he or she is less likely to stay out past curfew again, thus decreasing the likelihood of the target behavior. Certain types of drugs affect self - control. Stimulants, such as methylphenidate and amphetamine, improve inhibitory control in general and are used to treat ADHD. Similarly, depressants, such as alcohol, represent barriers to self - control through sluggishness, slower brain function, poor concentration, depression and disorientation. Operant conditioning, sometimes referred to as Skinnerian conditioning, is the process of strengthening a behavior by reinforcing it or weakening it by punishing it. By continually strengthening and reinforcing a behavior, or weakening and punishing a behavior, an association as well as a consequence is made. Similarly, a behavior that is altered by its consequences is known as operant behavior. There are multiple components of operant conditioning; these include reinforcement such as positive reinforcers and negative reinforcers. A positive reinforcer is a stimulus which, when presented immediately following a behavior, causes the behavior to increase in frequency. A negative reinforcer is a stimulus whose removal immediately after a response causes the response to be strengthened or to increase in frequency. Additionally, components of punishment are also incorporated, such as positive punishment and negative punishment. Examples of operant conditioning can be seen every day. When a student tells a joke to one of his peers and they all laugh at this joke, this student is more likely to continue this behavior of telling jokes because his joke was reinforced by the sound of their laughing. However, if a peer tells the student his joke is "silly '' or "stupid '', he will be punished for telling the joke and his likelihood of telling another joke is greatly decreased. Another example of operant conditioning can be seen in the form of quitting a habit such as smoking: to quit by this technique, the smoker must display self - discipline and stop giving in to the addiction.
Self - punishment of responses would include the arranging of punishment contingent upon undesired responses. This might be seen in the behavior of whipping oneself, which some monks and religious persons do. This is different from aversive stimulation in that, for example, the alarm clock generates escape from the alarm, while self - punishment presents stimulation after the fact to reduce the probability of future behavior. Punishment is more like conformity than self - control, because with self - control there needs to be an internal drive, not an external source of punishment that makes the person want to do something. There is an external locus of control, which is similar to determinism, and there is an internal locus of control, which is similar to free will. With a learning system based on punishment, the person does not make their decision based upon what they want; rather, they base it on external factors. When negative reinforcement is used, one is more likely to influence a person 's internal decisions and allow them to make the choice on their own, whereas with punishment the person will make their decisions based upon the consequences and not exert self - control. The best way to learn self - control is with free will, where people are able to perceive they are making their own choices. Skinner noted that various philosophies and religions exemplified this principle by instructing believers to love their enemies. When we are filled with rage or hatred we might control ourselves by ' doing something else ' or, more specifically, something that is incompatible with our response. Functional imaging of the brain has shown that self - control is correlated with an area in the dorsolateral prefrontal cortex (dlPFC), a part of the frontal lobe. This area is distinct from those involved in generating intentional actions, attention to intentions, or selecting between alternatives. This control occurs through the top - down inhibition of premotor cortex. There is some debate about the mechanism of self - control and how it emerges. Traditionally, researchers believed that the bottom - up approach guided self - control behavior. The more time a person spends thinking about a rewarding stimulus, the more likely he or she will experience a desire for it. Information that is most important gains control of working memory, and can then be processed through a top - down mechanism. Increasing evidence suggests that top - down processing plays a strong role in self - control. Specifically, top - down processing can actually regulate bottom - up attentional mechanisms. To demonstrate this, researchers studied working memory and distraction by presenting participants with neutral or negative pictures and then a math problem or no task. They found that participants reported less negative moods after solving the math problem compared to the no - task group, which was due to an influence on working memory capacity. There are many researchers working on identifying the brain areas involved in the exertion of self - control; many different areas are known to be involved. In relation to self - control mechanisms, the reward centers in the brain compare external stimuli against internal need states and a person 's learning history. At the biological level, a loss of control is thought to be caused by a malfunctioning of a decision mechanism. A mechanistic explanation of self - control is still in its infancy.
However, there is strong demand for knowledge about these mechanisms, because such knowledge would have tremendous clinical application. Much of the work on how the brain reaches decisions is based on evidence from perceptual learning. Many of the tasks that subjects are tested on are not tasks typically associated with self - control, but are more general decision tasks. Nevertheless, the research on self - control is informed by more general research on decision tasks. Sources of evidence on the neural mechanisms of self - control include fMRI studies on human subjects, neural recordings in animals, lesion studies in humans and animals, and clinical behavioral studies of humans with self - control disorders. There is broad agreement that the cortex is involved in self - control. The details of the final model have yet to be worked out. However, there are some enticing findings that suggest a mechanistic account of self - control could prove to have tremendous explanatory value. What follows is a survey of some of the important recent literature on the brain regions involved in self - control. The prefrontal cortex is located in the most anterior portion of the frontal lobe in the brain. It forms a larger portion of the cortex in humans than in other species. The dendrites in the prefrontal cortex contain up to 16 times as many dendritic spines as neurons in other cortical areas. Due to this, the prefrontal cortex integrates a large amount of information. The orbitofrontal cortex cells are important factors for self - control. If an individual has the choice between an immediate reward and a more valuable reward which they can receive later, they would most likely try to control the impulse to take the immediate reward. If an individual has a damaged orbitofrontal cortex, this impulse control will most likely not be as strong, and they may be more likely to take the immediate reinforcement. Additionally, we see a lack of impulse control in children because the prefrontal cortex develops slowly. Todd A. Hare et al. use functional MRI techniques to show that the ventromedial prefrontal cortex (vmPFC) and the dorsolateral prefrontal cortex (DLPFC) are crucially involved in the exertion of self - control. They found that activity in the vmPFC was correlated with goal values and that the exertion of self - control required the modulation of the vmPFC by the DLPFC. The study found that a lack of self - control was strongly correlated with reduced activity in the DLPFC. Hare 's study is especially relevant to the self - control literature because it suggests that an important cause of poor self - control is a defective DLPFC. Alexandra W. Logue is interested in how outcomes change the likelihood of a self - control choice being made. Logue identifies three possible outcome effects: outcome delays, outcome size, and outcome contingencies. The delay of an outcome results in the perception that the outcome is less valuable than an outcome which is more readily achieved. The devaluing of the delayed outcome can cause less self - control. A way to increase self - control in situations of a delayed outcome is to pre-expose the outcome. Pre-exposure reduces the frustrations related to the delay of the outcome. An example of this is signing bonuses. Outcome size deals with the relative, perceived size of possible outcomes. There tends to be a relationship between the value of the incentive and the desired outcome; the larger the desired outcome, the larger the value.
Some factors that decrease value include delay, effort / cost, and uncertainty. The decision tends to be based on the option with the higher value at the time of the decision. Finally, Logue defines the relationship between responses and outcomes as outcome contingencies. Outcome contingencies also impact the degree of self - control that a person exercises. For instance, if a person is able to change his choice after the initial choice is made, the person is far more likely to take the impulsive, rather than self - controlled, choice. Additionally, it is possible for people to make precommitment actions. A precommitment action is an action meant to lead to a self - controlled action at a later period in time. When a person sets an alarm clock, they are making a precommitted response to wake up early in the morning. Hence, that person is more likely to exercise the self - controlled decision to wake up, rather than to fall back in bed for a little more sleep. Cassandra B. Whyte studied locus of control and academic performance and determined that internals tend to achieve at a higher level. Internals may perceive they have options from which to choose, thus facilitating more hopeful decision - making behavior as opposed to dependence on externally determined outcomes that require less commitment, effort, or self - control. Many things affect one 's ability to exert self - control, but it seems that self - control requires sufficient glucose levels in the brain. Exerting self - control depletes glucose. Reduced glucose and poor glucose tolerance (reduced ability to transport glucose to the brain) are correlated with lower performance in tests of self - control, particularly in difficult new situations. Self - control demands that an individual work to overcome thoughts, emotions, and automatic responses / impulses. These strong efforts require higher blood glucose levels. Lower blood glucose levels can lead to unsuccessful self - control abilities. Alcohol causes a decrease of glucose levels in both the brain and the body, and it also has an impairing effect on many forms of self - control. Furthermore, failure of self - control occurs most likely during times of the day when glucose is used least effectively. Self - control thus appears highly susceptible to glucose. An alternative explanation of these findings is that failures of self - control depend on the allocation of glucose, not on a limited supply of glucose. According to this theory, the brain has sufficient resources of glucose and also has the possibility of delivering the glucose, but the personal priorities and motivations of the individual cause the glucose to be allocated to other sites. This theory has not been tested yet. In the 1960s, Walter Mischel tested four - year - old children for self - control in "The Marshmallow Test '': the children were each given a marshmallow and told that they could eat it anytime they wanted, but if they waited 15 minutes, they would receive another marshmallow. Follow - up studies showed that the results correlated well with these children 's success levels in later life. A strategy used in the marshmallow test was the focus on "hot '' and "cool '' features of an object. The children were encouraged to think about the marshmallow 's "cool features '' such as its shape and texture, possibly comparing it to a cotton ball or a cloud. The "hot features '' of the marshmallow would be its sweet, sticky tastiness. These hot features make it more difficult to delay gratification.
By focusing on the cool features, the mind is diverted from the appealing aspects of the marshmallow, and self - control is more attainable. Years later, Dr. Mischel reached out to the participants of his study, who were by then in their 40s. He found that those who showed less self - control by taking the single marshmallow in the initial study were more likely to develop problems with relationships, stress, and drug abuse later in life. Dr. Mischel carried out the experiment again with the same participants in order to see which parts of the brain were active during the process of self - control. The participants received MRI scans to show brain activity. The results showed that those who exhibited lower levels of self - control had higher brain activity in the ventral striatum, the area that deals with positive rewards. Reviews concluded that self - control is correlated with various positive life outcomes, such as happiness, adjustment and other positive psychological factors. Self - control was also negatively correlated with sociotropy, which in turn is correlated with depression. There is conflicting evidence about whether willpower is a finite, infinite or self - reinforcing resource, a phenomenon sometimes termed ego depletion or reverse ego depletion. However, the belief that willpower is infinite or self - reinforcing is associated with greater willpower and voluntary executive function. Exerting self - control through the executive functions in decision making is held in some theories to deplete one 's ability to do so in the future. Ego depletion is the view that high self - control requires energy and focus, and over an extended period of self - control demands, this self - control can lessen. There are ways to counteract this ego depletion. One way is through rest and relaxation from these high demands. Additionally, training self - control with certain behaviors can also help to strengthen an individual 's self - control. This seems to be particularly effective in those who would otherwise have difficulty controlling their impulses in the domain of interest. Another way to overcome unwanted desires is to change the method with which we approach desire. One study in particular analyzed the impact of approaching a temptation by defining it in abstract, general terms as opposed to specific, concrete details. For the purposes of the study, approaching a situation using general terms was defined as the high - level construal condition whereas using specific details was termed the low - level construal condition. The study involved 42 college students who were randomly assigned to either the high - level or low - level construal condition. The participants were then presented with a packet that described five scenarios, each one involving a unique self - control conflict. For those participants in the high - level construal condition the scenarios were described using only general terms and for those in the low - level construal condition the scenarios were described using only specific details. After imagining themselves in each scenario, the participants were asked to indicate how bad they would feel if they indulged in the temptation, using a six - point scale ranging from "not at all bad '' to "very bad. '' The data showed that participants in the high - level construal condition reported greater negative evaluations of temptations than did participants in the low - level construal conditions.
This implies that individuals using high - level construals are better able to place a temptation in context and properly evaluate its long - term impact, and therefore are more likely to maintain self - control. Kelly McGonigal defines willpower as "the ability to do what you really want to do when part of you really does n't want to do it. '' It consists of three competing elements: 1) I will -- the ability to do what you need to do; 2) I wo n't -- the other side of self - control, the ability to resist temptation; and 3) I want -- your true want, the ability to remember the big picture of your life. Willpower is a resource that gets depleted, particularly when you are run down or hungry. However, you may increase your capacity for willpower by engaging in activities such as mindfulness, meditation and exercise and / or by ensuring good nutrition and adequate sleep.
who are the singers of we are the world
We Are the World - wikipedia "We Are the World '' is a song and charity single originally recorded by the supergroup United Support of Artists (USA) for Africa in 1985. It was written by Michael Jackson and Lionel Richie (with arrangements by Michael Omartian) and produced by Quincy Jones for the album We Are the World. With sales in excess of 20 million copies, it is one of the fewer than 30 all - time physical singles to have sold at least 10 million copies worldwide. Following Band Aid 's 1984 "Do They Know It 's Christmas? '' project in the United Kingdom, an idea for the creation of an American benefit single for African famine relief came from activist Harry Belafonte, who, along with fundraiser Ken Kragen, was instrumental in bringing the vision to reality. Several musicians were contacted by the pair, before Jackson and Richie were assigned the task of writing the song. The duo completed the writing of "We Are the World '' seven weeks after the release of "Do They Know It 's Christmas? '', and one night before the song 's first recording session, on January 21, 1985. The historic event brought together some of the most famous artists in the music industry at the time. The song was released on March 7, 1985, as the only single from the album. A worldwide commercial success, it topped music charts throughout the world and became the fastest - selling American pop single in history. The first ever single to be certified multi-platinum, "We Are the World '' received a Quadruple Platinum certification by the Recording Industry Association of America. Awarded numerous honors -- including three Grammy Awards, one American Music Award, and a People 's Choice Award -- the song was promoted with a critically received music video, a home video, a special edition magazine, a simulcast, and several books, posters, and shirts. The promotion and merchandise aided the success of "We Are the World '' and raised over $63 million (equivalent to $138 million today) for humanitarian aid in Africa and the US. Following the devastation caused by the magnitude 7.0 M earthquake in Haiti on January 12, 2010, a remake of the song by another all - star cast of singers was recorded on February 1, 2010. Entitled "We Are the World 25 for Haiti '', it was released as a single on February 12, 2010, and proceeds from the record aided survivors in the impoverished country. Before the writing of "We Are the World '', American entertainer and social activist Harry Belafonte had sought for some time to have a song recorded by the most famous artists in the music industry at the time. He planned to have the proceeds donated to a new organization called United Support of Artists for Africa (USA for Africa). The non-profit foundation would then feed and relieve starving people in Africa, specifically Ethiopia, where around one million people died during the country 's 1983 -- 1985 famine. The idea followed Band Aid 's "Do They Know It 's Christmas? '' project in the UK, which Belafonte had heard about. In the activist 's plans, money would also be set aside to help eliminate hunger in the United States of America. Entertainment manager and fellow fundraiser Ken Kragen was contacted by Belafonte, who asked for singers Lionel Richie and Kenny Rogers -- Kragen 's clients -- to participate in Belafonte 's musical endeavor. Kragen and the two musicians agreed to help with Belafonte 's mission, and in turn, enlisted the cooperation of Stevie Wonder, to add more "name value '' to their project. 
Quincy Jones was drafted to co-produce the song, taking time out from his work on The Color Purple. Jones also telephoned Michael Jackson, who had just released the commercially successful Thriller album and had concluded a tour with his brothers. Jackson revealed to Richie that he not only wanted to sing the song, but to participate in its writing as well. To begin with, "We Are the World '' was to be written by Jackson, Richie, and Wonder. As Wonder had limited time to work on the project, Jackson and Richie proceeded to write "We Are the World '' themselves. They began creating the song at Hayvenhurst, the Jackson family home in Encino. For a week, the two spent every night working on lyrics and melodies in the singer 's bedroom. They knew that they wanted a song that would be easy to sing and memorable. The pair wanted to create an anthem. Jackson 's older sister La Toya watched the two work on the song, and later contended that Richie only wrote a few lines for the track. She stated that her younger brother wrote 99 percent of the lyrics, "but he 's never felt it necessary to say that ''. La Toya further commented on the song 's creation in an interview with the American celebrity news magazine People. "I 'd go into the room while they were writing and it would be very quiet, which is odd, since Michael 's usually very cheery when he works. It was very emotional for them. '' Richie had recorded two melodies for "We Are the World '', which Jackson took, adding music and words to the song in the same day. Jackson stated, "I love working quickly. I went ahead without even Lionel knowing, I could n't wait. I went in and came out the same night with the song completed -- drums, piano, strings, and words to the chorus. '' Jackson then presented his demo to Richie and Jones, who were both shocked; they did not expect the pop star to see the structure of the song so quickly. The next meetings between Jackson and Richie were unfruitful; the pair did not produce any additional vocals and got no work done. It was not until the night of January 21, 1985, that Richie and Jackson completed the lyrics and melody of "We Are the World '' within two and a half hours, one night before the song 's first recording session. The first night of recording, January 22, 1985, had tight security on hand, as Richie, Jackson, Wonder, and Jones started work on "We Are the World '' at Kenny Rogers ' Lion Share Recording Studio. The studio, on Beverly Boulevard in California, was filled with musicians, technicians, video crews, retinues, assistants, and organizers as the four musicians entered. To begin the night, a "vocal guide '' of "We Are the World '' was recorded by Richie and Jackson and duplicated on tape for each of the invited performers. The guide was recorded on the sixth take, as Quincy Jones felt that there was too much "thought '' in the previous versions. Following their work on the vocal guide, Jackson and Jones began thinking of alternatives for the line "There 's a chance we 're taking, we 're taking our own lives '': the pair was concerned that the latter part of the line would be considered a reference to suicide. As the group listened to a playback of the chorus, Richie declared that the last part of the line should be changed to "We 're ' saving ' our own lives '', which his fellow musicians agreed with. Producer Jones also suggested altering the former part of the line. "One thing we do n't want to do, especially with this group, is look like we 're patting ourselves on the back. 
So it 's really, ' There 's a choice we 're making. ' '' Around 1: 30 am, the four musicians ended the night by finishing a chorus of melodic vocalizations, including the sound "sha - lum sha - lin - gay ''. Jones told the group that they were not to add anything else to the tape. "If we get too good, someone 's gon na start playing it on the radio, '' he announced. On January 24, 1985, after a day of rest, Jones shipped Richie and Jackson 's vocal guide to all of the artists who would be involved in "We Are the World '' 's recording. Enclosed in the package was a letter from Jones, addressed to "My Fellow Artists '': The cassettes are numbered, and I ca n't express how important it is not to let this material out of your hands. Please do not make copies, and return this cassette the night of the 28th. In the years to come, when your children ask, ' What did mommy and daddy do for the war against world famine? ', you can say proudly, this was your contribution. Ken Kragen chaired a production meeting at a bungalow off Sunset Boulevard on January 25, 1985. There, Kragen and his team discussed where the recording sessions with the supergroup of musicians should take place. He stated, "The single most damaging piece of information is where we 're doing this. If that shows up anywhere, we 've got a chaotic situation that could totally destroy the project. The moment a Prince, a Michael Jackson, a Bob Dylan -- I guarantee you! -- drives up and sees a mob around that studio, he will never come in. '' On the same night, Quincy Jones ' associate producer and vocal arranger, Tom Bahler, was given the task of matching each solo line with the right voice. Bahler stated, "It 's like vocal arranging in a perfect world. '' Jones disagreed, stating that the task was like "putting a watermelon in a Coke bottle ''. The following evening, Lionel Richie held a "choreography '' session at his home, where it was decided who would stand where. The final night of recording was held on January 28, 1985, at A&M Recording Studios in Hollywood. Michael Jackson arrived at 9 pm, earlier than the other artists, to record his solo section and record a vocal chorus by himself. He was subsequently joined in the recording studio by the remaining USA for Africa artists, who included Ray Charles, Billy Joel, Diana Ross, Cyndi Lauper, Bruce Springsteen, and Smokey Robinson. Also in attendance were five of Jackson 's siblings: Jackie, La Toya, Marlon, Randy, and Tito. Many of the participants came straight from an American Music Award ceremony, which had been held that same night. Invited musician Prince, who would have had a part in which he and Michael Jackson sang to each other, did not attend the recording session. The reason given for his absence has varied. One newspaper claimed that the singer did not want to record with other acts. Another report, from the time of "We Are the World '' 's recording, suggested that the musician did not want to partake in the session because organizer Bob Geldof called him a "creep ''. Prince did, however, donate an exclusive track, "4 The Tears In Your Eyes '', for the We Are the World album. In all, more than 45 of America 's top musicians participated in the recording, and another 50 had to be turned away. Upon entering the recording studio, the musicians were greeted by a sign pinned to the door which read, "Please check your egos at the door. 
'' They were also greeted by Stevie Wonder, who proclaimed that if the recording was not completed in one take, he and Ray Charles, two blind men, would drive everybody home. Each of the performers took their position at around 10: 30 pm and began to sing. Several hours passed before Stevie Wonder announced that he would like to substitute a line in Swahili for the "sha - lum sha - lin - gay '' sound. At this point, Waylon Jennings left the recording studio for a short time when it was suggested by some that the song be sung in Swahili. A heated debate ensued, in which several artists also rejected the suggestion. The "sha - lum sha - lin - gay '' sound ran into opposition as well and was subsequently removed from the song. Jennings returned to the studio and participated in the recording, which bears his name in the end credits. The participants eventually decided to sing something meaningful in English. They chose to sing the new line "One world, Our children '', which most of the participants enjoyed. In the early hours of the morning, two Ethiopian women, guests of Stevie Wonder, were brought into the recording studio -- it had been decided that a portion of the proceeds raised would be used to bring aid to those affected by the recent famine in Ethiopia. They thanked the singers on behalf of their country, bringing several artists to tears, before being led from the room. Wonder attempted to lighten the mood, by joking that the recording session gave him a chance to "see '' fellow blind musician Ray Charles. "We just sort of bumped into each other! '' The solo parts of the song were recorded without any problems. The final version of "We Are the World '' was completed at 8 am. "We Are the World '' is sung from a first person viewpoint, allowing the audience to "internalize '' the message by singing the word we together. It has been described as "an appeal to human compassion ''. The first lines in the song 's repetitive chorus proclaim, "We are the world, we are the children, we are the ones who make a brighter day, so let 's start giving ''. "We Are the World '' opens with Lionel Richie, Stevie Wonder, Paul Simon, Kenny Rogers, James Ingram, Tina Turner, and Billy Joel singing the first verse. Michael Jackson and Diana Ross follow, completing the first chorus together. Dionne Warwick, Willie Nelson, and Al Jarreau sing the second verse, before Bruce Springsteen, Kenny Loggins, Steve Perry, and Daryl Hall go through the second chorus. Co-writer Jackson, Huey Lewis, Cyndi Lauper, and Kim Carnes follow with the song 's bridge. This structuring of the song is said to "create a sense of continuous surprise and emotional buildup ''. "We Are the World '' concludes with Bob Dylan and Ray Charles singing a full chorus, Wonder and Springsteen duetting, and ad libs from Charles and Ingram. On March 8, 1985, "We Are the World '' was released as a single, in both 7 '' and 12 '' format. The song was the only one released from the We Are the World album and became a chart success around the world. In the US, it was a number one hit on the R&B singles chart, the Hot Adult Contemporary Tracks chart and the Billboard Hot 100, where it remained for a month. The single had initially debuted at number 21 on the Hot 100, the highest entry since Michael Jackson 's "Thriller '' entered the charts at number 20 the year before. It took four weeks for the song to claim the number one spot -- half the time a single would normally have taken to reach its charting peak. 
On the Hot 100, the song moved from 21 to 5 to 2 and then number 1. "We Are the World '' might have reached the top of the Hot 100 chart sooner, if it were not for the success of Phil Collins ' "One More Night '', which received a significant level of support from both pop and rock listeners. "We Are the World '' also entered Billboard 's Top Rock Tracks and Hot Country Singles charts, where it peaked at numbers 27 and 76 respectively. The song became the first single since The Beatles ' "Let It Be '' to enter Billboard 's Top 5 within two weeks of release. Outside of the US, the single reached number one in Australia, France, Ireland, Italy, New Zealand, The Netherlands, Norway, Sweden, Switzerland and the UK. The song peaked at number 2 in only two countries: Germany and Austria. The single was also a commercial success: the initial shipment of 800,000 "We Are the World '' records sold out within three days of release. The record became the fastest - selling American pop single in history. At one Tower Records store on Sunset Boulevard in West Hollywood, 1,000 copies of the song were sold in two days. Store worker Richard Petitpas commented, "A number one single sells about 100 to 125 copies a week. This is absolutely unheard of. '' By the end of 1985, "We Are the World '' had become the best selling single of the year. Five years later it was revealed that the song had become the biggest single of the 1980s. "We Are the World '' was eventually cited as the biggest selling single in both US and pop music history. The song became the first - ever single to be certified multi-platinum; it received a 4 × certification by the Recording Industry Association of America. The estimated global sales of "We Are the World '' are said to be 20 million. Despite the song 's commercial success, "We Are the World '' received mixed reviews from journalists, music critics and the public following its release. American journalist Greil Marcus felt that the song sounded like a Pepsi jingle. He wrote, "... the constant repetition of ' There 's a choice we 're making ' conflates with Pepsi 's trademarked ' The choice of a new generation ' in a way that, on the part of Pepsi - contracted song writers Michael Jackson and Lionel Richie, is certainly not intentional, and even more certainly beyond the realm of serendipity. '' Marcus added, "In the realm of contextualization, ' We Are the World ' says less about Ethiopia than it does about Pepsi -- and the true result will likely be less that certain Ethiopian individuals will live, or anyway live a bit longer than they otherwise would have, than that Pepsi will get the catch phrase of its advertising campaign sung for free by Ray Charles, Stevie Wonder, Bruce Springsteen, and all the rest. '' Author Reebee Garofalo agreed, and expressed the opinion that the line "We 're saving our own lives '' was a "distasteful element of self - indulgence ''. He asserted that the artists of USA for Africa were proclaiming "their own salvation for singing about an issue they will never experience on behalf of a people most of them will never encounter ''. In contrast, Stephen Holden of The New York Times praised the phrase "There 's a choice we 're making, We 're saving our own lives ''. He commented that the line assumed "an extra emotional dimension when sung by people with superstar mystiques ''. Holden expressed that the song was "an artistic triumph that transcends its official nature ''. 
He noted that unlike Band Aid 's "Do They Know It 's Christmas '', the vocals on "We Are the World '' were "artfully interwoven '' and emphasized the individuality of each singer. Holden concluded that "We Are the World '' was "a simple, eloquent ballad '' and a "fully - realized pop statement that would sound outstanding even if it were n't recorded by stars ''. The song proved popular with both young and old listeners. The public enjoyed hearing a supergroup of musicians singing together on one track, and felt satisfied in buying the record, knowing that the money would go towards a good cause. People reported they bought more than one copy of the single, some buying up to five copies of the record. One mother from Columbia, Missouri purchased two copies of "We Are the World '', stating, "The record is excellent whether it 's for a cause or not. It 's fun trying to identify the different artists. It was a good feeling knowing that I was helping someone in need. '' According to music critic and Bruce Springsteen biographer Dave Marsh, "We Are the World '' was not widely accepted within the rock music community. The author revealed that the song was "despised '' for what it was not: "a rock record, a critique of the political policies that created the famine, a way of finding out how and why famines occur, an all - inclusive representation of the entire worldwide spectrum of post-Presley popular music ''. Marsh said he felt some of the criticisms were right, while others were silly. He claimed that despite the sentimentality of the song, "We Are the World '' was a large - scale pop event with serious political overtones. "We Are the World '' was recognized with several awards following its release. At the 1986 Grammy Awards, the song and its accompanying music video won four awards: Record of the Year, Song of the Year, Best Pop Performance by a Duo or Group with Vocal and Best Music Video, Short Form. The music video was awarded two honors at the 1985 MTV Video Music Awards. It collected the awards for Best Group Video and Viewer 's Choice. The People 's Choice Awards recognized "We Are the World '' with the Favorite New Song award in 1986. In the same year, the American Music Awards named "We Are the World '' "Song of the Year '', and honored organizer Harry Belafonte with the Award of Appreciation. Collecting his award, Belafonte thanked Ken Kragen, Quincy Jones, and "the two artists who, without their great gift would not have inspired us in quite the same way as we were inspired, Mr. Lionel Richie and Mr. Michael Jackson ''. Following the speech, the majority of USA for Africa reunited on stage, closing the ceremony with "We Are the World ''. "We Are the World '' was promoted with a music video, a video cassette, and several other items made available to the public, including books, posters, shirts and buttons. All proceeds from the sale of official USA for Africa merchandise went directly to the famine relief fund. All of the merchandise sold well; the video cassette -- entitled We Are the World: The Video Event -- documented the making of the song, and became the ninth best - selling home video of 1985. All of the video elements were produced by Howard G. Malley and Craig B. Golin along with April Lee Grebb as the production supervisor. The music video showed the recording of "We Are the World '', and drew criticism from some. Michael Jackson joked before filming, "People will know it 's me as soon as they see the socks.
Try taking footage of Bruce Springsteen 's socks and see if anyone knows who they belong to. '' Jackson was also criticized for filming and recording his solo piece privately, away from the other artists. The song was also promoted with a special edition of the American magazine Life. The publication had been the only media outlet permitted inside A&M Recording Studios on the night of January 28, 1985. All other press organizations were barred from reporting the events leading up to and during "We Are the World '' 's recording. Life ran a cover story of the recording session in its April 1985 edition of the monthly magazine. Seven members of USA for Africa were pictured on the cover: Bob Dylan, Bruce Springsteen, Cyndi Lauper, Lionel Richie, Michael Jackson, Tina Turner and Willie Nelson. Inside the magazine were photographs of the "We Are the World '' participants working and taking breaks. "We Are the World '' received worldwide radio coverage in the form of an international simultaneous broadcast later that year. Upon spinning the song on their local stations, Georgia radio disc jockeys, Bob Wolf and Don Briscar came up with the idea for a worldwide simulcast. They called hundreds of radio and satellite stations asking them to participate. On the morning of April 5, 1985 (Good Friday of that year) at 10: 25 am, over 8000 radio stations simultaneously broadcast the song around the world. As the song was broadcast, hundreds of people sang along on the steps of St. Patrick 's Cathedral in New York. The simultaneous radio broadcast of "We Are the World '' was repeated again the following Good Friday. "We Are the World '' gained further promotion and coverage on May 25, 1986, when it was played during a major benefit event held throughout the US. Hands Across America -- USA for Africa 's follow - up project -- was an event in which millions of people formed a human chain across the US. The event was held to draw attention to hunger and homelessness in the United States. "We Are the World '' 's co-writer, Michael Jackson, had wanted his song to be the official theme for the event. The other board members of USA for Africa outvoted the singer, and it was instead decided that a new song would be created and released for the event, titled "Hands Across America ''. When released, the new song did not achieve the level of success that "We Are the World '' did, and the decision to use it as the official theme for the event led to Jackson -- who co-owned the publishing rights to "We Are the World '' -- resigning from the board of directors of USA for Africa. Four months after the release of "We Are the World '', USA for Africa had taken in almost $10.8 million (equivalent to $24 million today). The majority of the money came from record sales within the US. Members of the public also donated money -- almost $1.3 million within the same time period. In May 1985, USA for Africa officials estimated that they had sold between $45 million and $47 million worth of official merchandise around the world. Organizer Ken Kragen announced that they would not be distributing all of the money at once. Instead, he revealed that the foundation would be looking into finding a long - term solution for Africa 's problems. "We could go out and spend it all in one shot. Maybe we 'd save some lives in the short term but it would be like putting a Band - Aid over a serious wound. 
'' Kragen noted that experts had predicted that it would take at least 10 to 20 years to make a slight difference to Africa 's long - term problems. In June 1985, the first USA for Africa cargo jet carrying food, medicine, and clothing departed for Ethiopia and Sudan. It stopped en route in New York, where 15,000 T - shirts were added to the cargo. Included in the supplies were high - protein biscuits, high - protein vitamins, medicine, tents, blankets and refrigeration equipment. Harry Belafonte, representing the USA for Africa musicians, visited Sudan in the same month. The trip was his last stop on a four - nation tour of Africa. Tanzanian Prime Minister Salim Ahmed Salim greeted and praised Belafonte, telling him, "I personally and the people of Tanzania are moved by this tremendous example of human solidarity. '' One year after the release of "We Are the World '', organizers noted that $44.5 million had been raised for USA for Africa 's humanitarian fund. They stated that they were confident that they would reach an initial set target of $50 million (equivalent to $109 million in 2017). By October 1986, it was revealed that their $50 million target had been met and exceeded; CBS Records gave USA for Africa a check for $2.5 million, bringing the total amount of money to $51.2 million. USA for Africa 's Hands Across America event had also raised a significant amount of money -- approximately $24.5 million for the hungry in the US. Since its release, "We Are the World '' has raised over $63 million (equivalent to $138 million today) for humanitarian causes. Ninety percent of the money was pledged to African relief, both long and short term. The long - term initiative included efforts in birth control and food production. The remaining 10 percent of funds was earmarked for domestic hunger and homeless programs in the US. From the African fund, over 70 recovery and development projects were launched in seven African nations. Such projects included aid in agriculture, fishing, water management, manufacturing and reforestation. Training programs were also developed in the African countries of Mozambique, Senegal, Chad, Mauritania, Burkina Faso, and Mali. Elias Kifle Maraim Beyene, a famine survivor from Ethiopia, asked after Michael Jackson 's death about his memory of the singer, recalled: I wo n't ever forget Michael Jackson because his contribution to the song We are the World had a very significant effect on my life. I am 50 now but 25 years ago I was living in Addis Ababa, Ethiopia, which at that time was suffering from a long drought and famine. It was a terrible situation. Lots of people became sick and many more died. Around one million people in all were killed by the famine. In 1984 Michael Jackson, along with a number of other leading musicians, made the song We are the World to raise money for Africa. We received a lot of aid from the world and I was one of those who directly benefitted from it. The wheat flour that was distributed to the famine victims was different to the usual cereal we bought at the market. We baked a special bread from it. The local people named the bread after the great artist and it became known as Michael Bread. It was soft and delicious. When you have been through such hard times you never forget events like this. If you speak to anyone who was in Addis Ababa at that time they will all know what Michael Bread is and I know I will remember it for the rest of my life.
"We Are the World '' has been performed live by members of USA for Africa on several occasions both together and individually. One of the earliest such performances came in 1985, during the rock music concert Live Aid, which ended with more than 100 musicians singing the song on stage. Harry Belafonte and Lionel Richie made surprise appearances for the live rendition of the song. Michael Jackson would have joined the artists, but was "working around the clock in the studio on a project that he 's made a major commitment to '', according to his press agent, Norman Winter. An inaugural celebration was held for US President - elect Bill Clinton in January 1993. The event was staged by Clinton 's Hollywood friends at the Lincoln Memorial and drew hundreds of thousands of people. Aretha Franklin, LL Cool J, Michael Bolton and Tony Bennett were among some of the musicians in attendance. Said Jones, "I 've never seen so many great performers come together with so much love and selflessness. '' The celebration included a performance of "We Are the World '', which involved Clinton, his daughter Chelsea, and his wife Hillary singing the song along with USA for Africa 's Kenny Rogers, Diana Ross and Michael Jackson. The New York Times ' Edward Rothstein commented on the event, stating, "The most enduring image may be of Mr. Clinton singing along in ' We Are the World ', the first President to aspire, however futilely, to hipness. '' As a prelude to his song "Heal the World '', "We Are the World '' was performed as an interlude during two of Michael Jackson 's tours, the Dangerous World Tour from 1992 to 1993 and the HIStory World Tour from 1996 to 1997. Jackson briefly perform the song with a chorus at the 2006 World Music Award in London, in his last live public performance. Jackson planned to use the song for his This Is It comeback concerts at The O2 Arena in London from 2009 to 2010, but the shows were cancelled due to his sudden death. Michael Jackson died in June 2009, after suffering a cardiac arrest. His memorial service was held several days later on July 7, and was reported to have been viewed by more than three billion people. The finale of the event featured group renditions of the Jackson anthems "We Are the World '' and "Heal the World ''. The singalong of "We Are the World '' was led by Darryl Phinnessee, who had worked with Jackson since the late 1980s. It also featured co-writer Lionel Richie and Jackson 's family, including his children. Following the performance, "We Are the World '' re-entered the US charts for the first time since its 1985 release. The song debuted at number 50 on Billboard 's Hot Digital Songs chart. On January 12, 2010, Haiti was struck by a magnitude - 7.0 earthquake, the country 's most severe earthquake in over 200 years. The epicenter of the quake was just outside the Haitian capital Port - au - Prince. Over 230,000 civilians have been confirmed dead by the Haitian government because of the disaster and around 300,000 have been injured. Approximately 1.2 million people are homeless and it has been reported that the lack of temporary shelter may lead to the outbreak of disease. To raise money for earthquake victims, a new celebrity version of "We Are the World '' was recorded on February 1, 2010, and released on February 12, 2010. Over 75 musicians were involved in the remake, which was recorded in the same studio as the 1985 original. The new version features revised lyrics as well as a rap segment pertaining to Haiti. 
Michael Jackson 's younger sister Janet duets with her late brother on the track, as per a request from their mother Katherine. In the video and on the track, archive material of Michael Jackson is used from the original 1985 recording. On February 20, 2010, a non-celebrity remake, "We Are the World 25 for Haiti (YouTube Edition) '', was posted to the video sharing website YouTube. Internet personality and singer - songwriter Lisa Lavie conceived and organized the Internet collaboration of 57 unsigned or independent YouTube musicians geographically distributed around the world. Lavie 's 2010 YouTube version, a cover of the 1985 original, excludes the rap segment and minimizes the Auto - tune that characterizes the 2010 celebrity remake. Another 2010 remake of the original is the Spanish - language "Somos El Mundo ''. It was written by Emilio Estefan and his wife Gloria Estefan, and produced by Emilio, Quincy Jones and Univision Communications, the company that funded the project. "We Are the World '' has been recognized as a politically important song, which "affected an international focus on Africa that was simply unprecedented ''. It has been credited with creating a climate in which musicians from around the world felt inclined to follow. According to The New York Times ' Stephen Holden, since the release of "We Are the World '', it has been noted that movement has been made within popular music to create songs that address humanitarian concerns. "We Are the World '' was also influential in subverting the way music and meaning were produced, showing that musically and racially diverse musicians could work together both productively and creatively. Ebony described the January 28 recording session, in which Quincy Jones brought together a multi-racial group, as being "a major moment in world music that showed we can change the world ''. "We Are the World '', along with Live Aid and Farm Aid, demonstrated that rock music had become more than entertainment, but a political and social movement. Journalist Robert Palmer noted that such songs and events had the ability to reach people around the world, send them a message, and then get results. Since the release of "We Are the World '', and the Band Aid single that influenced it, numerous songs have been recorded in a similar fashion, with the intent to aid disaster victims throughout the world. One such example involved a supergroup of Latin musicians billed as "Hermanos del Tercer Mundo '', or "Brothers of the Third World ''. Among the supergroup of 62 recording artists were Julio Iglesias, José Feliciano, and Sérgio Mendes. Their famine relief song was recorded in the same studio as "We Are the World ''. Half of the profits raised from the charity single was pledged to USA for Africa. The rest of the money was to be used for impoverished Latin American countries. Another notable example is the 1989 cover of the Deep Purple song "Smoke on the Water '' by a supergroup of hard rock, prog rock, and heavy metal musicians collaborating as Rock Aid Armenia to raise money for victims of the devastating 1988 Armenian earthquake. The 20th anniversary of "We Are the World '' was celebrated in 2005. Radio stations around the world paid homage to USA for Africa 's creation by simultaneously broadcasting the charity song. In addition to the simulcast, the milestone was marked by the release of a two - disc DVD called We Are the World: The Story Behind the Song. 
Ken Kragen asserted that the reason behind the simulcast and DVD release was not for USA for Africa to praise themselves for doing a good job, but to "use it to do some more good (for the original charity). That 's all we care about accomplishing. '' Harry Belafonte also commented on the 20th anniversary of the song. The entertainer acknowledged that "We Are the World '' had "stood the test of time ''; anyone old enough to remember it can still at least hum along.
meaning of the proverb fortune favours the brave
Fortune favours the bold - wikipedia "Fortune favours the bold '', "Fortune favours the brave '', "Fortune helps the brave '', and "Fortune favours the strong '' are common translations of a Latin proverb. The slogan has been used historically in the military in the Anglo - Saxon world, and it is used up to the present in the US Army and on the coats of arms of individual families and clans. "Fortune favours the bold '' is the translation of a Latin proverb, which exists in several forms with slightly different wording, where Fortuna is the goddess of luck. These proverbs in turn descend from Fortes fortuna adiuvat (literally: "fortune favours the strong ''), used in Terence 's comedy play Phormio, line 203. Pliny the Younger quotes his uncle, Pliny the Elder, as using the phrase when deciding to take his fleet and investigate the eruption of Mount Vesuvius in 79 AD, in the hope of helping his friend Pomponianus. "' Fortes ' inquit ' fortuna iuvat: Pomponianum pete. ' '' "' Fortune ', he said, ' favours the brave: head for Pomponianus. ' '' The expedition cost the elder Pliny his life. The quote "Fortes Fortuna Juvat '' is used by the Jydske Dragonregiment, or Jutish Dragoon Regiment, in the Royal Danish Army. The motto for the Portuguese Commandos is "Audaces Fortuna Juvat '' (A sorte protege os Audazes). It is used as the motto of the British Army 's Yorkshire Regiment, having previously been used by one of its antecedent regiments, the Duke of Wellington 's Regiment (West Riding) (33rd / 76th Foot). The Latin version Audentes Fortuna Juvat is the motto of Clan MacKinnon and features on the clan crest. It is the motto for Clan Turnbull. It is used as the motto for the O'Flaherty family in Ireland and is also used on their coat of arms. It is also used as the motto for the Dickson family and is presented on their family crest. The motto Fortuna Audaces Juvat was used by the Clevland family of Tapeley Park, Westleigh, Devon, in the 18th and 19th centuries, as seen with their armorials on several of the family 's mural monuments in Westleigh Church. The phrase was used as the motto of the Royal Air Force station based at East Fortune, in East Lothian. The base was operational in the First World War and between 1940 and 1947. It is the motto of the Ulster Loyalist terrorist group the Orange Volunteers. It is the official motto of the United States Coast Guard Academy Class of 1982, which has produced more Coast Guard flag officers than any other class that graduated from the Academy. "Fortuna Favet Fortibus '' (fortune favors the brave) is the official motto of the United States Naval Academy Classes of 1985, 2004, and 2012. The motto "Fortes Fortuna Juvat '' appears on the gates of Honor Hill at Ft. Benning, Georgia, where U.S. Army infantrymen ceremoniously receive the iconic cross rifle insignia. It has also been the motto of several United States Navy ships. The Latin equivalent "fortuna audentes juvat '' is used as the motto for the Turing family, dating back to 1316 AD. The motto is used by the 366th Fighter Wing of the United States Air Force and appears on the wing patch. The motto is also used by the Air Force Office of Special Investigations, 3rd Field Investigation Region, Detachment 327, Little Rock Air Force Base. It is the unit motto for 2nd Battalion, 3rd Marines, stationed out of Marine Corps Base Hawaii. It is the unit motto for 3rd Battalion, 8th Marines, stationed out of Marine Corps Camp Lejeune, NC.
The motto is also used on the Seattle Police Department 's SWAT unit patch. The Latin equivalent "fortuna favet audaci '' is the motto of Trumbull College of Yale University. The motto is used by the Cornielje family of The Netherlands alongside their coat of arms. The motto is also used by the 80th Fighter Squadron, stationed at Kunsan Air Base in South Korea.
where did most of the battles of the revolutionary war take place
List of American Revolutionary War battles - wikipedia This is a list of military actions in the American Revolutionary War. Actions marked with an asterisk involved no casualties.
top scorer in la liga in one season
List of La Liga top scorers - wikipedia Since the 1929 formation of La Liga, Spain 's top division of association football, a total of 56 players have finished as the competition 's top goalscorer. As no official recognition exists in Spain for the league top scorer, the following data is based on official match reports provided by the Liga de Fútbol Profesional and may differ from independent research and the unofficial Pichichi Trophy awarded by the newspaper Marca. La Liga 's all - time top goalscorer is Barcelona 's Lionel Messi, who also holds the record for most goals scored in a season with 50 goals in 2011 - 12. Athletic Bilbao 's Telmo Zarra, who was the competition 's all - time top scorer until 2014, was top scorer in six seasons between 1945 and 1953. Three other players -- Real Madrid 's Alfredo Di Stéfano, Quini of Sporting de Gijón and Barcelona, and Hugo Sánchez of Atlético Madrid and Real Madrid -- each finished as top scorer in five seasons.
what's the difference between general manager and store manager
General manager - wikipedia A general manager is an executive who has overall responsibility for managing both the revenue and cost elements of a company 's income statement, known as profit & loss (P&L) responsibility. A general manager usually oversees most or all of the firm 's marketing and sales functions as well as the day - to - day operations of the business. Frequently, the general manager is responsible for effective planning, delegating, coordinating, staffing, organizing, and decision making to attain desirable profit making results for an organization (Sayles 1979). In many cases, the general manager of a business is given a different formal title or titles. Most corporate managers holding the titles of chief executive officer (CEO) or president, for example, are the general managers of their respective businesses. More rarely, the chief financial officer (CFO), chief operating officer (COO), or chief marketing officer (CMO) will act as the general manager of the business, although these posts usually sit at different levels of the corporate hierarchy, so the roles of general manager and CEO are not identical. Depending on the company, individuals with the title managing director, regional vice president, country manager, product manager, branch manager, or segment manager may also have general management responsibilities. In large companies, many vice presidents will have the title of general manager when they have the full set of responsibility for the function in that particular area of the business, and are often titled vice president and general manager. In technology companies, general managers are often given the title of product manager. In consumer products companies, general managers are often given the title brand manager or category manager. In professional services firms, the general manager may hold titles such as managing partner, senior partner, or managing director. In the hotel industry, the general manager is the head executive responsible for the overall operation of an individual hotel establishment, including financial profitability. The general manager holds ultimate managerial authority over the hotel operation and usually reports directly to a regional vice president, corporate office, and / or hotel ownership / investors. Common duties of a general manager include but are not limited to hiring and management of an executive team consisting of individual department heads that oversee various hotel departments and functions, budgeting and financial management, creating and enforcing hotel business objectives and goals, sales management, marketing management, revenue management, project management, contract management, handling of emergencies and other major issues involving guests, employees, or the facility, public relations, labor relations, local government relations, maintaining business partnerships, and many additional duties. The extent of duties of an individual hotel general manager varies significantly depending on the size of the hotel and company organization; for example, general managers of smaller boutique - type hotels may be directly responsible for additional administrative duties such as accounting, human resources, payroll, purchasing, and other duties that would normally be handled by other subordinate managers or entire departments and divisions in a larger hotel operation. In most professional sports, the general manager is the team executive responsible for acquiring the rights to player personnel, negotiating their contracts, and reassigning or dismissing players no longer desired on the team.
The general manager may also have responsibility for hiring the head coach of the team. For many years in U.S. professional sports, coaches often served as general managers for their teams as well, deciding which players would be kept on the team and which ones dismissed, and even negotiating the terms of their contracts in cooperation with the ownership of the team. In fact, many sports teams in the early years of U.S. professional sports were coached by the owner of the team, so in some cases the same individual served as owner, general manager and head coach. As the amount of money involved in professional sports increased, many prominent players began to hire agents to negotiate contracts on their behalf. This intensified the contract negotiations that resulted; with the later introduction of salary caps, general managers also became responsible for ensuring that all player contracts are in accordance with those caps, as well as consistent with the desires of the ownership and its ability to pay. General managers are usually responsible for the selection of players in player drafts and work with the coaching staff and scouts to build a strong team. In sports with developmental or minor leagues, the general manager is usually the team executive with the overall responsibility for "sending down '' and "calling up '' players to and from these leagues, although the head coach may also have significant input into these decisions. Some of the most successful sports general managers have been former players and coaches, while others have backgrounds in ownership and business management. The term is not commonly used in Europe, especially in soccer, where the position of manager or coach is used instead to refer to the managing / coaching position. The position of director of football might be the most similar position at many European football clubs.
what is a proof of citizenship in india
Indian nationality law - Wikipedia The conferment of citizenship of India on a person is governed by Articles 5 to 11 (Part II) of the Constitution of India. The legislation related to this matter is the Citizenship Act 1955, which has been amended by the Citizenship (Amendment) Act 1986, the Citizenship (Amendment) Act 1992, the Citizenship (Amendment) Act 2003, the Citizenship (Amendment) Act, 2005 and the Citizenship (Amendment) Act, 2015. Article 9 of the Indian Constitution says that a person who voluntarily acquires citizenship of any other country is no longer an Indian citizen. Also, according to The Passports Act, a person must surrender his / her Indian passport and voter ID card, and must not use other Indian ID cards, after acquiring the citizenship of another country. It is a punishable offence if the person fails to surrender the passport. Indian nationality law largely follows the jus sanguinis (citizenship by right of blood) as opposed to the jus soli (citizenship by right of birth within the territory). The President of India is termed the First Citizen of India. The Government of India Act 1858 established the British Raj and formally brought the majority of Indians under British imperial rule. Until the Indian Independence Act 1947 took effect on 15 August 1947, Indians under the British Raj generally fell into one of two categories: British subjects in the provinces of British India, and British protected persons in the princely states. Effective from 15 August 1947, India was established as the independent Dominion of India. Along with subjects of the other British Dominions, Indians resident, born and naturalised in Indian provinces legally remained British subjects by virtue of Section 18 (3) of the Indian Independence Act, unless they had already acquired citizenship of the United Kingdom or any other country. From 15 August, British protection over the princely states lapsed, and Indians who were subjects of a principality automatically lost their status as British protected persons. The rulers and Indian subjects of princely states which had acceded to the Dominion of India or to the Dominion of Pakistan on or prior to 15 August (termed "Acceding States '') became British subjects. Indians resident in a princely state which had not acceded to either Dominion by 15 August became temporarily stateless, lacking any recognized nationality or British subject status, though remaining subjects of their state. From 1 January 1949, when the British Nationality Act 1948 came into force, to 25 January 1950, Indians in the Indian provinces became British subjects with Indian citizenship. From 26 November 1949, Indians domiciled in the territories of India became Indian citizens. With the promulgation of the Indian Constitution on 26 January 1950, which established the Republic of India, the majority of Indian citizens were no longer British subjects, but continued to enjoy the status of Commonwealth citizen (also known as a British subject with Commonwealth citizenship, a status which does not entitle the holder to use a British passport), by virtue of their Indian citizenship and India 's membership of the Commonwealth. However, a number of Indians, notably those who had been born in a former princely state, did not acquire Indian citizenship on commencement of the Indian Constitution and retained British subject without citizenship status (which entitles a person to a British passport) unless they had acquired citizenship of another Commonwealth country.
The Citizenship Act of India (1955) finally extended Indian citizenship to all Indians, regardless of whether they had been born in a former princely state or not. On 20 December 1961, India acquired the territories of Goa, Daman and Diu and Dadra and Nagar Haveli after the military action which were under the territories of Portugal. The French territory of Puducherry, Karaikal, Mahé, Yanam and the Free town of Chandranagore, were acquired under treaty of cession with France. Sikkim was also merged with India and became a constituent state with effect from 16 May 1975. Some of the enclaves in the eastern part of India, were also acquired under border agreements with Pakistan and Bangladesh respectively In order to expressly provide the citizenship for people in territories as mentioned above, the central government issued the Goa, Daman and Diu (Citizenship) Order, 1962, Dadra and Nagar Haveli (Citizenship) Order, 1962 and Citizenship (Pondicherry) Order 1962, in exercise of its powers under section 7 of the Citizenship act and for Sikkim, the President extended the Citizenship act, and the relevant rules under Article 371 - F (n) of Indian Constitution. In case of acquired enclaves, that did not necessitate legislative action, as that was only a border demarcation agreement. Persons domiciled in the territory of India as on 26 November 1949 automatically became Indian citizens by virtue of operation of the relevant provisions of the Indian Constitution coming into force, and most of these constitutional provisions came into force on 26 January 1950. The Constitution of India also made provision regarding citizenship for migrants from the territories of Pakistan which had been part of India before partition. Any person born in India on or after 26 January 1950, but prior to the commencement of the 1986 Act on 1 July 1987, is a citizen of India by birth. A person born in India on or after 1 July 1987 is a citizen of India if either parent was a citizen of India at the time of the birth. Those born in India on or after 3 December 2004 are considered citizens of India only if both of their parents are citizens of India or if one parent is a citizen of India and the other is not an illegal migrant at the time of their birth. In September 2013, Bombay High Court gave a judgement that a birth certificate, passport or even an Aadhaar card alone may not be enough to prove Indian citizenship, unless the parents are Indian citizens. Persons born outside India on or after 26 January 1950 but before 10 December 1992 are citizens of India by descent if their father was a citizen of India at the time of their birth. Persons born outside India on or after 10 December 1992 are considered citizens of India if either of their parents is a citizen of India at the time of their birth. From 3 December 2004 onwards, persons born outside of India shall not be considered citizens of India unless their birth is registered at an Indian diplomatic mission within one year of the date of birth. In certain circumstances it is possible to register after one year with the permission of the Central Government. The application for registration of the birth of a child must be made to an Indian diplomatic mission and must be accompanied by an undertaking in writing from the parents of the child that he or she does not hold the passport of another country. 
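The birth - based rules described above amount to three date regimes (26 January 1950 to 1 July 1987, 1 July 1987 to 3 December 2004, and 3 December 2004 onwards). As a rough illustration only, and not legal advice, the following minimal sketch encodes those regimes as a simple decision function; the cut - off dates come from the text, while the function name, parameters and simplifications are assumptions introduced here and are not part of the Act. The descent - based and registration routes discussed in the surrounding text are deliberately left out of the sketch.

```python
from datetime import date

# Illustrative sketch of the birth-based rules described above.
# The cut-off dates come from the text; the function name, parameters,
# and simplifications are hypothetical, and this is not legal advice.

COMMENCEMENT_1950 = date(1950, 1, 26)   # Constitution of India in force
CUTOFF_1987 = date(1987, 7, 1)          # commencement of the 1986 amendment
CUTOFF_2004 = date(2004, 12, 3)         # commencement of the 2003 amendment

def citizen_by_birth(dob: date,
                     father_indian: bool,
                     mother_indian: bool,
                     other_parent_illegal_migrant: bool = False) -> bool:
    """Rough check of Indian citizenship by birth for a person born in India."""
    if dob < COMMENCEMENT_1950:
        return False  # governed by earlier, pre-constitutional provisions
    if dob < CUTOFF_1987:
        return True   # birth in India alone sufficed
    if dob < CUTOFF_2004:
        return father_indian or mother_indian   # either parent Indian
    # From 3 December 2004: both parents Indian, or one parent Indian
    # and the other not an illegal migrant at the time of birth.
    if father_indian and mother_indian:
        return True
    return (father_indian or mother_indian) and not other_parent_illegal_migrant

# Example: a child born in India in 1995 to one Indian parent.
print(citizen_by_birth(date(1995, 5, 1), father_indian=False, mother_indian=True))  # True
```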
The Central Government may, on an application, register as a citizen of India under section 5 of the Citizenship Act 1955 any person (not being an illegal migrant) if s / he belongs to any of the categories specified in that section. Citizenship of India by naturalisation can be acquired by a foreigner (not an illegal migrant) who is ordinarily resident in India for 12 years (throughout the period of 12 months immediately preceding the date of application and for 11 years in the aggregate in the 14 years preceding the 12 months) and who satisfies the other qualifications specified in the Third Schedule to the Citizenship Act. Renunciation is covered in Section 8 of the Citizenship Act 1955. If an adult makes a declaration of renunciation of Indian citizenship, s / he loses Indian citizenship. In addition, any minor child of that person also loses Indian citizenship from the date of renunciation. When the child reaches the age of 18, he or she has the right to have his or her Indian citizenship restored. The provisions for making a declaration of renunciation under Indian citizenship law require that the person making the declaration be "of full age and capacity ''. Termination is covered in Section 9 of the Citizenship Act, 1955. The provisions for termination are separate and distinct from the provisions for making a declaration of renunciation. Section 9 (1) of the act provides that any citizen of India who by naturalisation or registration acquires the citizenship of another country shall cease to be a citizen of India. Notably, the termination provision differs from the renunciation provision because it applies to "any citizen of India '' and is not restricted to adults. Indian children therefore also automatically lose their claim to Indian citizenship if at any time after birth they acquire a citizenship of another country by, for example, naturalisation or registration -- even if the acquisition of another citizenship was done as a result of actions by the child 's parents. The acquisition of another country 's passport is also deemed under the Citizenship Rules, 1956 to be voluntary acquisition of another country 's nationality. Rule 3 of Schedule III of the Citizenship Rules, 1956 states that "the fact that a citizen of India has obtained on any date a passport from the Government of any other country shall be conclusive proof of his / her having voluntarily acquired the citizenship of that country before that date ''. Again, this rule applies even if the foreign passport was obtained for the child by his or her parents, and even if possession of such a passport is required by the laws of a foreign country which considers the child to be one of its citizens (e.g., a US - born child of Indian parents who is automatically deemed to be a US citizen according to US law, and who is therefore required by US law to have a US passport in order to enter and leave the US). It does not matter that a person continues to hold an Indian passport. This rule seemingly even applies if the foreign nationality was acquired automatically at birth, and thus not voluntarily acquired after birth. Persons who acquire another citizenship lose Indian citizenship from the date on which they acquire that citizenship or another country 's passport. The prevailing practice at a number of British diplomatic posts, for example, is to impound and return to the Indian authorities the Indian passports of those applicants who apply for and are granted British passports. Special rules exist for Indian citizens with a connection to Goa, Daman and Diu.
Rule 3A of Schedule III of the Citizenship Rules, 1956 states that "Where a person, who has become an Indian Citizen by virtue of the Goa, Daman and Diu (Citizenship) Order, 1962, or the Dadra and Nagar Haveli (Citizenship) Order 1962, issued under section 7 of the Citizenship Act, 1955 (57 of 1955) holds a passport issued by the Government of any other country, the fact that he has not surrendered the said passport on or before the 19 January 1963 shall be conclusive proof of his / her having voluntarily acquired the citizenship of that country before that date ''. On 16 February 1962, a Constitution Bench of the Supreme Court of India held in the case of Izhar Ahmad Khan vs Union of India that "If it is shown that the person has acquired foreign citizenship either by naturalisation or registration, there can be no doubt that s / he ceases to be a citizen of India in consequence of such naturalisation or registration. '' In response to persistent demands for dual citizenship, particularly from the diaspora in North America and other developed countries, the Overseas Citizenship of India (OCI) scheme was introduced by amending The Citizenship Act, 1955 in August 2005. The scheme was launched during the Pravasi Bharatiya Divas convention at Hyderabad in 2006. Indian authorities have interpreted the law to mean a person can not have a second country 's passport simultaneously with an Indian one -- even in the case of a child who is claimed by another country as a citizen of that country, and who may be required by the laws of the other country to use one of its passports for foreign travel (such as a child born in the United States or in Australia to Indian parents), and the Indian courts have given the executive branch wide discretion over this matter. Therefore, Overseas Citizenship of India is not an actual citizenship of India and thus does not amount to dual citizenship or dual nationality, and OCI holders may no longer use Indian ID cards. Moreover, the OCI card is not a substitute for an Indian visa and therefore the passport which displays the lifetime visa must be carried by OCI holders while travelling to India. OCI cards are now being printed without the lifelong "U '' visa sticker (which is normally pasted on the applicant 's passport). The proof of lifelong visa will be just the OCI card, which will have "Life Time Visa '' printed on it. The OCI card will be valid with any valid passport. However, some countries may consider the OCI as dual citizenship: for example, the UK government considers that, for purposes of the British Nationality Act 1981, "OCI is considered to be citizenship of another State ''. The Person of Indian Origin (PIO) card was a form of identification issued to an individual who held a passport of a country other than Afghanistan, Bangladesh, Bhutan, China, Nepal, Pakistan and Sri Lanka and could prove their Indian origin up to three generations back. In early 2011, the then Prime Minister of India, Manmohan Singh, announced that the Person of Indian Origin card would be merged with the Overseas Citizen of India card. This new card was proposed to be called the Overseas Indian Card. As of 9 January 2015, the PIO card scheme has been discontinued and applicants are to apply for OCI only. All currently held PIO cards are treated as OCI cards. PIO card holders will get a special stamp in their existing PIO card, saying "lifelong validity '', thus making them equivalent to existing OCI cards.
It is generally difficult to have dual citizenship of India and another country, due to the provisions for loss of Indian nationality when an Indian national naturalizes in another country (see "Loss of citizenship '' above), and the requirement to renounce one 's existing citizenships when naturalizing in India (see "Naturalization '' above). There are still some limited circumstances in which a person may hold citizenship of India and another country. A public interest litigation (PIL) seeking dual citizenship for overseas Indians was filed in the Supreme Court on 6 January 2015 on the eve of the inauguration of the Pravasi Bharatiya Divas (PBD) in Gujarat 's capital Gandhinagar by Prime Minister Narendra Modi. On 20 April 2015, the Supreme Court of India dismissed the Public Interest Litigation (PIL). In dismissing the said PIL, the Supreme Court reasoned that the petitioner, Mr. Venkat Narayan, could not plead on somebody else 's behalf as he was not the aggrieved party, and that those who need to assert their right should come forward. Visa requirements for Indian citizens are administrative entry restrictions by the authorities of other states placed on citizens of India. As of 2017, Indian citizens had visa - free or visa - on - arrival access to 49 countries and territories, ranking the Indian passport 87th in terms of travel freedom according to the Henley visa restrictions index.
who is buried in santa croce in florence
Santa Croce, Florence - wikipedia The Basilica di Santa Croce (Basilica of the Holy Cross) is the principal Franciscan church in Florence, Italy, and a minor basilica of the Roman Catholic Church. It is situated on the Piazza di Santa Croce, about 800 meters south - east of the Duomo. The site, when first chosen, was in marshland outside the city walls. It is the burial place of some of the most illustrious Italians, such as Michelangelo, Galileo, Machiavelli, the poet Foscolo, the philosopher Gentile and the composer Rossini; thus it is also known as the Temple of the Italian Glories (Tempio dell'Itale Glorie). The Basilica is the largest Franciscan church in the world. Its most notable features are its sixteen chapels, many of them decorated with frescoes by Giotto and his pupils, and its tombs and cenotaphs. Legend says that Santa Croce was founded by St Francis himself. The construction of the current church, to replace an older building, was begun on 12 May 1294, possibly by Arnolfo di Cambio, and paid for by some of the city 's wealthiest families. It was consecrated in 1442 by Pope Eugene IV. The building 's design reflects the austere approach of the Franciscans. The floorplan is an Egyptian or Tau cross (a symbol of St Francis), 115 metres in length with a nave and two aisles separated by lines of octagonal columns. To the south of the church was a convent, some of whose buildings remain. The Primo Chiostro, the main cloister, houses the Cappella dei Pazzi, built as the chapter house, completed in the 1470s. Filippo Brunelleschi (who had designed and executed the dome of the Duomo) was involved in its design, which has remained rigorously simple and unadorned. In 1560, the choir screen was removed as part of changes arising from the Counter-Reformation and the interior rebuilt by Giorgio Vasari. As a result, there was damage to the church 's decoration and most of the altars previously located on the screen were lost. The bell tower was built in 1842, replacing an earlier one damaged by lightning. The neo-Gothic marble façade dates from 1857 - 1863. The Jewish architect Niccolo Matas from Ancona designed the church 's façade, working a prominent Star of David into the composition. Matas had wanted to be buried with his peers, but because he was Jewish, he was buried under the threshold and honored with an inscription. In 1866, the complex became public property, as a part of government suppression of most religious houses, following the wars that gained Italian independence and unity. The Museo dell'Opera di Santa Croce is housed mainly in the refectory, also off the cloister. A monument to Florence Nightingale stands in the cloister, in the city in which she was born and after which she was named. Brunelleschi also built the inner cloister, completed in 1453. In 1966, the Arno River flooded much of Florence, including Santa Croce. The water entered the church bringing mud, pollution and heating oil. The damage to buildings and art treasures was severe, taking several decades to repair. Today the former dormitory of the Franciscan friars houses the Scuola del Cuoio (Leather School). Visitors can watch as artisans craft purses, wallets, and other leather goods which are sold in the adjacent shop. Works by many notable artists are present in the church. Once present in the church 's Medici Chapel, but now split between the Florentine Galleries and the Bagatti Valsecchi Museum in Milan, is a polyptych by Lorenzo di Niccolò.
The Basilica became popular with Florentines as a place of worship and patronage, and it became customary for greatly honoured Florentines to be buried or commemorated there. Some were in chapels "owned '' by wealthy families such as the Bardi and Peruzzi. As time progressed, space was also granted to notable Italians from elsewhere. For 500 years, monuments to such figures were erected in the church. The basilica also appears in literature, including E.M. Forster 's A Room with a View (1908), chapter 2, and George Eliot 's Romola (1863).
if i could read my mind gordon lightfoot
If You Could Read My Mind - wikipedia "If You Could Read My Mind '' is a song by Canadian singer - songwriter Gordon Lightfoot. It reached number one on Canadian music charts and was his first recording to appear on the American music charts, reaching number 5 on the Billboard Hot 100 singles chart in February 1971. Later in the year it reached number 30 in the UK. The song also reached number one for one week on the Billboard Easy Listening chart, and was the first of four Lightfoot releases to reach number one. This song first appeared on Lightfoot 's 1970 album Sit Down Young Stranger, which was later renamed If You Could Read My Mind following the song 's success. Lightfoot has cited his divorce as the inspiration for the lyrics, saying they came to him as he was sitting in a vacant Toronto house one summer. At the request of his daughter, Ingrid, he performs the lyrics with a slight change now: the line "I 'm just trying to understand the feelings that you lack '' is altered to "I 'm just trying to understand the feelings that we lack. '' He has said in an interview that the difficulty with writing songs inspired by personal stories is that there is not always the emotional distance and clarity to make lyrical improvements such as the one his daughter suggested. In 1987, Lightfoot filed a lawsuit against the writer of "The Greatest Love of All '', alleging plagiarism of 24 bars of "If You Could Read My Mind ''. Lightfoot has stated that he dropped the lawsuit when he felt it was having a negative effect on the singer Whitney Houston, as the lawsuit was about the writer and not her. The song is in A major and uses the subtonic chord. The song has been covered by many other artists, including Andy Williams, Johnny Mathis, Petula Clark, Jack Jones, Ultra Nate, Don Williams, Johnny Cash, Duane Steele, Don McLean, Kalan Porter, Herb Alpert & the Tijuana Brass, Olivia Newton - John, Liza Minnelli, Diana Krall and Sarah McLachlan, Glen Campbell, Gene Clark, Aurora featuring Marcella Detroit, Amber, Gordon Haskell, Vikki Carr, Daliah Lavi, the Mexican actress / model / singer Isabel Madow (in Spanish, as "Sí Pudieras Leer Mi Mente ''), Beckie Menzie and Joe Dassin (with French lyrics as "Si tu peux lire en moi ''), Hector (in Finnish, as "Jos Lukisit Kuin Kirjaa ''), etc. The cover version by The Spotnicks was adopted as an unofficial theme for the 1972 Summer Olympics. Country music artist Dwight Yoakam performs the song in his live sets on a regular basis. Holly Cole covers this song on the 2012 album Night. Canadian singer Connie Kaldor recorded the song on the 2003 Gordon Lightfoot tribute album Beautiful: A Tribute to Gordon Lightfoot. Canadian musician Neil Young released a cover of this song on his 2014 album A Letter Home. Canadian jazz singer Diana Krall released a cover as a duet with Sarah McLachlan on "The Complete Sessions '' release of Krall 's 2015 album Wallflower. It has also been used in a number of motion pictures and television shows.
where is bates motel supposed to take place
Bates Motel (TV series) - wikipedia Bates Motel is an American psychological horror drama television series that aired from March 18, 2013 to April 24, 2017. It was developed by Carlton Cuse, Kerry Ehrin, and Anthony Cipriano, and is produced by Universal Television and American Genre for the cable network A&E. The series, a contemporary prequel to Alfred Hitchcock 's 1960 film Psycho (itself based on Robert Bloch 's 1959 novel of the same name), depicts the lives of Norman Bates (Freddie Highmore) and his mother Norma (Vera Farmiga) prior to the events portrayed in the novel and film, albeit in a different fictional town (White Pine Bay, Oregon, as opposed to Fairvale, California) and in a modern - day setting. However, the final season loosely adapts the plot of Psycho. Max Thieriot and Olivia Cooke both starred as part of the main cast throughout the series ' run. After recurring in the first season, Nestor Carbonell was added to the main cast from season two onward. The series begins in Arizona with the death of Norma 's husband, after which Norma purchases the Seafairer motel located in a coastal Oregon town so that she and Norman can start a new life. Subsequent seasons follow Norman as his mental illness becomes dangerous, and Norma as she struggles to protect her son, and those around him, from himself. The series was filmed outside Vancouver in Aldergrove, British Columbia, along with other locations within the Fraser Valley of British Columbia. A&E chose to skip a pilot of the series, opting to go straight - to - series by ordering a 10 - episode first season. On June 15, 2015, the series was renewed for a fourth and fifth season, making Bates Motel A&E 's longest - running original scripted drama series in the channel 's history. The series ' lead actors, Vera Farmiga and Freddie Highmore, received particular praise for their performances in the series, with the former receiving a Primetime Emmy Award nomination and winning a Saturn Award for Best Actress on Television. Bates Motel also won three People 's Choice Awards for Favorite Cable TV Drama, and for Favorite Cable TV Actress (Farmiga) and Actor (Highmore). The first season follows Norma and Norman Bates as they buy a motel after Norman 's father dies. On one of the first nights of the two owning the motel, the former owner breaks in and sexually assaults Norma. Norman knocks the attacker out, and Norma stabs him to death. She decides it 's best not to call the police and to cover up the murder. She and Norman dispose of the body. He complicates the cover - up by keeping a belt that belonged to the victim. When the town sheriff and his deputy notice that a man has gone missing, Norma and Norman must keep them from digging too far. The second season follows the aftermath of the murder of Norman 's teacher, as her mysterious past comes to light. Meanwhile, Norma finds herself making dangerous decisions in order to keep the motel running and prevent the impending bypass. Bradley 's search for her father 's killer leads to the extremes, and Dylan learns the disturbing truth about his parentage. The third season focuses on Norman 's waning deniability about what 's happening to him, and the lengths he will go to in order to gain control of his fragile psyche. The dramatic events of the previous season leave Norma more aware of her son 's mental fragility and fearful of what he is capable of. Meanwhile, Sheriff Romero begins to distance himself from the Bates family after he suspects Norma is lying to him about her husband 's death.
The fourth season follows Norma as she becomes increasingly fearful of Norman, going to great lengths to find him the professional help he needs. This complicates their once unbreakable trust as Norman struggles to maintain his grip on reality. Meanwhile, Sheriff Romero once again finds himself drawn into Norma and Norman 's lives. He agrees to marry Norma because his insurance will enable her to place Norman in an expensive psychiatric hospital. His generosity backfires, however, when Norman learns of the marriage. Norman bitterly resents Romero for coming between him and his mother and at one point threatens the sheriff with an axe. The fifth season begins two years after the death of Norma. Publicly happy and well - adjusted, Norman struggles at home, where his blackouts are increasing and "Mother '' threatens to take him over completely. Meanwhile, Dylan and Emma find themselves drawn back into Norman 's world, and Romero hungers for revenge against his stepson, Norman. On January 12, 2012, it was reported that A&E was developing a television series titled Bates Motel that would serve as a prequel to the Alfred Hitchcock film Psycho. The first script was written by Anthony Cipriano. In March 2012, Carlton Cuse and Kerry Ehrin joined the project as executive producers and head writers. Cuse has cited the drama series Twin Peaks as a key inspiration for Bates Motel, stating, "We pretty much ripped off Twin Peaks... If you wanted to get that confession, the answer is yes. I loved that show. They only did 30 episodes. Kerry (Ehrin) and I thought we 'd do the 70 that are missing. '' On July 2, 2012, A&E gave Bates Motel a straight - to - series order. Chris Bacon was hired to score the music for the series in January 2013. On August 27, 2012, Vera Farmiga was the first to be cast in the leading role of Norma Louise Bates. On September 14, 2012, Freddie Highmore was cast as Norman Bates. That same day, Max Thieriot was cast as Norman 's half - brother, Dylan Massett. Shortly after, on September 19, 2012, Nicola Peltz was cast as Bradley Martin, a possible love interest for Norman. On September 20, 2012, Olivia Cooke became the last main cast member to join the series, in the role of Emma Decody, Norman 's best friend. Nestor Carbonell was cast in a recurring role as Sheriff Alex Romero in the first season, but was upgraded to the main cast at the beginning of the second season. In July 2014, Kenny Johnson, who recurred as Norma 's brother Caleb Calhoun in the second season, was promoted to a series regular for the third season. It was announced on July 22, 2016 at San Diego Comic - Con International that Rihanna would appear in the iconic role of Marion Crane for the fifth and final season. A replica of the original Bates Motel set from the film Psycho was built on location at approximately 1054 272nd Street in Aldergrove, British Columbia, where portions of the series were filmed. The original house and motel are located at Universal Studios Hollywood in Los Angeles. Additional filming for the series took place in multiple areas in British Columbia, including Steveston, Coquitlam, Horseshoe Bay, West Vancouver and Fort Langley. In February 2017, after filming was completed for the series, the Bates Motel exterior set in Aldergrove was demolished. The first season of Bates Motel received a score of 66 on Metacritic, indicating "generally favorable reviews ''. Review aggregator Rotten Tomatoes reported that 81 % of 37 critics gave the first season a positive review.
The site 's consensus reads, "Bates Motel utilizes mind manipulation and suspenseful fear tactics, on top of consistently sharp character work and wonderfully uncomfortable familial relationships. '' The second season of Bates Motel received a score of 67 out of 100 on Metacritic, from 11 reviews, indicating "generally favorable reviews ''. Rotten Tomatoes reported an 86 % rating from 12 reviews for the second season. The site 's consensus reads, "Bates Motel reinvents a classic thriller with believable performances and distinguished writing. '' The third season of Bates Motel received a score of 72 out of 100 on Metacritic, from 5 reviews, indicating "generally favorable reviews ''. Rotten Tomatoes reported a 92 % rating from 12 reviews. The site 's consensus reads, "Bates Motel further blurs lines around TV 's creepiest taboo mother / son relationship, uncomfortably darkening its already fascinating tone. '' The fourth season of Bates Motel was met with positive reviews from critics. Rotten Tomatoes reported a 100 % positive rating from 8 reviews. The fifth and final season of Bates Motel received a score of 81 out of 100 on Metacritic, from 8 reviews, indicating "universal acclaim ''. Rotten Tomatoes reported a 100 % rating from 8 reviews. In Canada, the series airs only on the U.S. network A&E, which is available through most Canadian cable and satellite companies. In Australia, the series premiered on Fox8 on May 26, 2013. In the UK and Ireland, it premiered on Universal Channel on April 2, 2014. In Jamaica, it premiered on CVM TV on August 11, 2014. In the Middle East, it premiered on OSN First HD in mid-2014. The second season premiered on January 5, 2015. In the Philippines, Bates Motel began airing on Jack TV on August 12, 2013. In South Africa, the series premiered on MNet on June 21, 2013. The series premiered in India on Colors Infinity on November 6, 2015. NBCUniversal partnered with Hot Topic, the American retailer of pop culture merchandise, to introduce a collection of clothing and accessories inspired by Bates Motel. The merchandise, including items such as bathrobes and bloody shower curtains, became available at Hot Topic 's website and select stores on March 18, 2014. As of 2018, the merchandise is no longer available through Hot Topic.
who plays henrietta lange on ncis los angeles
Linda Hunt - wikipedia Lydia Susanna Hunter (born April 2, 1945), better known by her stage name Linda Hunt, is an American film, stage, and television actress. After making her film debut playing Mrs. Oxheart in Popeye (1980), Hunt 's breakthrough came playing the male character Billy Kwan in The Year of Living Dangerously (1982), for which she won the Academy Award for Best Supporting Actress, becoming the first person to win an Oscar portraying a character of the opposite sex. She has had great success in films such as The Bostonians (1984), Dune (1984), Silverado (1985), Eleni (1985), Waiting for the Moon (1987), She - Devil (1989), Kindergarten Cop (1990), If Looks Could Kill (1991), Rain Without Thunder (1992), Twenty Bucks (1993), Younger and Younger (1993), Prêt - à - Porter (1994), Pocahontas (1995), The Relic (1997), Pocahontas II: Journey to a New World (1998), Dragonfly (2002), Yours Mine and Ours (2005), and Stranger than Fiction (2006). Hunt has also had a successful television career. She played Rose in the television movie Basements (1987) and narrated in the television movie The New Chimpanzees. She guest - starred on Hallmark Hall of Fame in both 1978 and 1987, Space Rangers in 1993, Carnivale in both 2003 and 2005, Without a Trace in 2008, The Unit in 2008, and Nip Tuck in 2009. From 1997 to 2002, Hunt played the recurring role of Judge Zoey Hiller on The Practice. She currently portrays Henrietta "Hetty '' Lange on the CBS television series NCIS: Los Angeles, a role she has held since the 2009 debut, for which she has received two Teen Choice Awards. She is also the narrator in the God of War video game franchise. Hunt was born on April 2, 1945 in Morristown, New Jersey. Her father, Raymond Davy Hunter (d. 1985), was vice president of Harper Fuel Oil. Her mother, Elsie Doying Hunter (d. ~ 1994), was a piano teacher who taught at the Westport School of Music, and performed with the Saugatuck Congregational Church Choir in Westport, Connecticut, where Hunt was raised. She has one sibling, an older sister named Marcia (b. 1940). Hunt attended the Interlochen Arts Academy and the Goodman School of Drama in Chicago, Illinois, which is now part of DePaul University. Hunt 's film debut in 1980 was in Robert Altman 's musical comedy Popeye. Two years later, she co-starred as Billy Kwan in The Year of Living Dangerously, Peter Weir 's film adaptation of the novel of the same name. For her role as the male Chinese - Australian photographer Billy Kwan, Hunt won the Academy Award for Best Supporting Actress in 1983, becoming the first person to win an Oscar for playing a character of the opposite sex. In addition, the character was Asian and had the condition of dwarfism. In her screen test, Hunt wore a hairpiece, a fake moustache, and "paste - on pieces above her eyes to (appear) Asian ''. To accomplish the role during production, Hunt shortened "her hair and dye (d) it black (,) wore padding around her waist, shaved her eyebrows, and carried something in her shirt pocket. '' In her 1986 interview with the Bomb magazine, Hunt remarked that Billy Kwan "is supra - personal (with) layers of sexual ambiguity (.) '' Hunt also played a nurse in She - Devil (1989) and the austere school principal opposite Arnold Schwarzenegger in Kindergarten Cop in 1990. Hunt played the assassin Ilsa Grunt in If Looks Could Kill (1991) opposite Richard Grieco and Roger Rees. Hunt was a well - known stage actress before she entered film and television. 
She made her Broadway debut in a 1975 production of Ah, Wilderness! She was nominated for the Tony Award for Best Actress in a Play for her work in the 1984 play End of the World. She also received two ensemble Obie Awards for her work Off - Broadway in Top Girls and A Metamorphosis in Miniature. She created the role of Aunt Dan in Wallace Shawn 's play Aunt Dan and Lemon. She was a member of the Long Wharf Theatre Company in Connecticut. There she played the Player Queen in a production of Hamlet, among other roles. She portrayed Sister Aloysius in the Pasadena Playhouse production of John Patrick Shanley 's play Doubt. She was praised for her performance as the title character in Bertolt Brecht 's Mother Courage and Her Children. Hunt also appeared as Pope Joan in Caryl Churchill 's Top Girls when London 's Royal Court Theatre 's production was staged at the Public Theater in New York. In an interview with writer Craig Gholson and actor Vincent Caristi, Hunt discussed her experience acting in theatre: "Acting onstage is like an explosion each night. And what comes in at you all the time as you are trying to... create something which is a tremendous act of organization and concentration. '' Her television appearances include recurring roles as Judge Zoey Hiller on David E. Kelley 's series The Practice and as Dr. Claire Bryson on Without a Trace. She has narrated several installments of The American Experience on PBS. Since 2009, she has co-starred as Operations Manager Henrietta "Hetty '' Lange on the CBS show NCIS: Los Angeles with Chris O'Donnell, LL Cool J, Daniela Ruah, Eric Christian Olsen, and Barrett Foa. Hunt has a rich, resonant voice, which she has used in numerous documentaries, cartoons, and commercials. She is the on - air host for City Arts & Lectures, a radio program recorded by KQED public radio at the Nourse Theater in San Francisco that presents interviews with celebrated writers, artists, and thinkers addressing contemporary ideas and values, often discussing the creative process. Hunt was chosen by Walt Disney Feature Animation to lend her enigmatic speaking and singing voice to Grandmother Willow in the animated musical film Pocahontas and its direct - to - video sequel Pocahontas II: Journey to a New World. Her voice work also includes the character of Management in Carnivàle, and the protogenoi Gaia, who serves as the narrator in the God of War series of video games. She narrated the introductory film at the International Spy Museum in Washington, D.C., and has also been heard in various commercials of the late 1990s for Tylenol. Hunt narrated the PBS Nature special entitled Christmas in Yellowstone. She also narrated the National Geographic documentary The Great Indian Railway. Hunt has been with psychotherapist Karen Klein since 1978. The two were married in 2008. They currently reside with their pet dogs in an early 20th century American Craftsman style home in a historic neighborhood of Hollywood, California. As a teenager, Hunt was diagnosed as having hypopituitary dwarfism. A person of short stature, Hunt stands at just 1.45 meters, or 4 feet 9 inches tall. Hunt is an ambassador for the Best Friends Animal Society.
under 19 world cup winner captain of india
Under - 19 Cricket World Cup - wikipedia The ICC Under - 19 Cricket World Cup is an international cricket tournament organised by the International Cricket Council (ICC) contested by national under - 19 teams. First contested in 1988, as the Youth World Cup, it was not staged again until 1998. Since then, the World Cup has been held as a biennial event, organised by the ICC. The first edition of the tournament had only eight participants, but every subsequent edition has included sixteen teams. India, the current champions, has won the World Cup four times which is the highest amongst all teams, while Australia has won thrice, Pakistan twice and England, South Africa, and the West Indies once each. Two other teams -- New Zealand and Sri Lanka -- have made a tournament final without going on to win. The inaugural event was titled the McDonald 's Bicentennial Youth World Cup, and was held in 1988 as part of the Australian Bicentenary celebrations. It took place in South Australia and Victoria. Teams from the seven Test - playing nations, as well as an ICC Associates XI, competed in a round - robin format. Australia lost only one match, their final round - robin game against Pakistan by which time they had qualified for the semis. They went on to beat Pakistan by five wickets in the final, thanks to an unbeaten hundred from Brett Williams. England and West Indies made up the last four, but India were the real disappointments. After opening with a good win against England, they suffered hefty defeats in four matches to be knocked out early. The tournament was notable for the number of future international players who competed. Future England captains Nasser Hussain and Mike Atherton played, as did Indian spinner Venkatapathy Raju, New Zealand all - rounder Chris Cairns, Pakistanis Mushtaq Ahmed and Inzamam - ul - Haq, Sri Lankan Sanath Jayasuriya, and West Indians Brian Lara, Ridley Jacobs, and Jimmy Adams. Australia 's Brett Williams was the leading run - scorer, with 471 runs at an average of 52.33. Wayne Holdsworth from Australia and Mushtaq Ahmed were the leading wicket - takers, with 19 wickets at averages of 12.52 and 16.21 respectively. England were the unexpected winners of the second Under - 19 World Cup in South Africa. In 1998, the event was relaunched in South Africa as a biennial tournament. The only previous tournament of its kind was held ten years earlier. In addition to the nine Test - playing nations, there were teams from Bangladesh, Kenya, Scotland, Ireland, Denmark, Namibia and Papua New Guinea. The teams were divided into four pools, named after Gavaskar, Sobers, Cowdrey and Bradman, and the top two sides from each progressed to two Super League pools, whose winners advanced to the final. In order to give everyone a decent amount of cricket, the non-qualifiers competed in a Plate League, won by Bangladesh, who beat West Indies in the final. West Indies failed to qualify for the Super League after a fiasco concerning the composition of their squad - they arrived with seven players who contravened the age restrictions for the tournament. The Super League, in which every game was covered live on South African satellite television, also threw up a number of shocks and tense finishes; both pools came down to net run - rate at the finish. England, from being down and almost out, beat Pakistan - who surprisingly lost all three of their games - but lost a rain - affected match to India. Australia had beaten India and Pakistan and were favourites to reach the final. 
Only a massive defeat by England could deny them: but that is precisely what they suffered. In front of a crowd of about 6,000 at Newlands, they were bowled out for 147. New Zealand joined England in the final, where a century from England 's Stephen Peters won the day. Chris Gayle was the tournament 's leading run - scorer, with 364 runs at an average of 72.80. West Indian Ramnaresh Sarwan and Zimbabwean Mluleki Nkala were the leading wicket - takers, with 16 wickets at 10.81 and 13.06 respectively. The 2000 tournament was held in Sri Lanka, and replicated the format from 1998. Participating nations included the nine Test - playing nations, as well as Bangladesh, Kenya, Ireland, Namibia, Holland, Nepal and a combined team from the Americas development region. To the disappointment of a large crowd at Colombo 's SSC, Sri Lanka fell at the final hurdle in a final dominated by India. The winners remained unbeaten throughout, and destroyed Australia by 170 runs in the semi-final to underline their supremacy. In the other semi-final, Sri Lanka delighted a crowd of 5000 at Galle by beating Pakistan. The fact that three of the four semi-finalists were from Asia and so more attuned to the conditions was coincidental - they played the better cricket and, in Pakistan 's case, had a very experienced squad. England, the defending champions, were most disappointing, and they won only one match against a Test - playing country, and that a last - ball victory over Zimbabwe. South Africa, one of the favourites, were desperately unlucky to be eliminated after three no - results gave them three points while Nepal, with four points courtesy of one win over Kenya, went through to the Super League instead. The format of the tournament was as in 1997 - 98, with four groups of four and then a Super League and final. Graeme Smith was the tournament 's leading run - scorer, with 348 runs at an average of 87.00. Pakistan 's Zahid Saeed was the leading wicket - taker, with 15 wickets at 7.60. India 's Yuvraj Singh was named Man of the Series. India clinched the title for the first time under the captaincy of Mohammed Kaif. The fourth Under - 19 World Cup held in New Zealand only confirmed Australia 's dominance of the game, and from their opening match, when they obliterated Kenya by 430 runs, through to their comprehensive victory over South Africa in the final, they were never threatened. Participating nations included the ten Test - playing nations, plus Canada, Kenya, Namibia, Nepal, Papua New Guinea, and Scotland. Their captain, Cameron White, was singled out for praise for his leadership, and he chipped in with 423 runs at 70.50. And they did n't rely on pace either, playing only two seamers and four slow bowlers, with Xavier Doherty, a slow left - armer, leading the wicket - takers with 16 at 9.50 and all without a single wide. In contrast, India, the holders, underperformed in their semi-final against South Africa, a team they had easily beaten a week or so earlier. They also suffered embarrassing defeats to neighbours Pakistan and Bangladesh. Pakistan, however, provided the main upset when they lost to Nepal by 30 runs, and Nepal also gave England a few uneasy moments. Zimbabwe won the plate competition, with their expected opponents, Bangladesh, beaten in the semi-final by Nepal. Australian Cameron White was the tournament 's leading run - scorer, with 423 runs at an average of 70.50 and Xavier Doherty was the leading wicket - taker, with 16 wickets at 9.50. 
Tatenda Taibu, Zimbabwe 's captain, was Man of the Series for his 250 runs and 12 wickets, not to mention his wicket - keeping in between bowling stints. The 2004 tournament was held in Bangladesh. More than 350,000 spectators saw the 54 matches played in the tournament. The tournament ended with a close final between the two best teams - West Indies and Pakistan. Pakistan won by 25 runs, and a 30,000 - strong crowd acclaimed the victorious Pakistanis almost as their own. The players, from the ten Test countries and six other nations, were feted wherever they went, and the appetite for cricket was remarkable: even Zimbabwe v Canada sold out. The shock was the elimination from the main competition of holders Australia, bowled out for 73 and beaten by Zimbabwe in the group stage when Tinashe Panyangara took 6 for 31, the second - best figures in the competition 's history. And Australia then lost to Bangladesh in the plate final amid thumping drums and gleeful celebrations. The downside was the quality of the cricket, which was often mediocre on some indifferent pitches, and the reporting of six unidentified bowlers for having suspect actions. Pakistan would have finished unbeaten but for a hiccup against England - when both teams had already qualified for the semis. England reached the last four, which was progress, and Alastair Cook looked a class apart. But they came unstuck against West Indies ' spinners in the semi-final. India completed the semi-final line - up. Shikhar Dhawan and Suresh Raina were the backbone of a strong batting line - up, and Raina 's 90 from just 38 balls against the hapless Scots was as brutal an innings as one will see at any level. Both looked international - class already, though faced with a tough task breaking into their senior side 's formidable top order. The captain Ambati Rayudu had been hailed as the next great batting hope, having scored a century and a double in a first - class match at the age of 17. But he did not score the runs promised and was banned by the referee John Morrison from the semi-final after allowing a funereal over-rate during the Super League win against Sri Lanka: eight overs were bowled in the first 50 minutes. India 's Shikhar Dhawan was named Man of the Tournament, and was the tournament 's leading run - scorer, with 505 runs at an average of 84.16. Bangladeshi Enamul Haque was the leading wicket - taker, with 22 wickets at 10.18. The 2006 tournament, held in Sri Lanka, was always going to struggle to live up to the overwhelming response that greeted the previous event in Bangladesh. Despite free tickets, the matches were sparsely attended even when the home side were in action, but it should n't detract from an impressive two weeks which finished with Pakistan securing their second consecutive title in an extraordinary final against India at the Premadasa Stadium. Pakistan crumbled to 109, but in a thrilling passage of play reduced India to 9 for 6. Nasir Jamshed and Anwar Ali, two of the success stories of the tournament, did the damage, and there was no way back for India, who fell 38 runs short. These two teams and Australia were the pick of the sides and along with England - who surpassed expectation to reach the semi-finals after beating a talented Bangladesh side - made up the final four.
A number of players caught the eye, notably Australia captain Moises Henriques, the Indian batsmen Cheteshwar Pujara - the tournament 's leading run - scorer - and teammate Rohit Sharma, along with legspinner Piyush Chawla, who a few weeks later made his Test debut against England. However, perhaps the best story of the tournament was Nepal claiming the Plate trophy after a thrilling victory against New Zealand, having also beaten South Africa during the event. It was the first time the tournament was held in an Associate Member country. The 2008 Under - 19 Cricket World Cup was held in Malaysia from 17 February to 2 March 2008. Along with the hosts, 15 other teams battled in 44 matches packed into 15 days across three cities. India, still smarting from the loss in the previous edition, had reason to be upbeat with Tanmay Srivastava, a mature batsman who eventually finished as the tournament 's leading run - getter, in their ranks. Australia and England had forgettable campaigns, coming up short against the big teams after making mincemeat of the minnows. Defending champions Pakistan were fortuitous to reach the semi-finals as their batsmen never really got going and, against South Africa in the semi-finals, their luck finally ran out while chasing 261. New Zealand, boosted by Man of the Tournament Tim Southee, were impressive before losing to India in a narrow run - chase under lights and cloudy skies in the other semi-final. South Africa 's captain Wayne Parnell had played a major role in ensuring their passage to the summit clash, picking up the most wickets in the tournament en route. But they had lost to India in the group stages and lightning did strike twice. India, under the leadership of Virat Kohli (now the India captain), after being bowled out for 159, emerged triumphant by 12 runs under the D / L method and were crowned champions for the second time. The 2010 Under - 19 Cricket World Cup was held in New Zealand in January 2010. The tournament was hosted in New Zealand after the ICC took it away from Kenya on the flimsiest of reasons, which ridiculed its own mission to spread the game. Kenya were further kicked by the ICC as their side was not allowed to participate as it had not won the African qualifying event - a weakened side had been fielded because, as hosts at the time, they did not need to qualify. As it was, New Zealand did a decent job but crowds were dismal and the group stages were as tediously predictable as in the senior tournament, with the better - funded big nations dominating. South Africa did beat Australia in a good match, but it was a dead rubber. The competition came alive in the quarter - finals as West Indies beat England and Sri Lanka defeated South Africa. The best tie of the competition came when Pakistan beat fierce rivals India by two wickets with three balls remaining in a low - scoring match. The final between Australia and Pakistan was a rematch of the first tournament, and Australia won by 25 runs in a game where fortunes ebbed and flowed throughout. The 2012 Under - 19 Cricket World Cup was held at the Tony Ireland Stadium in Australia. Along with the ten Test - playing nations, Afghanistan, Nepal, Papua New Guinea, Ireland, Scotland and Namibia also participated in this tournament. Australia lost to India in the final on 26 August 2012. India 's third U19 World Cup meant they tied for the most wins with Australia. Sri Lanka could not go through into the last eight but won the Plate championship by defeating Afghanistan by 7 wickets.
Reece Topley of England was the highest wicket - taker, whereas Anamul Haque of Bangladesh was the top run - getter. India won the final against Australia with 14 balls to spare and 6 wickets remaining. Captain Unmukt Chand played a match - winning knock of an unbeaten 111 from 130 balls, with the help of 6 sixes and 7 fours. Sandeep Sharma also excelled with four wickets under his belt. The 2014 Under - 19 Cricket World Cup was held in Dubai, U.A.E. It was the first time that U.A.E. had hosted an ICC event. Afghanistan was the only non-full member to qualify for the Quarter Finals. This was the first time that Afghanistan reached the last eight of this tournament, courtesy of their stellar performance against Australia in the group stage. In fact, this was the second time that a non-test playing nation qualified for the Super League / Quarter Finals, Nepal being the first one in the 2000 edition. India wobbled in the Quarter Finals against England and finally lost in the final over. This was the first semi-final berth for England in the last four editions. Pakistan beat England in the semis to reach its fifth Under - 19 Final, becoming the first team to do so. South Africa beat Australia in the second semi-final. In a one - sided final, South Africa beat Pakistan and claimed its maiden U-19 World Cup title. Corbin Bosch, son of the late former South African cricketer Tertius Bosch, was the Man of the Match in the final and Aiden Markram was the Man of the Series. South Africa did not lose a single match in the entire tournament. The 2016 Under - 19 Cricket World Cup was held in Bangladesh. It was the eleventh edition of the Under - 19 World Cup, and the second to be held in Bangladesh. On 5 January 2016, Australia announced that its squad had pulled out of the tournament, citing security reasons. Defending champions South Africa were knocked out of the tournament in the group stage, with back - to - back defeats to Bangladesh and Namibia. This was the first time that two non-test playing nations -- Nepal and Namibia -- qualified for the Super League / Quarterfinals. The West Indies defeated India by five wickets in the final, claiming their first title. Bangladesh 's captain Mehedi Hasan was named Player of the Tournament, while England 's Jack Burnham and Namibia 's Fritz Coetzee led the tournament in runs and wickets, respectively. The 2018 Under - 19 Cricket World Cup was held in New Zealand. India and Australia played in the final at Mount Maunganui on 3 February 2018. It was the 12th edition of the Under - 19 World Cup. India defeated Australia by 8 wickets, with Manjot Kalra scoring a match - winning 101 *. The Man of the Match award went to Manjot Kalra, while Player of the Tournament was awarded to Shubman Gill. India now holds the record for the most wins in the Under - 19 World Cup. In the table below, teams are sorted by best performance, then winning percentage, then (if equal) by alphabetical order. Note: the win percentage excludes no results and counts ties as half a win (a worked illustration follows below). Note: age restrictions were relaxed for some teams at the early editions of the tournament.
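As a brief illustration of the win - percentage note above (a sketch using hypothetical figures, not taken from the actual tournament records), the calculation described can be written as:

\[
\text{Win\%} = \frac{W + 0.5\,T}{W + L + T} \times 100
\]

where \(W\), \(L\) and \(T\) are a team 's wins, losses and ties, and matches with no result are excluded from the denominator. For example, a side with 10 wins, 4 losses, 1 tie and 2 no - results would be credited with \(\frac{10 + 0.5}{15} \times 100 = 70\%\).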
the difference between a team leader and manager
Team leader - wikipedia A team leader is someone who provides guidance, instruction, direction and leadership to a group of individuals (the team) for the purpose of achieving a key result or group of aligned results. The team leader monitors the quantitative and qualitative achievements of the team and reports results to a manager (a manager may oversee multiple teams). The leader often works within the team, as a member, carrying out the same roles but with the additional ' leader ' responsibilities - as opposed to higher - level management, who often have a separate job role altogether. In order for a team to function successfully, the team leader must also motivate the team to "use their knowledge and skills to achieve the shared goals. '' When a team leader motivates a team, group members can function in a goal - oriented manner. A "team leader '' is also someone who has the capability to drive performance within a group of people. Team leaders utilize their expertise, their peers, influence, and / or creativity to build an effective team. Scouller (2011) defined the purpose of a leader (including a team leader) as follows: "The purpose of a leader is to make sure there is leadership... to ensure that all four dimensions of leadership are (being addressed). '' The four dimensions are: (1) a shared, motivating team purpose, vision or goal; (2) action, progress and results; (3) collective unity or team spirit; and (4) attention to individuals. Leaders also contribute by leading through example. While the distinction between leader and manager may be confusing, the difference between the two is that a manager focuses more on organization and keeping the team on task while a team leader is more akin to an artist and tends to have a more creative - minded approach to problems. Team leaders can also be described as entrepreneurial and forward - thinking. Realistically, team leaders tend to manage a group or team consisting of fewer people than a manager would be in charge of. The line manager and team manager functions are hybrid forms of leader and manager. They have a completely different job role than the team members and manage larger teams. The line manager and team manager report to middle or high management. Team leaders are expected to be focused on solving problems. Under a manager 's watch, a team should function as smoothly and efficiently as possible. This form of leadership stresses a practical approach to the work environment that instills discipline throughout the team or organization. Managers can be trained to lead a team to great heights within a certain set of limits. The creativity and critical thinking required are not as demanding as those required of a true leader or entrepreneur. While managers need to be tolerant and able to create goodwill with the team and perhaps clients, they do not necessarily need to be hard - working, intelligent, or analytical. Instead, managers are trained for a specific purpose. Entrepreneurs use a vision of what they see as success to guide their actions. Managers tend to set goals that prioritize necessities and the culture of the organization over all else. Leaders, on the other hand, are progressive and want to set goals based on their personal wants and desires. One way of looking at it is to think of a business as wanting to perform and innovate only to the point that it thinks its customers would be interested in buying a product or service.
An innovative spirit in a leader is what propels him to create something unique. He will use this single - minded passion to inspire and push others around him to greater heights. Instead of being reactive to the wants of others, leaders will be active in pursuing their goals. The resulting desires and objectives push the organization in the direction of the leader 's vision. Managers also tend to view work as something that warrants coercion through a reward and punishment system. Managers lean toward limiting and narrowing the number of solutions available to make sure there is consistency and efficiency. Leaders move in the opposite direction and try to incorporate fresh solutions to new problems. They energize those around them with exciting images of what could be. This comes down to a fundamental character trait in which managers tend to be risk - averse while leaders are more risk - seeking. Where managers will work methodically to make sure everyday tasks go smoothly, leaders will have a difficult time staying focused when given the same tasks. Both leaders and managers tend to build relationships with those working under them. That said, it is important to note the type of relationship being built. Managers tend to maintain a distance from those that work under them by showing little or no empathy for them. Leaders, on the other hand, are very empathetic to their employees and those that they lead. The result is that followers, or employees, are motivated to work and pursue a common goal held by the leader and the rest of the group. In intergroup conflicts and relationships, the manager 's sole focus is usually turning a win - lose situation into a win - win situation or maintaining the win - win situation. This leads to a desensitization of the manager 's views towards his employees ' feelings. For managers, relationships are not so much about creating a great work environment as about maintaining a balance of power. According to William James, there are two basic personality types: once - borns and twice - borns. Once - borns generally have stable childhoods and upbringings that lead them to be more conservative in their views. They strive for harmony in their environment and use their own sense of self as their guide. Twice - borns are the exact opposite. People who are twice - borns generally have an upbringing that is defined by a struggle to create some sort of order in their lives. As a result, these individuals tend to strive to separate themselves from their peers and society. Their self - perception is not based on where they work, what organizations they are a part of, or even what they have already done in the past. Instead, they are driven by the desire to create change. Managers show the traits of once - borns while leaders exhibit the traits of twice - borns. Leaders see themselves as separate from the rest and try to live out this sense of self by becoming entrepreneurs or great political leaders, or even by chasing any endeavor that they feel will differentiate them. Managers want to maintain their harmonious environment and commit their lives to making sure nothing causes disturbances. While traditional leadership has maintained that one person generally leads several groups, each with their own leadership hierarchy, the concertive style of leadership gives the power to the group.
While there will generally be a management group responsible for bigger decisions about the direction of the company or organization, the workers get to develop their own set of values and rules to govern themselves. This includes task division, problem solving, day - to - day functions, group prioritization, and internal conflict resolution. Instead of a manager or leader being responsible for producing the results, management expects the burden to fall on each individual member of the group. By establishing a set of values, rules, and norms, these groups can go on to manage themselves, usually with success. In a holacracy, people hold multiple roles, which is intended to increase efficiency, confidence, and communication in the workplace. This model was adopted by Zappos, because they had "gone from being a fast speedboat to a cruise ship ''. While many cite the added workload and the large learning curve as obstacles to implementing the system, most workers are happier than they were under a managerial organizational structure.
the supreme court's power of judicial review was established by
Judicial review in the United States - wikipedia In the United States, judicial review is the ability of a court to examine and decide if a statute, treaty or administrative regulation contradicts or violates the provisions of existing law, a State Constitution, or ultimately the United States Constitution. While the U.S. Constitution does not explicitly define a power of judicial review, the authority for judicial review in the United States has been inferred from the structure, provisions, and history of the Constitution. Two landmark decisions by the U.S. Supreme Court served to confirm the inferred constitutional authority for judicial review in the United States: In 1796, Hylton v. United States was the first case decided by the Supreme Court involving a direct challenge to the constitutionality of an act of Congress, the Carriage Act of 1794 which imposed a "carriage tax ''. The Court engaged in the process of judicial review by examining the plaintiff 's claim that the carriage tax was unconstitutional. After review, the Supreme Court decided the Carriage Act was constitutional. In 1803, Marbury v. Madison was the first Supreme Court case where the Court asserted its authority for judicial review to strike down a law as unconstitutional. At the end of his opinion in this decision, Chief Justice John Marshall maintained that the Supreme Court 's responsibility to overturn unconstitutional legislation was a necessary consequence of their sworn oath of office to uphold the Constitution as instructed in Article Six of the Constitution. As of 2014, the United States Supreme Court has held 176 Acts of the U.S. Congress unconstitutional. Before the Constitutional Convention in 1787, the power of judicial review had been exercised in a number of states. In the years from 1776 to 1787, state courts in at least seven of the thirteen states had engaged in judicial review and had invalidated state statutes because they violated the state constitution or other higher law. These state courts treated state constitutions as statements of governing law to be interpreted and applied by judges. These courts reasoned that because their state constitution was the fundamental law of the state, they must apply the state constitution rather than an act of the legislature that was inconsistent with the state constitution. These state court cases involving judicial review were reported in the press and produced public discussion and comment. At least seven of the delegates to the Constitutional Convention, including Alexander Hamilton and Edmund Randolph, had personal experience with judicial review because they had been lawyers or judges in these state court cases involving judicial review. Other delegates referred to some of these state court cases during the debates at the Constitutional Convention. The concept of judicial review therefore was familiar to the framers and to the public before the Constitutional Convention. The text of the Constitution does not contain a specific reference to the power of judicial review. Rather, the power to declare laws unconstitutional has been deemed an implied power, derived from Article III and Article VI. The provisions relating to the federal judicial power in Article III state: The judicial power of the United States, shall be vested in one Supreme Court, and in such inferior courts as the Congress may from time to time ordain and establish...
The judicial power shall extend to all cases, in law and equity, arising under this Constitution, the laws of the United States, and treaties made, or which shall be made, under their authority... In all cases affecting ambassadors, other public ministers and consuls, and those in which a state shall be party, the Supreme Court shall have original jurisdiction. In all the other cases before mentioned, the Supreme Court shall have appellate jurisdiction, both as to law and fact, with such exceptions, and under such regulations as the Congress shall make. The Supremacy Clause of Article VI states: This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding... (A) ll executive and judicial Officers, both of the United States and of the several States, shall be bound by Oath or Affirmation, to support this Constitution. The power of judicial review has been implied from these provisions based on the following reasoning. It is the inherent duty of the courts to determine the applicable law in any given case. The Supremacy Clause says "(t) his Constitution '' is the "supreme law of the land. '' The Constitution therefore is the fundamental law of the United States. Federal statutes are the law of the land only when they are "made in pursuance '' of the Constitution. State constitutions and statutes are valid only if they are consistent with the Constitution. Any law contrary to the Constitution is void. The federal judicial power extends to all cases "arising under this Constitution. '' As part of their inherent duty to determine the law, the federal courts have the duty to interpret and apply the Constitution and to decide whether a federal or state statute conflicts with the Constitution. All judges are bound to follow the Constitution. If there is a conflict, the federal courts have a duty to follow the Constitution and to treat the conflicting statute as unenforceable. The Supreme Court has final appellate jurisdiction in all cases arising under the Constitution, so the Supreme Court has the ultimate authority to decide whether statutes are consistent with the Constitution. During the debates at the Constitutional Convention, the Founding Fathers made a number of references to the concept of judicial review. The greatest number of these references occurred during the discussion of the proposal known as the Virginia Plan. The Virginia Plan included a "council of revision '' that would have examined proposed new federal laws and would have accepted or rejected them, similar to today 's presidential veto. The "council of revision '' would have included the President along with some federal judges. Several delegates objected to the inclusion of federal judges on the council of revision. They argued the federal judiciary, through its power to declare laws unconstitutional, already had the opportunity to protect against legislative encroachment, and the judiciary did not need a second way to negate laws by participating in the council of revision. For example, Elbridge Gerry said federal judges "would have a sufficient check against encroachments on their own department by their exposition of the laws, which involved a power of deciding on their constitutionality. 
In some states the judges had actually set aside laws, as being against the constitution. This was done too with general approbation. '' Luther Martin said: "(A) s to the constitutionality of laws, that point will come before the judges in their official character. In this character they have a negative on the laws. Join them with the executive in the revision, and they will have a double negative. '' These and other similar comments by the delegates indicated that the federal courts would have the power of judicial review. Other delegates argued that if federal judges were involved in the law - making process through participation on the council of revision, their objectivity as judges in later deciding on the constitutionality of those laws could be impaired. These comments indicated a belief that the federal courts would have the power to declare laws unconstitutional. At several other points in the debates at the Constitutional Convention, delegates made comments indicating their belief that under the Constitution, federal judges would have the power of judicial review. For example, James Madison said: "A law violating a constitution established by the people themselves, would be considered by the Judges as null & void. '' George Mason said that federal judges "could declare an unconstitutional law void. '' However, Mason added that the power of judicial review is not a general power to strike down all laws, but only ones that are unconstitutional: But with regard to every law however unjust, oppressive or pernicious, which did not come plainly under this description, they would be under the necessity as Judges to give it a free course. In all, fifteen delegates from nine states made comments regarding the power of the federal courts to review the constitutionality of laws. All but two of them supported the idea that the federal courts would have the power of judicial review. Some delegates to the Constitutional Convention did not speak about judicial review during the Convention, but did speak about it before or after the Convention. Including these additional comments by Convention delegates, scholars have found that twenty - five or twenty - six of the Convention delegates made comments indicating support for judicial review, while three to six delegates opposed judicial review. One review of the debates and voting records of the convention counted as many as forty delegates who supported judicial review, with four or five opposed. In their comments relating to judicial review, the framers indicated that the power of judges to declare laws unconstitutional was part of the system of separation of powers. The framers stated that the courts ' power to declare laws unconstitutional would provide a check on the legislature, protecting against excessive exercise of legislative power. Judicial review was discussed in at least seven of the thirteen state ratifying conventions, and was mentioned by almost two dozen delegates. In each of these conventions, delegates asserted that the proposed Constitution would allow the courts to exercise judicial review. There is no record of any delegate to a state ratifying convention who indicated that the federal courts would not have the power of judicial review. 
For example, James Wilson asserted in the Pennsylvania ratifying convention that federal judges would exercise judicial review: "If a law should be made inconsistent with those powers vested by this instrument in Congress, the judges, as a consequence of their independence, and the particular powers of government being defined, will declare such law to be null and void. For the power of the Constitution predominates. Anything, therefore, that shall be enacted by Congress contrary thereto will not have the force of law. '' In the Connecticut ratifying convention, Oliver Ellsworth likewise described judicial review as a feature of the Constitution: "This Constitution defines the extent of the powers of the general government. If the general legislature should at any time overleap their limits, the judicial department is a constitutional check. If the United States go beyond their powers, if they make a law which the Constitution does not authorize, it is void; and the judicial power, the national judges, who, to secure their impartiality, are to be made independent, will declare it to be void. '' During the ratification process, supporters and opponents of ratification published pamphlets, essays, and speeches debating various aspects of the Constitution. Publications by over a dozen authors in at least twelve of the thirteen states asserted that under the Constitution, the federal courts would have the power of judicial review. There is no record of any opponent to the Constitution who claimed that the Constitution did not involve a power of judicial review. After reviewing the statements made by the founders, one scholar concluded: "The evidence from the Constitutional Convention and from the state ratification conventions is overwhelming that the original public meaning of the term ' judicial power ' (in Article III) included the power to nullify unconstitutional laws. '' The Federalist Papers, which were published in 1787 -- 1788 to promote ratification of the Constitution, made several references to the power of judicial review. The most extensive discussion of judicial review was in Federalist No. 78, written by Alexander Hamilton, which clearly explained that the federal courts would have the power of judicial review. Hamilton stated that under the Constitution, the federal judiciary would have the power to declare laws unconstitutional. Hamilton asserted that this was appropriate because it would protect the people against abuse of power by Congress: (T) he courts were designed to be an intermediate body between the people and the legislature, in order, among other things, to keep the latter within the limits assigned to their authority. The interpretation of the laws is the proper and peculiar province of the courts. A constitution is, in fact, and must be regarded by the judges, as a fundamental law. It therefore belongs to them to ascertain its meaning, as well as the meaning of any particular act proceeding from the legislative body. If there should happen to be an irreconcilable variance between the two, that which has the superior obligation and validity ought, of course, to be preferred; or, in other words, the Constitution ought to be preferred to the statute, the intention of the people to the intention of their agents. Nor does this conclusion by any means suppose a superiority of the judicial to the legislative power. 
It only supposes that the power of the people is superior to both; and that where the will of the legislature, declared in its statutes, stands in opposition to that of the people, declared in the Constitution, the judges ought to be governed by the latter rather than the former. They ought to regulate their decisions by the fundamental laws, rather than by those which are not fundamental... (A) ccordingly, whenever a particular statute contravenes the Constitution, it will be the duty of the Judicial tribunals to adhere to the latter and disregard the former... (T) he courts of justice are to be considered as the bulwarks of a limited Constitution against legislative encroachments. In Federalist No. 80, Hamilton rejected the idea that the power to decide the constitutionality of an act of Congress should lie with each of the states: "The mere necessity of uniformity in the interpretation of the national laws, decides the question. Thirteen independent courts of final jurisdiction over the same causes, arising upon the same laws, is a hydra in government, from which nothing but contradiction and confusion can proceed. '' Consistent with the need for uniformity in interpretation of the Constitution, Hamilton explained in Federalist No. 82 that the Supreme Court has authority to hear appeals from the state courts in cases relating to the Constitution. The arguments against ratification by the Anti-Federalists agreed that the federal courts would have the power of judicial review, though the Anti-Federalists viewed this negatively. Robert Yates, writing under the pseudonym "Brutus '', stated: (T) he judges under this constitution will control the legislature, for the supreme court are authorised in the last resort, to determine what is the extent of the powers of the Congress. They are to give the constitution an explanation, and there is no power above them to set aside their judgment... The supreme court then have a right, independent of the legislature, to give a construction to the constitution and every part of it, and there is no power provided in this system to correct their construction or do it away. If, therefore, the legislature pass any laws, inconsistent with the sense the judges put upon the constitution, they will declare it void. The first Congress passed the Judiciary Act of 1789, establishing the lower federal courts and specifying the details of federal court jurisdiction. Section 25 of the Judiciary Act provided for the Supreme Court to hear appeals from state courts when the state court decided that a federal statute was invalid, or when the state court upheld a state statute against a claim that the state statute was repugnant to the Constitution. This provision gave the Supreme Court the power to review state court decisions involving the constitutionality of both federal statutes and state statutes. The Judiciary Act thereby incorporated the concept of judicial review. Between the ratification of the Constitution in 1788 and the decision in Marbury v. Madison in 1803, judicial review was employed in both the federal and state courts. A detailed analysis has identified thirty - one state or federal cases during this time in which statutes were struck down as unconstitutional, and seven additional cases in which statutes were upheld but at least one judge concluded the statute was unconstitutional. 
The author of this analysis, Professor William Treanor, concluded: "The sheer number of these decisions not only belies the notion that the institution of judicial review was created by Chief Justice Marshall in Marbury, it also reflects widespread acceptance and application of the doctrine. '' Several other cases involving judicial review issues reached the Supreme Court before the issue was definitively decided in Marbury in 1803. In Hayburn 's Case, 2 U.S. (2 Dall.) 408 (1792), federal circuit courts held an act of Congress unconstitutional for the first time. Three federal circuit courts found that Congress had violated the Constitution by passing an act requiring circuit court judges to decide pension applications, subject to the review of the Secretary of War. These circuit courts found that this was not a proper judicial function under Article III. These three decisions were appealed to the Supreme Court, but the appeals became moot when Congress repealed the statute while the appeals were pending. In an unreported Supreme Court decision in 1794, United States v. Yale Todd, the Supreme Court reversed a pension that was awarded under the same pension act that had been at issue in Hayburn 's Case. The Court apparently decided that the act designating judges to decide pensions was not constitutional because this was not a proper judicial function. This apparently was the first Supreme Court case to find an act of Congress unconstitutional. However, there was not an official report of the case and it was not used as a precedent. Hylton v. United States, 3 U.S. (3 Dall.) 171 (1796), was the first case decided by the Supreme Court that involved a challenge to the constitutionality of an act of Congress. It was argued that a federal tax on carriages violated the constitutional provision regarding "direct '' taxes. The Supreme Court upheld the tax, finding it was constitutional. Although the Supreme Court did not strike down the act in question, the Court engaged in the process of judicial review by considering the constitutionality of the tax. The case was widely publicized at the time, and observers understood that the Court was testing the constitutionality of an act of Congress. Because it found the statute valid, the Court did not have to assert that it had the power to declare a statute unconstitutional. In Ware v. Hylton, 3 U.S. (3 Dall.) 199 (1796), the Supreme Court for the first time struck down a state statute. The Court reviewed a Virginia statute regarding pre-Revolutionary war debts and found that it was inconsistent with the peace treaty between the United States and Great Britain. Relying on the Supremacy Clause, the Court found the Virginia statute invalid. In Hollingsworth v. Virginia, 3 U.S. (3 Dall.) 378 (1798), the Supreme Court found that it did not have jurisdiction to hear the case because of the jurisdiction limitations of the Eleventh Amendment. This holding could be viewed as an implicit finding that the Judiciary Act of 1789, which would have allowed the Court jurisdiction, was unconstitutional in part. However, the Court did not provide any reasoning for its conclusion and did not say that it was finding the statute unconstitutional. In Cooper v. Telfair, 4 U.S. (4 Dall.) 
14 (1800), Justice Chase stated: "It is indeed a general opinion -- it is expressly admitted by all this bar and some of the judges have, individually in the circuits decided, that the Supreme Court can declare an act of Congress to be unconstitutional, and therefore invalid, but there is no adjudication of the Supreme Court itself upon the point. '' In 1798, the Kentucky and Virginia legislatures passed a series of resolutions asserting that the states have the power to determine whether acts of Congress are constitutional. In response, ten states passed their own resolutions disapproving the Kentucky and Virginia resolutions. Six of these states took the position that the power to declare acts of Congress unconstitutional lies in the federal courts, not in the state legislatures. For example, Vermont 's resolution stated: "It belongs not to state legislatures to decide on the constitutionality of laws made by the general government; this power being exclusively vested in the judiciary courts of the Union. '' Thus, five years before Marbury v. Madison, a number of state legislatures stated their understanding that under the Constitution, the federal courts possess the power of judicial review. The Supreme Court 's landmark decision regarding judicial review is Marbury v. Madison, 5 U.S. (1 Cranch) 137 (1803). Marbury was the first Supreme Court decision to strike down an act of Congress as unconstitutional. Chief Justice John Marshall wrote the opinion for a unanimous Court. The case arose when William Marbury filed a lawsuit seeking an order (a "writ of mandamus '') requiring the Secretary of State, James Madison, to deliver to Marbury a commission appointing him as a justice of the peace. Marbury filed his case directly in the Supreme Court, invoking the Court 's "original jurisdiction '', rather than filing in a lower court. The constitutional issue involved the question of whether the Supreme Court had jurisdiction to hear the case. The Judiciary Act of 1789 gave the Supreme Court original jurisdiction in cases involving writs of mandamus. So, under the Judiciary Act, the Supreme Court would have had jurisdiction to hear Marbury 's case. However, the Constitution describes the cases in which the Supreme Court has original jurisdiction, and does not include mandamus cases. The Judiciary Act therefore attempted to give the Supreme Court jurisdiction that was not "warranted by the Constitution. '' Marshall 's opinion stated that in the Constitution, the people established a government of limited powers: "The powers of the Legislature are defined and limited; and that those limits may not be mistaken or forgotten, the Constitution is written. '' The limits established in the Constitution would be meaningless "if these limits may at any time be passed by those intended to be restrained. '' Marshall observed that the Constitution is "the fundamental and paramount law of the nation '', and that it can not be altered by an ordinary act of the legislature. Therefore, "an act of the Legislature repugnant to the Constitution is void. '' Marshall then discussed the role of the courts, which is at the heart of the doctrine of judicial review. It would be an "absurdity '', said Marshall, to require the courts to apply a law that is void. Rather, it is the inherent duty of the courts to interpret and apply the Constitution, and to determine whether there is a conflict between a statute and the Constitution: It is emphatically the province and duty of the Judicial Department to say what the law is. 
Those who apply the rule to particular cases must, of necessity, expound and interpret that rule. If two laws conflict with each other, the Courts must decide on the operation of each. So, if a law be in opposition to the Constitution, if both the law and the Constitution apply to a particular case, so that the Court must either decide that case conformably to the law, disregarding the Constitution, or conformably to the Constitution, disregarding the law, the Court must determine which of these conflicting rules governs the case. This is of the very essence of judicial duty. If, then, the Courts are to regard the Constitution, and the Constitution is superior to any ordinary act of the Legislature, the Constitution, and not such ordinary act, must govern the case to which they both apply... Marshall stated that the courts are authorized by the provisions of the Constitution itself to "look into '' the Constitution, that is, to interpret and apply it, and that they have the duty to refuse to enforce any laws that are contrary to the Constitution. Specifically, Article III provides that the federal judicial power "is extended to all cases arising under the Constitution. '' Article VI requires judges to take an oath "to support this Constitution. '' Article VI also states that only laws "made in pursuance of the Constitution '' are the law of the land. Marshall concluded: "Thus, the particular phraseology of the Constitution of the United States confirms and strengthens the principle, supposed to be essential to all written Constitutions, that a law repugnant to the Constitution is void, and that courts, as well as other departments, are bound by that instrument. '' Marbury long has been regarded as the seminal case with respect to the doctrine of judicial review. Some scholars have suggested that Marshall 's opinion in Marbury essentially created judicial review. In his book The Least Dangerous Branch, Professor Alexander Bickel wrote: (T) he institution of the judiciary needed to be summoned up out of the constitutional vapors, shaped, and maintained. And the Great Chief Justice, John Marshall -- not single - handed, but first and foremost -- was there to do it and did. If any social process can be said to have been ' done ' at a given time, and by a given act, it is Marshall 's achievement. The time was 1803; the act was the decision in the case of Marbury v. Madison. Other scholars view this as an overstatement, and argue that Marbury was decided in a context in which judicial review already was a familiar concept. These scholars point to the facts showing that judicial review was acknowledged by the Constitution 's framers, was explained in the Federalist Papers and in the ratification debates, and was used by both state and federal courts for more than twenty years before Marbury, including the Supreme Court in Hylton v. United States. One scholar concluded: "(B) efore Marbury, judicial review had gained wide support. '' Marbury was the point at which the Supreme Court adopted a monitoring role over government actions. After the Court exercised its power of judicial review in Marbury, it avoided striking down a federal statute during the next fifty years. The court would not do so again until Dred Scott v. Sandford, 60 U.S. (19 How.) 393 (1857). However, the Supreme Court did exercise judicial review in other contexts. In particular, the Court struck down a number of state statutes that were contrary to the Constitution. 
The first case in which the Supreme Court struck down a state statute as unconstitutional was Fletcher v. Peck, 10 U.S. (6 Cranch) 87 (1810). In a few cases, state courts took the position that their judgments were final and were not subject to review by the Supreme Court. They argued that the Constitution did not give the Supreme Court the authority to review state court decisions. They asserted that the Judiciary Act of 1789, which provided that the Supreme Court could hear certain appeals from state courts, was unconstitutional. In effect, these state courts were asserting that the principle of judicial review did not extend to allow federal review of state court decisions. This would have left the states free to adopt their own interpretations of the Constitution. The Supreme Court rejected this argument. In Martin v. Hunter 's Lessee, 14 U.S. (1 Wheat.) 304 (1816), the Court held that under Article III, the federal courts have jurisdiction to hear all cases arising under the Constitution and laws of the United States, and that the Supreme Court has appellate jurisdiction in all such cases, whether those cases are filed in state or federal courts. The Court issued another decision to the same effect in the context of a criminal case, Cohens v. Virginia, 19 U.S. (6 Wheat.) 264 (1821). It is now well established that the Supreme Court may review decisions of state courts that involve federal law. The Supreme Court also has reviewed actions of the federal executive branch to determine whether those actions were authorized by acts of Congress or were beyond the authority granted by Congress. Judicial review is now well established as a cornerstone of constitutional law. As of September 2017, the United States Supreme Court had held unconstitutional portions or the entirety of some 182 Acts of the U.S. Congress, the most recent in the Supreme Court 's June 2017 Matal v. Tam decision striking down a portion of July 1946 's Lanham Act. Although judicial review has now become an established part of constitutional law in the United States, there are some who disagree with the doctrine. At the Constitutional Convention, neither proponents nor opponents of judicial review disputed that any government based on a written constitution requires some mechanism to prevent laws that violate that constitution from being made and enforced. Otherwise, the document would be meaningless, and the legislature, with the power to enact any laws whatsoever, would be the supreme arm of government (the British doctrine of parliamentary sovereignty). The delegates at the Convention differed with respect to the question of whether Congress or the judiciary should make determinations regarding constitutionality of statutes. Hamilton addressed this in Federalist No. 78, in which he explained the reasons that the federal judiciary has the role of reviewing the constitutionality of statutes: If it be said that the legislative body are themselves the constitutional judges of their own powers, and that the construction they put upon them is conclusive upon the other departments, it may be answered, that this can not be the natural presumption, where it is not to be collected from any particular provisions in the Constitution. It is not otherwise to be supposed, that the Constitution could intend to enable the representatives of the people to substitute their will to that of their constituents. 
It is far more rational to suppose, that the courts were designed to be an intermediate body between the people and the legislature, in order, among other things, to keep the latter within the limits assigned to their authority. Since the adoption of the Constitution, some have argued that the power of judicial review gives the courts the ability to impose their own views of the law, without an adequate check from any other branch of government. Robert Yates, a delegate to the Constitutional Convention from New York, argued during the ratification process in the Anti-Federalist Papers that the courts would use the power of judicial review loosely to impose their views about the "spirit '' of the Constitution: (I) n their decisions they will not confine themselves to any fixed or established rules, but will determine, according to what appears to them, the reason and spirit of the constitution. The opinions of the supreme court, whatever they may be, will have the force of law; because there is no power provided in the constitution, that can correct their errors, or controul their adjudications. From this court there is no appeal. In 1820, Thomas Jefferson expressed his opposition to the doctrine of judicial review: You seem... to consider the judges as the ultimate arbiters of all constitutional questions; a very dangerous doctrine indeed, and one which would place us under the despotism of an oligarchy. Our judges are as honest as other men, and not more so. They have, with others, the same passions for party, for power, and the privilege of their corps... Their power (is) the more dangerous as they are in office for life, and not responsible, as the other functionaries are, to the elective control. The Constitution has erected no such single tribunal, knowing that to whatever hands confided, with the corruptions of time and party, its members would become despots. It has more wisely made all the departments co-equal and co-sovereign within themselves. In 1861, Abraham Lincoln touched upon the same subject, during his first inaugural address: (T) he candid citizen must confess that if the policy of the Government upon vital questions affecting the whole people is to be irrevocably fixed by decisions of the Supreme Court, the instant they are made in ordinary litigation between parties in personal actions the people will have ceased to be their own rulers, having to that extent practically resigned their Government into the hands of that eminent tribunal. Nor is there in this view any assault upon the court or the judges. It is a duty from which they may not shrink to decide cases properly brought before them, and it is no fault of theirs if others seek to turn their decisions to political purposes. Lincoln was alluding here to the case of Dred Scott v. Sandford, in which the Court had struck down a federal statute for the first time since Marbury v. Madison. It has been argued that the judiciary is not the only branch of government that may interpret the meaning of the Constitution. Article VI requires federal and state officeholders to be bound "by Oath or Affirmation, to support this Constitution. '' It has been argued that such officials may follow their own interpretations of the Constitution, at least until those interpretations have been tested in court. Some have argued that judicial review is unconstitutional based on two arguments. First, the power of judicial review is not expressly delegated to the courts in the Constitution. 
The Tenth Amendment reserves to the states (or to the people) those powers not delegated to the federal government. The second argument is that the states alone have the power to ratify changes to the "supreme law '' (the U.S. Constitution), and that the states should play some role in interpreting its meaning. Under this theory, allowing only federal courts to definitively conduct judicial review of federal law allows the national government to interpret its own restrictions as it sees fit, with no meaningful input from the ratifying power. In the United States, unconstitutionality is the only ground for a federal court to strike down a federal statute. Justice Washington, speaking for the Marshall Court, put it this way in an 1829 case: We intend to decide no more than that the statute objected to in this case is not repugnant to the Constitution of the United States, and that unless it be so, this Court has no authority, under the 25th section of the judiciary act, to re-examine and to reverse the judgement of the supreme court of Pennsylvania in the present case. If a state statute conflicts with a valid federal statute, then courts may strike down the state statute as an unstatutable violation of the Supremacy Clause. But a federal court may not strike down a statute absent a violation of federal law or of the federal Constitution. Moreover, a suspicion or possibility of unconstitutionality is not enough for American courts to strike down a statute. Alexander Hamilton explained in Federalist 78 that the standard of review should be "irreconcilable variance '' with the Constitution. Anti-federalists agreed that courts would be unable to strike down federal statutes absent a conflict with the Constitution. For example, Robert Yates, writing under the pseudonym "Brutus '', asserted that "the courts of the general government (will) be under obligation to observe the laws made by the general legislature not repugnant to the constitution. '' These principles -- that federal statutes can only be struck down for unconstitutionality and that the unconstitutionality must be clear -- were very common views at the time of the framing of the Constitution. For example, George Mason explained during the constitutional convention that judges "could declare an unconstitutional law void. But with regard to every law, however unjust, oppressive or pernicious, which did not come plainly under this description, they would be under the necessity as Judges to give it a free course. '' For a number of years, the courts were relatively deferential to Congress. Justice Washington put it this way, in an 1827 case: "It is but a decent respect to the wisdom, integrity, and patriotism of the legislative body, by which any law is passed, to presume in favor of its validity, until its violation of the Constitution is proved beyond a reasonable doubt. '' Although judges usually adhered to this principle that a statute could only be deemed unconstitutional in case of a clear contradiction until the twentieth century, this presumption of constitutionality weakened somewhat during the twentieth century, as exemplified by the Supreme Court 's famous footnote four in United States v. Carolene Products Co., 304 U.S. 144 (1938), which suggested that statutes may be subjected to closer scrutiny in certain types of cases. Nevertheless, the federal courts have not departed from the principle that courts may only strike down statutes for unconstitutionality. 
Of course, the practical implication of this principle is that a court can not strike down a statute, even if it recognizes that the statute is obviously poorly drafted, irrational, or arises from legislators ' corrupt motives, unless the flaw in the statute rises to the level of a clear constitutional violation. In 2008, Justice John Paul Stevens reaffirmed this point in a concurring opinion: "(A) s I recall my esteemed former colleague, Thurgood Marshall, remarking on numerous occasions: ' The Constitution does not prohibit legislatures from enacting stupid laws. ' '' In the federal system, courts may only decide actual cases or controversies; it is not possible to request the federal courts to review a law without at least one party having legal standing to engage in a lawsuit. This principle means that courts sometimes do not exercise their power of review, even when a law is seemingly unconstitutional, for want of jurisdiction. In some state courts, such as the Massachusetts Supreme Judicial Court, legislation may be referred in certain circumstances by the legislature or by the executive for an advisory ruling on its constitutionality prior to its enactment (or enforcement). The U.S. Supreme Court seeks to avoid reviewing the Constitutionality of an act where the case before it could be decided on other grounds, an attitude and practice exemplifying judicial restraint. Justice Brandeis framed it thus (citations omitted): The Court developed, for its own governance in the cases within its jurisdiction, a series of rules under which it has avoided passing upon a large part of all the constitutional questions pressed upon it for decision. They are: Although the Supreme Court continues to review the constitutionality of statutes, Congress and the states retain some power to influence what cases come before the Court. For example, the Constitution at Article III, Section 2, gives Congress power to make exceptions to the Supreme Court 's appellate jurisdiction. The Supreme Court has historically acknowledged that its appellate jurisdiction is defined by Congress, and thus Congress may have power to make some legislative or executive actions unreviewable. This is known as jurisdiction stripping. Another way for Congress to limit judicial review was tried in January 1868, when a bill was proposed requiring a two - thirds majority of the Court in order to deem any Act of Congress unconstitutional. The bill was approved by the House, 116 to 39. That measure died in the Senate, partly because the bill was unclear about how the bill 's own constitutionality would be decided. Many other bills have been proposed in Congress that would require a supermajority in order for the justices to exercise judicial review. During the early years of the United States, a two - thirds majority was necessary for the Supreme Court to exercise judicial review; because the Court then consisted of six members, a simple majority and a two - thirds majority both required four votes. Currently, the constitutions of two states require a supermajority of supreme court justices in order to exercise judicial review: Nebraska (five out of seven justices) and North Dakota (four out of five justices). The procedure for judicial review of federal administrative regulation in the United States is set forth by the Administrative Procedure Act although the courts have ruled such as in Bivens v. Six Unknown Named Agents that a person may bring a case on the grounds of an implied cause of action when no statutory procedure exists.
the temple of concord in the roman forum
Temple of Concord - wikipedia The Temple of Concord (Latin: Aedes Concordiae) in the ancient city of Rome refers to a series of shrines or temples dedicated to the Roman goddess Concordia, and erected at the western end of the Roman Forum. The earliest may have been vowed by Marcus Furius Camillus in 367 BC, but history also records such a temple erected in the Vulcanal in 304, and another immediately west of the Vulcanal, on the spot the temple later occupied, commissioned in 217. The temple was rebuilt in 121 BC, and again by the future emperor Tiberius between 7 BC and AD 10. One tradition ascribes the first Temple of Concord to a vow made by Camillus in 367 BC, on the occasion of the Lex Licinia Sextia, the law passed by the tribunes Gaius Licinius Stolo and Lucius Sextius Lateranus, opening the consulship to the plebeians. The two had prevented the election of any magistrates for a period of several years, as part of the conflict of the orders. Nominated dictator to face an invasion of the Gauls, Camillus, encouraged by his fellow patrician Marcus Fabius Ambustus, Stolo 's father - in - law, determined to resolve the crisis by declaring his support for the law, and vowing a temple to Concordia, symbolizing reconciliation between the patricians and plebeians. Camillus ' vow is not mentioned by Livy, who instead describes the dedication of the Temple of Concord in the Vulcanal, a precinct sacred to Vulcan on the western end of the forum, by the aedile Gnaeus Flavius in 304 BC. Flavius ' actions were an affront to the senate, partly because he had undertaken the matter without first consulting them, and partly because of his low social standing: not only was Flavius a plebeian, but he was the son of a freedman, and had previously served as a scribe to Appius Claudius Caecus. The Pontifex Maximus, Rome 's chief priest, was compelled to instruct Flavius on the proper formulae for dedicating a temple. Cicero and Pliny report that Flavius was a scribe, rather than aedile, at the time of the dedication, and a law was passed immediately afterward forbidding anyone from dedicating a temple without the authorization of the senate or a majority of the plebeian tribunes. Yet a third Temple of Concord was begun in 217 BC, early in the Second Punic War, by the duumviri Marcus Pupius and Caeso Quinctius Flamininus, in fulfillment of a vow made by the praetor Lucius Manlius Vulso on the occasion of his deliverance from the Gauls in 218. The reason why Manlius vowed a temple to Concordia is not immediately apparent, but Livy alludes to a mutiny that had apparently occurred among the praetor 's men. The temple was completed and dedicated the following year by the duumviri Marcus and Gaius Atilius. The murder of Gaius Gracchus in 121 BC marked a low point in the relationship between the emerging Roman aristocracy and the popular party, and was immediately followed by the reconstruction of the Temple of Concord by Lucius Opimius at the senate 's behest, which was regarded as an utterly insincere attempt to clothe its actions in a symbolic act of reconciliation. From this period, the temple was frequently used as a meeting place for both the senate and the Arval Brethren, and in later times it came to house a number of works of art, many of which are described by Pliny.
A statue of Victoria placed on the roof of the temple was struck by lightning in 211 BC, and prodigies were reported in the Concordiae, the neighborhood of the temple, in 183 and 181. Little else is heard of the temple until 7 BC, when the future emperor Tiberius undertook another restoration, which lasted until AD 10, when the structure was rededicated on the 16th of January as the Aedes Concordiae Augustae, the Temple of Concordia of Augustus. The temple is occasionally mentioned in imperial times, and may have been restored again following a fire in AD 284. By the eighth century, the temple was reportedly in poor condition, and in danger of collapsing. The temple was razed circa 1450, and the stone turned into a lime kiln to recover the marble for building. Backed up against the Tabularium at the foot of the Capitoline Hill, the architecture had to accommodate the limitations of the site. The cella of the temple, for instance, is almost twice as wide (45m) as it is deep (24m), as is the pronaos. In the cella a row of Corinthian columns rose from a continuous plinth projecting from the wall, which divided the cella into bays, each containing a niche. The capitals of these columns had pairs of leaping rams in place of the corner volutes. Only the platform now remains, partially covered by a road up to the Capitol. The main temple in the Forum in Rome seems to have been a model for temples to the goddess elsewhere in the empire -- a reproduction of this temple was found at Mérida in Spain, during the excavations of the town 's forum in 2002.
who built the temple of jupiter optimus maximus
Temple of Jupiter Optimus Maximus - wikipedia The Temple of Jupiter Optimus Maximus, also known as the Temple of Jupiter Capitolinus (Latin: Aedes Iovis Optimi Maximi Capitolini; Italian: Tempio di Giove Ottimo Massimo; English: "Temple of Jupiter Best and Greatest on the Capitoline '') was the most important temple in Ancient Rome, located on the Capitoline Hill. It had a cathedral - like position in the official religion of Rome, and was surrounded by the Area Capitolina, a precinct where certain assemblies met, and numerous shrines, altars, statues, and victory trophies were displayed. The first building was the oldest large temple in Rome, and can be considered as essentially Etruscan architecture. It was traditionally dedicated in 509 BC, but in 83 BC it was destroyed by fire, and a replacement in Greek style completed in 69 BC (there were to be two more fires and new buildings). For the first temple Etruscan specialists were brought in for various aspects of the building, including making and painting the extensive terracotta elements of the upper parts, such as antefixes. But for the second building they were summoned from Greece, and the building was presumably essentially Greek in style, though like other Roman temples it retained many elements of Etruscan form. The two further buildings were evidently of contemporary Roman style, although of exceptional size. The first version is the largest Etruscan temple recorded, and much larger than other Roman temples for centuries after. However, its size remains heavily disputed by specialists; based on the account of an ancient visitor, it has been claimed to have been almost 60 m × 60 m (200 ft × 200 ft), not far short of the largest Greek temples. Whatever its size, its influence on other early Roman temples was significant and long - lasting. Reconstructions usually show very wide eaves, and a wide colonnade stretching down the sides, though not round the back wall as it would have done in a Greek temple. A crude image on a coin of 78 BC shows only four columns, and a very busy roofline. With two further fires, the third temple only lasted five years, to 80 AD, but the fourth survived until the fall of the empire. Remains of the last temple survived to be pillaged for spolia in the Middle Ages and Renaissance, but now only elements of the foundations and podium or base survive; as the subsequent temples apparently reused these, they may partly date to the first building. Much about the various buildings remains uncertain. Much of what is known of the first Temple of Jupiter is from later Roman tradition. Lucius Tarquinius Priscus vowed this temple while battling with the Sabines and, according to Dionysius of Halicarnassus, began the terracing necessary to support the foundations of the temple. Modern coring on the Capitoline has confirmed the extensive work needed just to create a level building site. According to Dionysius of Halicarnassus and Livy, the foundations and most of the superstructure of the temple were completed by Lucius Tarquinius Superbus, the last King of Rome. Livy also records that before the temple 's construction shrines to other gods occupied the site. When the augurs carried out the rites seeking permission to remove them, only Terminus and Juventas were believed to have refused. Their shrines were therefore incorporated into the new structure. Because he was the god of boundaries, Terminus 's refusal to be moved was interpreted as a favorable omen for the future of the Roman state.
A second portent was the appearance of the head of a man to workmen digging the foundations of the temple. This was said by the augurs (including augurs brought especially from Etruria) to mean that Rome was to be the head of a great empire. Traditionally the Temple was dedicated on September 13, the founding year of the Roman Republic, 509 BCE according to Livy. According to Dionysius, it was consecrated two years later in 507 BCE. It was sacred to the Capitoline Triad consisting of Jupiter and his companion deities, Juno and Minerva. The man to perform the dedication of the temple was chosen by lot. The duty fell to Marcus Horatius Pulvillus, one of the consuls in that year. Livy records that in 495 BCE the Latins, as a mark of gratitude to the Romans for the release of 6,000 Latin prisoners, delivered a crown of gold to the temple. The original temple may have measured almost 60 m × 60 m (200 ft × 200 ft), though this estimate is hotly disputed by some specialists. It was certainly considered the most important religious temple of the whole state of Rome. Each deity of the Triad had a separate cella, with Juno Regina on the left, Minerva on the right, and Jupiter Optimus Maximus in the middle. The first temple was decorated with many terra cotta sculptures. The most famous of these was of Jupiter driving a quadriga, a chariot drawn by four horses, which was on top of the roof as an acroterion. This sculpture, as well as the cult statue of Jupiter in the main cella, was said to have been the work of Etruscan artisan Vulca of Veii. An image of Summanus, a thunder god, was among the pedimental statues. The original temple decoration was discovered in 2014. The findings allowed the archaeologists to reconstruct for the first time the real appearance of the temple in the earliest phase. The wooden elements of the roof and lintels were lined with terracotta revetment plaques and other elements of exceptional size and richly decorated with painted reliefs, following the so - called Second Phase model (referring to the decorative systems of Etruscan and Latin temples), that had its first expression precisely with the Temple of Jupiter Optimus Maximus. The temple, which immediately rose to fame, established a new model for sacred architecture that was adopted in the terracotta decorations of many temples in Italy up to the 2nd century BC. The original elements were partially replaced with other elements in different style in the early 4th century BC and anew at the end of 3rd -- beginning 2nd century BC. The removed material was dumped into the layers forming the square in front of the temple, the so - called Area Capitolina, in the middle years the 2nd century BC. The plan and exact dimensions of the temple have been heavily debated. Five different plans of the temple have been published following recent excavations on the Capitoline Hill that revealed portions of the archaic foundations. According to Dionysius of Halicarnassus, the same plan and foundations were used for later rebuildings of the temple, but there is disagreement over what the dimensions he mentions referred to (the building itself or the podium). The first temple burned in 83 BCE, during the civil wars under the dictatorship of Sulla. Also lost in this fire were the Sibylline Books, which were said to have been written by classical sibyls, and stored in the temple (to be guarded and consulted by the quindecimviri (council of fifteen) on matters of state only on emergencies). 
During Lucius Cornelius Sulla 's sack of Athens in 86 BC, while looting the city, Sulla seized some of the gigantic incomplete columns from the Temple of Zeus and transported them back to Rome, where they were re-used in the Temple of Jupiter. Sulla hoped to live until the temple was rebuilt, but Quintus Lutatius Catulus (Capitolinus) had the honor of dedicating the new structure in 69 BCE. The new temple was built to the same plan on the same foundations, but with more expensive materials for the superstructure. Literary sources indicate that the temple was not entirely completed until the late 60s BCE. Brutus and the other assassins locked themselves inside it after murdering Caesar. The new temple of Quintus Lutatius Catulus was renovated and repaired by Augustus. The second building burnt down during the course of fighting on the hill on December 19, 69 CE, when an army loyal to Vespasian battled to enter the city in the Year of the Four Emperors. Domitian narrowly escaped with his life. The new emperor, Vespasian, rapidly rebuilt the temple on the same foundations but with a lavish superstructure. The third temple of Jupiter was dedicated in 75 CE. The third temple burned during the reign of Titus in the great fire of 80 CE. Domitian immediately began rebuilding the temple, again on the same foundations, but with the most lavish superstructure yet. According to Plutarch, Domitian used at least twelve thousand talents of gold for the gilding of the bronze roof tiles alone. Elaborate sculpture adorned the pediment. A Renaissance drawing of a damaged relief in the Louvre Museum shows a four - horse chariot (quadriga) beside a two - horse chariot (biga) to the right of the latter at the highest point of the pediment, the two statues serving as the central acroterion, and statues of the god Mars and goddess Venus surmounting the corners of the cornice, serving as acroteria. In the centre of the pediment the god Jupiter was flanked by Juno and Minerva, seated on thrones. Below was an eagle with wings spread out. A biga driven by the sun god and a biga driven by the moon were depicted either side of the three gods. The temple completed by Domitian is thought to have lasted more or less intact for over three hundred years, until all pagan temples were closed by emperor Theodosius I in 392. During the 5th century the temple was damaged by Stilicho (who according to Zosimus removed the gold that adorned the doors) and Gaiseric (Procopius states that the Vandals plundered the temple during the sack of Rome in 455, stripping away the roof shingles made of gold and bronze). In 571, Narses removed many of the statues and ornaments. The ruins were still well preserved in 1447 when the 15th - century humanist Poggio Bracciolini visited Rome. The remaining ruins were destroyed in the 16th century, when Giovanni Pietro Caffarelli built a palace (Palazzo Caffarelli) on the site reusing material from the temple. Today, portions of the temple podium and foundations can be seen behind the Palazzo dei Conservatori, in an exhibition area built in the Caffarelli Garden, and within the Musei Capitolini. A part of the front corner is also visible in via del Tempio di Giove. The second Medici lion was sculpted in the late 16th century by Flaminio Vacca from a capital from the Temple of Jupiter Optimus Maximus.
The Area Capitolina was the precinct on the southern part of the Capitoline that surrounded the Temple of Jupiter, enclosing it with irregular retaining walls following the hillside contours. The precinct was enlarged in 388 BCE, to about 3,000 square meters. The Clivus Capitolinus ended at the main entrance in the center of the southeast side, and the Porta Pandana seems to have been a secondary entrance; these gates were closed at night. The sacred geese of Juno, said to have sounded the alarm during the Gallic siege of Rome, were kept in the Area, which was guarded during the Imperial period by dogs kept by a temple attendant. Domitian hid in the dog handler 's living quarters when the forces of Vitellius overtook the Capitoline. Underground chambers called favissae held damaged building materials, old votive offerings, and dedicated objects that were not suitable for display. It was religiously prohibited to disturb these. The precinct held numerous shrines, altars, statues, and victory trophies. Some plebeian and tribal assemblies met there. In late antiquity, it was a market for luxury goods, and continued as such into the medieval period: in a letter from 468, Sidonius describes a shopper negotiating over the price of gems, silk, and fine fabrics.
who went home last week on masterchef junior
MasterChef Junior (U.S. Season 6) - wikipedia Season 6 of the American competitive reality television series MasterChef Junior premiered on Fox on March 2, 2018. The season is hosted by regular judges Gordon Ramsay, Christina Tosi, and returning judge Joe Bastianich. The winner was Beni Cwiakala, a 9 - year - old from Chicago, Illinois, with Avery Meadows from Kingwood, Texas and Quani Fields from Lawrenceville, Georgia being the runners - up. This marks the first time that three contestants have gone against each other in the finale.
who is one of the first german composers that we know about
List of German composers - wikipedia This is an alphabetical list of composers from Germany.
when did the us drop the bombs on hiroshima and nagasaki
Atomic bombings of Hiroshima and Nagasaki - wikipedia During the final stage of World War II, the United States detonated two nuclear weapons over the Japanese cities of Hiroshima and Nagasaki on August 6 and 9, 1945, respectively. The United States dropped the bombs after obtaining the consent of the United Kingdom, as required by the Quebec Agreement. The two bombings killed 129,000 -- 226,000 people, most of whom were civilians. They remain the only use of nuclear weapons in the history of warfare. In the final year of the war, the Allies prepared for what was anticipated to be a very costly invasion of the Japanese mainland. This undertaking was preceded by a conventional and firebombing campaign that destroyed 67 Japanese cities. The war in Europe had concluded when Germany signed its instrument of surrender on May 8, 1945. As the Allies turned their full attention to the Pacific War, the Japanese faced the same fate. The Allies called for the unconditional surrender of the Imperial Japanese armed forces in the Potsdam Declaration on July 26, 1945 -- the alternative being "prompt and utter destruction ''. The Japanese rejected the ultimatum and the war continued. By August 1945, the Allies ' Manhattan Project had produced two types of atomic bombs, and the 509th Composite Group of the United States Army Air Forces (USAAF) was equipped with the specialized Silverplate version of the Boeing B - 29 Superfortress that could deliver them from Tinian in the Mariana Islands. Orders for atomic bombs to be used on four Japanese cities were issued on July 25. On August 6, one of its B - 29s dropped a Little Boy uranium gun - type bomb on Hiroshima. Three days later, on August 9, a Fat Man plutonium implosion - type bomb was dropped by another B - 29 on Nagasaki. The bombs immediately devastated their targets. Over the next two to four months, the acute effects of the atomic bombings killed 90,000 -- 146,000 people in Hiroshima and 39,000 -- 80,000 people in Nagasaki; roughly half of the deaths in each city occurred on the first day. Large numbers of people continued to die from the effects of burns, radiation sickness, and other injuries, compounded by illness and malnutrition, for many months afterward. In both cities, most of the dead were civilians, although Hiroshima had a sizable military garrison. Japan announced its surrender to the Allies on August 15, six days after the bombing of Nagasaki and the Soviet Union 's declaration of war. On September 2, the Japanese government signed the instrument of surrender, effectively ending World War II. The ethical and legal justification for the bombings is still debated to this day. In 1945, the Pacific War between the Empire of Japan and the Allies entered its fourth year. Most Japanese military units fought fiercely, ensuring that the Allied victory would come at an enormous cost. The 1.25 million battle casualties incurred in total by the United States in World War II included both military personnel killed in action and wounded in action. Nearly one million of the casualties occurred during the last year of the war, from June 1944 to June 1945. In December 1944, American battle casualties hit an all - time monthly high of 88,000 as a result of the German Ardennes Offensive. America 's reserves of manpower were running out. Deferments for groups such as agricultural workers were tightened, and there was consideration of drafting women.
At the same time, the public was becoming war - weary, and demanding that long - serving servicemen be sent home. In the Pacific, the Allies returned to the Philippines, recaptured Burma, and invaded Borneo. Offensives were undertaken to reduce the Japanese forces remaining in Bougainville, New Guinea and the Philippines. In April 1945, American forces landed on Okinawa, where heavy fighting continued until June. Along the way, the ratio of Japanese to American casualties dropped from 5: 1 in the Philippines to 2: 1 on Okinawa. Although some Japanese soldiers were taken prisoner, most fought until they were killed or committed suicide. Nearly 99 % of the 21,000 defenders of Iwo Jima were killed. Of the 117,000 Okinawan and Japanese troops defending Okinawa in April -- June 1945, 94 % were killed; 7,401 Japanese soldiers surrendered, an unprecedented large number. As the Allies advanced towards Japan, conditions became steadily worse for the Japanese people. Japan 's merchant fleet declined from 5,250,000 gross tons in 1941 to 1,560,000 tons in March 1945, and 557,000 tons in August 1945. Lack of raw materials forced the Japanese war economy into a steep decline after the middle of 1944. The civilian economy, which had slowly deteriorated throughout the war, reached disastrous levels by the middle of 1945. The loss of shipping also affected the fishing fleet, and the 1945 catch was only 22 % of that in 1941. The 1945 rice harvest was the worst since 1909, and hunger and malnutrition became widespread. U.S. industrial production was overwhelmingly superior to Japan 's. By 1943, the U.S. produced almost 100,000 aircraft a year, compared to Japan 's production of 70,000 for the entire war. By the middle of 1944, the U.S. had almost a hundred aircraft carriers in the Pacific, far more than Japan 's twenty - five for the entire war. In February 1945, Prince Fumimaro Konoe advised Emperor Hirohito that defeat was inevitable, and urged him to abdicate. Even before the surrender of Nazi Germany on May 8, 1945, plans were underway for the largest operation of the Pacific War, Operation Downfall, the Allied invasion of Japan. The operation had two parts: Operation Olympic and Operation Coronet. Set to begin in October 1945, Olympic involved a series of landings by the U.S. Sixth Army intended to capture the southern third of the southernmost main Japanese island, Kyūshū. Operation Olympic was to be followed in March 1946 by Operation Coronet, the capture of the Kantō Plain, near Tokyo on the main Japanese island of Honshū by the U.S. First, Eighth and Tenth Armies, as well as a Commonwealth Corps made up of Australian, British and Canadian divisions. The target date was chosen to allow for Olympic to complete its objectives, for troops to be redeployed from Europe, and the Japanese winter to pass. Japan 's geography made this invasion plan obvious to the Japanese; they were able to predict the Allied invasion plans accurately and thus adjust their defensive plan, Operation Ketsugō, accordingly. The Japanese planned an all - out defense of Kyūshū, with little left in reserve for any subsequent defense operations. Four veteran divisions were withdrawn from the Kwantung Army in Manchuria in March 1945 to strengthen the forces in Japan, and 45 new divisions were activated between February and May 1945. Most were immobile formations for coastal defense, but 16 were high quality mobile divisions. 
In all, there were 2.3 million Japanese Army troops prepared to defend the home islands, backed by a civilian militia of 28 million men and women. Casualty predictions varied widely, but were extremely high. The Vice Chief of the Imperial Japanese Navy General Staff, Vice Admiral Takijirō Ōnishi, predicted up to 20 million Japanese deaths. A study from June 15, 1945, by the Joint War Plans Committee, who provided planning information to the Joint Chiefs of Staff, estimated that Olympic would result in between 130,000 and 220,000 U.S. casualties, of which U.S. dead would be in the range from 25,000 to 46,000. Delivered on June 15, 1945, after insight gained from the Battle of Okinawa, the study noted Japan 's inadequate defenses due to the very effective sea blockade and the American firebombing campaign. The Chief of Staff of the United States Army, General of the Army George Marshall, and the Army Commander in Chief in the Pacific, General of the Army Douglas MacArthur, signed documents agreeing with the Joint War Plans Committee estimate. The Americans were alarmed by the Japanese buildup, which was accurately tracked through Ultra intelligence. Secretary of War Henry L. Stimson was sufficiently concerned about high American estimates of probable casualties to commission his own study by Quincy Wright and William Shockley. Wright and Shockley spoke with Colonels James McCormack and Dean Rusk, and examined casualty forecasts by Michael E. DeBakey and Gilbert Beebe. Wright and Shockley estimated the invading Allies would suffer between 1.7 and 4 million casualties in such a scenario, of whom between 400,000 and 800,000 would be dead, while Japanese fatalities would have been around 5 to 10 million. Marshall began contemplating the use of a weapon that was "readily available and which assuredly can decrease the cost in American lives '': poison gas. Quantities of phosgene, mustard gas, tear gas and cyanogen chloride were moved to Luzon from stockpiles in Australia and New Guinea in preparation for Operation Olympic, and MacArthur ensured that Chemical Warfare Service units were trained in their use. Consideration was also given to using biological weapons against Japan. While the United States had developed plans for an air campaign against Japan prior to the Pacific War, the capture of Allied bases in the western Pacific in the first weeks of the conflict meant that this offensive did not begin until mid-1944 when the long - ranged Boeing B - 29 Superfortress became ready for use in combat. Operation Matterhorn involved India - based B - 29s staging through bases around Chengdu in China to make a series of raids on strategic targets in Japan. This effort failed to achieve the strategic objectives that its planners had intended, largely because of logistical problems, the bomber 's mechanical difficulties, the vulnerability of Chinese staging bases, and the extreme range required to reach key Japanese cities. Brigadier General Haywood S. Hansell determined that Guam, Tinian, and Saipan in the Mariana Islands would better serve as B - 29 bases, but they were in Japanese hands. Strategies were shifted to accommodate the air war, and the islands were captured between June and August 1944. Air bases were developed, and B - 29 operations commenced from the Marianas in October 1944. These bases were easily resupplied by cargo ships. The XXI Bomber Command began missions against Japan on November 18, 1944. 
The early attempts to bomb Japan from the Marianas proved just as ineffective as the China - based B - 29s had been. Hansell continued the practice of conducting so - called high - altitude precision bombing, aimed at key industries and transportation networks, even after these tactics had not produced acceptable results. These efforts proved unsuccessful due to logistical difficulties with the remote location, technical problems with the new and advanced aircraft, unfavorable weather conditions, and enemy action. Hansell 's successor, Major General Curtis LeMay, assumed command in January 1945 and initially continued to use the same precision bombing tactics, with equally unsatisfactory results. The attacks initially targeted key industrial facilities but much of the Japanese manufacturing process was carried out in small workshops and private homes. Under pressure from United States Army Air Forces (USAAF) headquarters in Washington, LeMay changed tactics and decided that low - level incendiary raids against Japanese cities were the only way to destroy their production capabilities, shifting from precision bombing to area bombardment with incendiaries. Like most strategic bombing during World War II, the aim of the air offensive against Japan was to destroy the enemy 's war industries, kill or disable civilian employees of these industries, and undermine civilian morale. Over the next six months, the XXI Bomber Command under LeMay firebombed 67 Japanese cities. The firebombing of Tokyo, codenamed Operation Meetinghouse, on March 9 -- 10 killed an estimated 100,000 people and destroyed 16 square miles (41 km) of the city and 267,000 buildings in a single night. It was the deadliest bombing raid of the war, at a cost of 20 B - 29s shot down by flak and fighters. By May, 75 % of bombs dropped were incendiaries designed to burn down Japan 's "paper cities ''. By mid-June, Japan 's six largest cities had been devastated. The end of the fighting on Okinawa that month provided airfields even closer to the Japanese mainland, allowing the bombing campaign to be further escalated. Aircraft flying from Allied aircraft carriers and the Ryukyu Islands also regularly struck targets in Japan during 1945 in preparation for Operation Downfall. Firebombing switched to smaller cities, with populations ranging from 60,000 to 350,000. According to Yuki Tanaka, the U.S. fire - bombed over a hundred Japanese towns and cities. These raids were devastating. The Japanese military was unable to stop the Allied attacks and the country 's civil defense preparations proved inadequate. Japanese fighters and antiaircraft guns had difficulty engaging bombers flying at high altitude. From April 1945, the Japanese interceptors also had to face American fighter escorts based on Iwo Jima and Okinawa. That month, the Imperial Japanese Army Air Service and Imperial Japanese Navy Air Service stopped attempting to intercept the air raids in order to preserve fighter aircraft to counter the expected invasion. By mid-1945 the Japanese only occasionally scrambled aircraft to intercept individual B - 29s conducting reconnaissance sorties over the country, in order to conserve supplies of fuel. In July 1945, the Japanese had 1,156,000 US barrels (137,800,000 l) of avgas stockpiled for the invasion of Japan. About 604,000 US barrels (72,000,000 l) had been consumed in the home islands area in April, May and June 1945. 
While the Japanese military decided to resume attacks on Allied bombers from late June, by this time there were too few operational fighters available for this change of tactics to hinder the Allied air raids. The discovery of nuclear fission by German chemists Otto Hahn and Fritz Strassmann in 1938, and its theoretical explanation by Lise Meitner and Otto Frisch, made the development of an atomic bomb a theoretical possibility. Fears that a German atomic bomb project would develop atomic weapons first, especially among scientists who were refugees from Nazi Germany and other fascist countries, were expressed in the Einstein - Szilard letter. This prompted preliminary research in the United States in late 1939. Progress was slow until the arrival of the British MAUD Committee report in late 1941, which indicated that only 5 to 10 kilograms of isotopically enriched uranium - 235 were needed for a bomb instead of tons of natural uranium and a neutron moderator like heavy water. The 1943 Quebec Agreement merged the nuclear weapons projects of the United Kingdom and Canada, Tube Alloys and the Montreal Laboratory, with the Manhattan Project, under the direction of Major General Leslie R. Groves, Jr., of the U.S. Army Corps of Engineers. Groves appointed J. Robert Oppenheimer to organize and head the project 's Los Alamos Laboratory in New Mexico, where bomb design work was carried out. Two types of bombs were eventually developed, both named by Robert Serber. Little Boy was a gun - type fission weapon that used uranium - 235, a rare isotope of uranium separated at the Clinton Engineer Works at Oak Ridge, Tennessee. The other, known as a Fat Man device, was a more powerful and efficient, but more complicated, implosion - type nuclear weapon that used plutonium created in nuclear reactors at Hanford, Washington. There was a Japanese nuclear weapon program, but it lacked the human, mineral and financial resources of the Manhattan Project, and never made much progress towards developing an atomic bomb. The 509th Composite Group was constituted on December 9, 1944, and activated on December 17, 1944, at Wendover Army Air Field, Utah, commanded by Colonel Paul Tibbets. Tibbets was assigned to organize and command a combat group to develop the means of delivering an atomic weapon against targets in Germany and Japan. Because the flying squadrons of the group consisted of both bomber and transport aircraft, the group was designated as a "composite '' rather than a "bombardment '' unit. Working with the Manhattan Project at Los Alamos, Tibbets selected Wendover for his training base over Great Bend, Kansas, and Mountain Home, Idaho, because of its remoteness. Each bombardier completed at least 50 practice drops of inert or conventional explosive pumpkin bombs and Tibbets declared his group combat - ready. The 509th Composite Group had an authorized strength of 225 officers and 1,542 enlisted men, almost all of whom eventually deployed to Tinian. In addition to its authorized strength, the 509th had attached to it on Tinian 51 civilian and military personnel from Project Alberta, known as the 1st Technical Detachment. The 509th Composite Group 's 393d Bombardment Squadron was equipped with 15 Silverplate B - 29s. These aircraft were specially adapted to carry nuclear weapons, and were equipped with fuel - injected engines, Curtiss Electric reversible - pitch propellers, pneumatic actuators for rapid opening and closing of bomb bay doors and other improvements. 
The ground support echelon of the 509th Composite Group moved by rail on April 26, 1945, to its port of embarkation at Seattle, Washington. On May 6 the support elements sailed on the SS Cape Victory for the Marianas, while group materiel was shipped on the SS Emile Berliner. The Cape Victory made brief port calls at Honolulu and Eniwetok, but the passengers were not permitted to leave the dock area. An advance party of the air echelon, consisting of 29 officers and 61 enlisted men, flew by C - 54 to North Field on Tinian between May 15 and May 22. There were also two representatives from Washington, D.C., Brigadier General Thomas Farrell, the deputy commander of the Manhattan Project, and Rear Admiral William R. Purnell of the Military Policy Committee, who were on hand to decide higher policy matters on the spot. Along with Captain William S. Parsons, the commander of Project Alberta, they became known as the "Tinian Joint Chiefs ''. In April 1945, Marshall asked Groves to nominate specific targets for bombing for final approval by himself and Stimson. Groves formed a Target Committee, chaired by himself, that included Farrell, Major John A. Derry, Colonel William P. Fisher, Joyce C. Stearns and David M. Dennison from the USAAF; and scientists John von Neumann, Robert R. Wilson and William Penney from the Manhattan Project. The Target Committee met in Washington on April 27; at Los Alamos on May 10, where it was able to talk to the scientists and technicians there; and finally in Washington on May 28, where it was briefed by Tibbets and Commander Frederick Ashworth from Project Alberta, and the Manhattan Project 's scientific advisor, Richard C. Tolman. The Target Committee nominated five targets: Kokura, the site of one of Japan 's largest munitions plants; Hiroshima, an embarkation port and industrial center that was the site of a major military headquarters; Yokohama, an urban center for aircraft manufacture, machine tools, docks, electrical equipment and oil refineries; Niigata, a port with industrial facilities including steel and aluminum plants and an oil refinery; and Kyoto, a major industrial center. The target selection was subject to the following criteria: the target was larger than 3 miles (4.8 km) in diameter and was an important target in a large urban area; the blast would create effective damage; and the target was unlikely to be attacked by August 1945. These cities were largely untouched during the nightly bombing raids, and the Army Air Forces agreed to leave them off the target list so that an accurate assessment of the damage caused by the atomic bombs could be made. Hiroshima was described as "an important army depot and port of embarkation in the middle of an urban industrial area. It is a good radar target and it is such a size that a large part of the city could be extensively damaged. There are adjacent hills which are likely to produce a focusing effect which would considerably increase the blast damage. Due to rivers it is not a good incendiary target. '' The Target Committee stated that "It was agreed that psychological factors in the target selection were of great importance. Two aspects of this are (1) obtaining the greatest psychological effect against Japan and (2) making the initial use sufficiently spectacular for the importance of the weapon to be internationally recognized when publicity on it is released... Kyoto has the advantage of the people being more highly intelligent and hence better able to appreciate the significance of the weapon. Hiroshima has the advantage of being such a size and with possible focussing from nearby mountains that a large fraction of the city may be destroyed.
The Emperor 's palace in Tokyo has a greater fame than any other target but is of least strategic value. '' Edwin O. Reischauer, a Japan expert for the U.S. Army Intelligence Service, was incorrectly said to have prevented the bombing of Kyoto. In his autobiography, Reischauer specifically refuted this claim: ... the only person deserving credit for saving Kyoto from destruction is Henry L. Stimson, the Secretary of War at the time, who had known and admired Kyoto ever since his honeymoon there several decades earlier. On May 30, Stimson asked Groves to remove Kyoto from the target list due to its historical, religious and cultural significance, but Groves pointed to its military and industrial significance. Stimson then approached President Harry S. Truman about the matter. Truman agreed with Stimson, and Kyoto was temporarily removed from the target list. Groves attempted to restore Kyoto to the target list in July, but Stimson remained adamant. On July 25, Nagasaki was put on the target list in place of Kyoto. It was a major military port, one of Japan 's largest shipbuilding and repair centers, and an important producer of naval ordnance. In early May 1945, the Interim Committee was created by Stimson at the urging of leaders of the Manhattan Project and with the approval of Truman to advise on matters pertaining to nuclear energy. During the meetings on May 31 and June 1, scientist Ernest Lawrence had suggested giving the Japanese a non-combat demonstration. Arthur Compton later recalled that: It was evident that everyone would suspect trickery. If a bomb were exploded in Japan with previous notice, the Japanese air power was still adequate to give serious interference. An atomic bomb was an intricate device, still in the developmental stage. Its operation would be far from routine. If during the final adjustments of the bomb the Japanese defenders should attack, a faulty move might easily result in some kind of failure. Such an end to an advertised demonstration of power would be much worse than if the attempt had not been made. It was now evident that when the time came for the bombs to be used we should have only one of them available, followed afterwards by others at all - too - long intervals. We could not afford the chance that one of them might be a dud. If the test were made on some neutral territory, it was hard to believe that Japan 's determined and fanatical military men would be impressed. If such an open test were made first and failed to bring surrender, the chance would be gone to give the shock of surprise that proved so effective. On the contrary, it would make the Japanese ready to interfere with an atomic attack if they could. Though the possibility of a demonstration that would not destroy human lives was attractive, no one could suggest a way in which it could be made so convincing that it would be likely to stop the war. The possibility of a demonstration was raised again in the Franck Report issued by physicist James Franck on June 11 and the Scientific Advisory Panel rejected his report on June 16, saying that "we can propose no technical demonstration likely to bring an end to the war; we see no acceptable alternative to direct military use. '' Franck then took the report to Washington, D.C., where the Interim Committee met on June 21 to re-examine its earlier conclusions; but it reaffirmed that there was no alternative to the use of the bomb on a military target. Like Compton, many U.S. 
officials and scientists argued that a demonstration would sacrifice the shock value of the atomic attack, and that the Japanese could deny the atomic bomb was lethal, making the mission less likely to produce surrender. Allied prisoners of war might be moved to the demonstration site and be killed by the bomb. They also worried that the bomb might be a dud, since the Trinity test was of a stationary device, not an air - dropped bomb. In addition, although more bombs were in production, only two would be available at the start of August, and they had cost billions of dollars, so using one for a demonstration would be expensive. For several months, the U.S. had warned civilians of potential air raids by dropping more than 63 million leaflets across Japan. Many Japanese cities suffered terrible damage from aerial bombings; some were as much as 97 % destroyed. LeMay thought that leaflets would increase the psychological impact of bombing, and reduce the international stigma of area - bombing cities. Even with the warnings, Japanese opposition to the war remained ineffective. In general, the Japanese regarded the leaflet messages as truthful, with many Japanese choosing to leave major cities. The leaflets caused such concern that the government ordered the arrest of anyone caught in possession of a leaflet. Leaflet texts were prepared by recent Japanese prisoners of war because they were thought to be the best choice "to appeal to their compatriots ''. In preparation for dropping an atomic bomb on Hiroshima, the Oppenheimer - led Scientific Panel of the Interim Committee decided against a demonstration bomb and against a special leaflet warning. Those decisions were implemented because of the uncertainty of a successful detonation, and also because of the wish to maximize shock in the leadership. No warning was given to Hiroshima that a new and much more destructive bomb was going to be dropped. Various sources gave conflicting information about when the last leaflets were dropped on Hiroshima prior to the atomic bomb. Robert Jay Lifton wrote that it was July 27, and Theodore H. McNelly wrote that it was July 30. The USAAF history noted that eleven cities were targeted with leaflets on July 27, but Hiroshima was not one of them, and there were no leaflet sorties on July 30. Leaflet sorties were undertaken on August 1 and August 4. Hiroshima may have been leafleted in late July or early August, as survivor accounts talk about a delivery of leaflets a few days before the atomic bomb was dropped. Three versions were printed of a leaflet listing 11 or 12 cities targeted for firebombing; a total of 33 cities were listed. The text of this leaflet read, in Japanese, "... we can not promise that only these cities will be among those attacked... '' Hiroshima was not listed. Under the Quebec Agreement with the United Kingdom, nuclear weapons would not be used against another country without mutual consent. Stimson therefore had to obtain British permission. A meeting of the Combined Policy Committee was held at the Pentagon on July 4, 1945. Field Marshal Sir Henry Maitland Wilson announced that the British government concurred with the use of nuclear weapons against Japan, which would be officially recorded as a decision of the Combined Policy Committee. As the release of information to third parties was also controlled by the Quebec Agreement, discussion then turned to what scientific details would be revealed in the press announcement of the bombing.
The meeting also considered what Truman could reveal to Joseph Stalin, the leader of the Soviet Union, at the upcoming Potsdam Conference, as this also required British concurrence. Orders for the attack were issued to General Carl Spaatz on July 25 under the signature of General Thomas T. Handy, the acting Chief of Staff, since Marshall was at the Potsdam Conference with Truman. The order directed the 509th Composite Group to deliver its first special bomb, as soon as weather permitted visual bombing after about August 3, on one of the targets Hiroshima, Kokura, Niigata or Nagasaki, and to deliver additional bombs against these targets as they were made ready. That day, Truman noted in his diary that: This weapon is to be used against Japan between now and August 10th. I have told the Sec. of War, Mr. Stimson, to use it so that military objectives and soldiers and sailors are the target and not women and children. Even if the Japs are savages, ruthless, merciless and fanatic, we as the leader of the world for the common welfare can not drop that terrible bomb on the old capital (Kyoto) or the new (Tokyo). He and I are in accord. The target will be a purely military one. The July 16 success of the Trinity Test in the New Mexico desert had exceeded expectations. On July 26, Allied leaders issued the Potsdam Declaration, which outlined the terms of surrender for Japan. The declaration was presented as an ultimatum and stated that without a surrender, the Allies would attack Japan, resulting in "the inevitable and complete destruction of the Japanese armed forces and just as inevitably the utter devastation of the Japanese homeland ''. The atomic bomb was not mentioned in the communiqué. On July 28, Japanese papers reported that the declaration had been rejected by the Japanese government. That afternoon, Prime Minister Suzuki Kantarō declared at a press conference that the Potsdam Declaration was no more than a rehash (yakinaoshi) of the Cairo Declaration and that the government intended to ignore it (mokusatsu, "kill by silence ''). The statement was taken by both Japanese and foreign papers as a clear rejection of the declaration. Emperor Hirohito, who was waiting for a Soviet reply to non-committal Japanese peace feelers, made no move to change the government position. Japan 's willingness to surrender remained conditional on the preservation of the kokutai (Imperial institution and national polity), assumption by the Imperial Headquarters of responsibility for disarmament and demobilization, no occupation of the Japanese Home Islands, Korea or Formosa, and delegation of the punishment of war criminals to the Japanese government. At Potsdam, Truman agreed to a request from Winston Churchill that Britain be represented when the atomic bomb was dropped. William Penney and Group Captain Leonard Cheshire were sent to Tinian, but found that LeMay would not let them accompany the mission. All they could do was send a strongly worded signal to Wilson. The Little Boy bomb, except for the uranium payload, was ready at the beginning of May 1945. There were two uranium - 235 components, a hollow cylindrical projectile and a cylindrical target insert. The projectile was completed on June 15, and the target insert on July 24. The projectile and eight bomb pre-assemblies (partly assembled bombs without the powder charge and fissile components) left Hunters Point Naval Shipyard, California, on July 16 aboard the cruiser USS Indianapolis, and arrived on Tinian on July 26. The target insert followed by air on July 30, accompanied by Commander Francis Birch from Project Alberta.
Responding to concerns expressed by the 509th Composite Group about the possibility of a B - 29 crashing on takeoff, Birch had modified the Little Boy design to incorporate a removable breech plug that would permit the bomb to be armed in flight. The first plutonium core, along with its polonium - beryllium urchin initiator, was transported in the custody of Project Alberta courier Raemer Schreiber in a magnesium field carrying case designed for the purpose by Philip Morrison. Magnesium was chosen because it does not act as a tamper. The core departed from Kirtland Army Air Field on a C - 54 transport aircraft of the 509th Composite Group 's 320th Troop Carrier Squadron on July 26, and arrived at North Field July 28. Three Fat Man high - explosive pre-assemblies, designated F31, F32, and F33, were picked up at Kirtland on July 28 by three B - 29s, two from the 393d Bombardment Squadron plus one from the 216th Army Air Force Base Unit, and transported to North Field, arriving on August 2. At the time of its bombing, Hiroshima was a city of both industrial and military significance. A number of military units were located nearby, the most important of which was the headquarters of Field Marshal Shunroku Hata 's Second General Army, which commanded the defense of all of southern Japan, and was located in Hiroshima Castle. Hata 's command consisted of some 400,000 men, most of whom were on Kyushu where an Allied invasion was correctly anticipated. Also present in Hiroshima were the headquarters of the 59th Army, the 5th Division and the 224th Division, a recently formed mobile unit. The city was defended by five batteries of 7 - cm and 8 - cm (2.8 and 3.1 inch) anti-aircraft guns of the 3rd Anti-Aircraft Division, including units from the 121st and 122nd Anti-Aircraft Regiments and the 22nd and 45th Separate Anti-Aircraft Battalions. In total, an estimated 40,000 Japanese military personnel were stationed in the city. Hiroshima was a supply and logistics base for the Japanese military. The city was a communications center, a key port for shipping, and an assembly area for troops. It was a beehive of war industry, manufacturing parts for planes and boats, for bombs, rifles, and handguns. The center of the city contained several reinforced concrete buildings and lighter structures. Outside the center, the area was congested by a dense collection of small timber workshops set among Japanese houses. A few larger industrial plants lay near the outskirts of the city. The houses were constructed of timber with tile roofs, and many of the industrial buildings were also built around timber frames. The city as a whole was highly susceptible to fire damage. It was the second largest city in Japan after Kyoto that was still undamaged by air raids, primarily because it lacked the aircraft manufacturing industry that was the XXI Bomber Command 's priority target. On July 3, the Joint Chiefs of Staff placed it off limits to bombers, along with Kokura, Niigata and Kyoto. The population of Hiroshima had reached a peak of over 381,000 earlier in the war but prior to the atomic bombing, the population had steadily decreased because of a systematic evacuation ordered by the Japanese government. At the time of the attack, the population was approximately 340,000 -- 350,000. Residents wondered why Hiroshima had been spared destruction by firebombing. Some speculated that the city was to be saved for U.S. occupation headquarters, others thought perhaps their relatives in Hawaii and California had petitioned the U.S. 
government to avoid bombing Hiroshima. More realistic city officials had ordered buildings torn down to create long, straight firebreaks. These continued to be expanded and extended up to the morning of August 6, 1945. Hiroshima was the primary target of the first nuclear bombing mission on August 6, with Kokura and Nagasaki as alternative targets. The 393d Bombardment Squadron B - 29 Enola Gay, named after Tibbets ' mother and piloted by Tibbets, took off from North Field, Tinian, about six hours ' flight time from Japan. Enola Gay was accompanied by two other B - 29s: The Great Artiste, commanded by Major Charles Sweeney, which carried instrumentation, and a then - nameless aircraft later called Necessary Evil, commanded by Captain George Marquardt, which served as the photography aircraft. After leaving Tinian the aircraft made their way separately to Iwo Jima, where Enola Gay rendezvoused with Sweeney and Marquardt at 05: 55 at 9,200 feet (2,800 m), and set course for Japan. The aircraft arrived over the target in clear visibility at 31,060 feet (9,470 m). Parsons, who was in command of the mission, armed the bomb in flight to minimize the risks during takeoff. He had witnessed four B - 29s crash and burn at takeoff, and feared that a nuclear explosion would occur if a B - 29 crashed with an armed Little Boy on board. His assistant, Second Lieutenant Morris R. Jeppson, removed the safety devices 30 minutes before reaching the target area. During the night of August 5 -- 6, Japanese early warning radar detected the approach of numerous American aircraft headed for the southern part of Japan. Radar detected 65 bombers headed for Saga, 102 bound for Maebashi, 261 en route to Nishinomiya, 111 headed for Ube and 66 bound for Imabari. An alert was given and radio broadcasting stopped in many cities, among them Hiroshima. The all - clear was sounded in Hiroshima at 00: 05. About an hour before the bombing, the air raid alert was sounded again, as Straight Flush flew over the city. It broadcast a short message which was picked up by Enola Gay. It read: "Cloud cover less than 3 / 10th at all altitudes. Advice: bomb primary. '' The all - clear was sounded over Hiroshima again at 07: 09. At 08: 09, Tibbets started his bomb run and handed control over to his bombardier, Major Thomas Ferebee. The release at 08: 15 (Hiroshima time) went as planned, and the Little Boy containing about 64 kg (141 lb) of uranium - 235 took 44.4 seconds to fall from the aircraft flying at about 31,000 feet (9,400 m) to a detonation height of about 1,900 feet (580 m) above the city. Enola Gay traveled 11.5 mi (18.5 km) before it felt the shock waves from the blast. Due to crosswind, the bomb missed the aiming point, the Aioi Bridge, by approximately 800 ft (240 m) and detonated directly over Shima Surgical Clinic. It released the equivalent energy of 16 ± 2 kilotons of TNT (67 ± 8.4 TJ). The weapon was considered very inefficient, with only 1.7 % of its material fissioning. The radius of total destruction was about 1 mile (1.6 km), with resulting fires across 4.4 square miles (11 km²). Enola Gay stayed over the target area for two minutes and was ten miles away when the bomb detonated. Only Tibbets, Parsons, and Ferebee knew of the nature of the weapon; the others on the bomber were only told to expect a blinding flash and were given black goggles. "It was hard to believe what we saw '', Tibbets told reporters, while Parsons said "the whole thing was tremendous and awe - inspiring... the men aboard with me gasped ' My God ' ''.
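The TNT - equivalent energies quoted here and for the Nagasaki bomb later in the article can be checked against the standard convention that one kiloton of TNT corresponds to 4.184 TJ; that conversion factor is the only assumption in the following quick arithmetic sketch.

\[ 16\ \mathrm{kt} \times 4.184\ \mathrm{TJ/kt} \approx 66.9\ \mathrm{TJ} \approx 67\ \mathrm{TJ}, \qquad 2\ \mathrm{kt} \times 4.184\ \mathrm{TJ/kt} \approx 8.4\ \mathrm{TJ}, \qquad 21\ \mathrm{kt} \times 4.184\ \mathrm{TJ/kt} \approx 87.9\ \mathrm{TJ} \]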
He and Tibbets compared the shockwave to "a close burst of ack - ack fire ''. People on the ground reported a pika (ピカ) -- a brilliant flash of light -- followed by a don (ドン) -- a loud booming sound. Some 70,000 -- 80,000 people, or around 30 % of the population of Hiroshima, were killed by the blast and resultant firestorm, and another 70,000 were injured. Perhaps as many as 20,000 Japanese military personnel were killed. U.S. surveys estimated that 4.7 square miles (12 km²) of the city were destroyed. Japanese officials determined that 69 % of Hiroshima 's buildings were destroyed and another 6 -- 7 % damaged. Some of the reinforced concrete buildings in Hiroshima had been very strongly constructed because of the earthquake danger in Japan, and their framework did not collapse even though they were fairly close to the blast center. Since the bomb detonated in the air, the blast was directed more downward than sideways, which was largely responsible for the survival of the Prefectural Industrial Promotional Hall, now commonly known as the Genbaku (A-bomb) dome. This building was designed and built by the Czech architect Jan Letzel, and was only 150 m (490 ft) from ground zero. The ruin was named Hiroshima Peace Memorial and was made a UNESCO World Heritage Site in 1996 over the objections of the United States and China, which expressed reservations on the grounds that other Asian nations were the ones who suffered the greatest loss of life and property, and a focus on Japan lacked historical perspective. The bombing started fires that spread rapidly through timber and paper homes. As in other Japanese cities, the firebreaks proved ineffective. The intense fires gutted everything within a radius of 2 kilometers (1.2 mi). The air raid warning had been cleared at 07: 31, and many people were outside, going about their activities. Eizō Nomura was the closest known survivor, being in the basement of a reinforced concrete building (it remained as the Rest House after the war) only 170 meters (560 ft) from ground zero (the hypocenter) at the time of the attack. He died in 1982, aged 84. Akiko Takakura was among the closest survivors to the hypocenter of the blast. She was in the solidly built Bank of Hiroshima only 300 meters (980 ft) from ground zero at the time of the attack. Over 90 % of the doctors and 93 % of the nurses in Hiroshima were killed or injured -- most had been in the downtown area which received the greatest damage. The hospitals were destroyed or heavily damaged. Only one doctor, Terufumi Sasaki, remained on duty at the Red Cross Hospital. Nonetheless, by early afternoon, the police and volunteers had established evacuation centres at hospitals, schools and tram stations, and a morgue was established in the Asano library. Most elements of the Japanese Second General Army headquarters were undergoing physical training on the grounds of Hiroshima Castle, barely 900 yards (820 m) from the hypocenter. The attack killed 3,243 troops on the parade ground. The communications room of Chugoku Military District Headquarters that was responsible for issuing and lifting air raid warnings was in a semi-basement in the castle. Yoshie Oka, a Hijiyama Girls High School student who had been mobilized to serve as a communications officer, had just sent a message that the alarm had been issued for Hiroshima and neighboring Yamaguchi when the bomb exploded.
She used a special phone to inform Fukuyama Headquarters (some 100 kilometers (62 mi) away) that "Hiroshima has been attacked by a new type of bomb. The city is in a state of near - total destruction. '' Since Mayor Senkichi Awaya had been killed while eating breakfast with his son and granddaughter at the mayoral residence, Field Marshal Hata, who was only slightly wounded, took over the administration of the city, and coordinated relief efforts. Many of his staff had been killed or fatally wounded, including a Korean prince of the Joseon Dynasty, Yi Wu, who was serving as a lieutenant colonel in the Japanese Army. Hata 's senior surviving staff officer was the wounded Colonel Kumao Imoto, who acted as his chief of staff. Soldiers from the undamaged Hiroshima Ujina Harbor used suicide boats, intended to repel the American invasion, to collect the wounded and take them down the rivers to the military hospital at Ujina. Trucks and trains brought in relief supplies and evacuated survivors from the city. Twelve American airmen were imprisoned at the Chugoku Military Police Headquarters, about 1,300 feet (400 m) from the hypocenter of the blast. Most died instantly, although two were reported to have been executed by their captors, and two prisoners badly injured by the bombing were left next to the Aioi Bridge by the Kempei Tai, where they were stoned to death. Eight U.S. prisoners of war killed as part of the medical experiments program at Kyushu University were falsely reported by Japanese authorities as having been killed in the atomic blast as part of an attempted cover up. The Tokyo control operator of the Japan Broadcasting Corporation noticed that the Hiroshima station had gone off the air. He tried to re-establish his program by using another telephone line, but it too had failed. About 20 minutes later the Tokyo railroad telegraph center realized that the main line telegraph had stopped working just north of Hiroshima. From some small railway stops within 16 km (10 mi) of the city came unofficial and confused reports of a terrible explosion in Hiroshima. All these reports were transmitted to the headquarters of the Imperial Japanese Army General Staff. Military bases repeatedly tried to call the Army Control Station in Hiroshima. The complete silence from that city puzzled the General Staff; they knew that no large enemy raid had occurred and that no sizable store of explosives was in Hiroshima at that time. A young officer was instructed to fly immediately to Hiroshima, to land, survey the damage, and return to Tokyo with reliable information for the staff. It was felt that nothing serious had taken place and that the explosion was just a rumor. The staff officer went to the airport and took off for the southwest. After flying for about three hours, while still nearly 160 km (100 mi) from Hiroshima, he and his pilot saw a great cloud of smoke from the bomb. After circling the city in order to survey the damage they landed south of the city, where the staff officer, after reporting to Tokyo, began to organize relief measures. Tokyo 's first indication that the city had been destroyed by a new type of bomb came from President Truman 's announcement of the strike, sixteen hours later. After the Hiroshima bombing, Truman issued a statement announcing the use of the new weapon. 
He stated, "We may be grateful to Providence '' that the German atomic bomb project had failed, and that the United States and its allies had "spent two billion dollars on the greatest scientific gamble in history -- and won ''. Truman then warned Japan: "If they do not now accept our terms, they may expect a rain of ruin from the air, the like of which has never been seen on this earth. Behind this air attack will follow sea and land forces in such numbers and power as they have not yet seen and with the fighting skill of which they are already well aware. '' This was a widely broadcast speech picked up by Japanese news agencies. The 50,000 - watt standard wave station on Saipan, the OWI radio station, broadcast a similar message to Japan every 15 minutes about Hiroshima, stating that more Japanese cities would face a similar fate in the absence of immediate acceptance of the terms of the Potsdam Declaration and emphatically urged civilians to evacuate major cities. Radio Japan, which continued to extoll victory for Japan by never surrendering, had informed the Japanese of the destruction of Hiroshima by a single bomb. Prime Minister Suzuki felt compelled to meet the Japanese press, to whom he reiterated his government 's commitment to ignore the Allies ' demands and fight on. Soviet Foreign Minister Vyacheslav Molotov informed Tokyo of the Soviet Union 's unilateral abrogation of the Soviet -- Japanese Neutrality Pact on August 5. At two minutes past midnight on August 9, Tokyo time, Soviet infantry, armor, and air forces had launched the Manchurian Strategic Offensive Operation. Four hours later, word reached Tokyo of the Soviet Union 's official declaration of war. The senior leadership of the Japanese Army began preparations to impose martial law on the nation, with the support of Minister of War Korechika Anami, in order to stop anyone attempting to make peace. On August 7, a day after Hiroshima was destroyed, Dr. Yoshio Nishina and other atomic physicists arrived at the city, and carefully examined the damage. They then went back to Tokyo and told the cabinet that Hiroshima was indeed destroyed by a nuclear weapon. Admiral Soemu Toyoda, the Chief of the Naval General Staff, estimated that no more than one or two additional bombs could be readied, so they decided to endure the remaining attacks, acknowledging "there would be more destruction but the war would go on ''. American Magic codebreakers intercepted the cabinet 's messages. Purnell, Parsons, Tibbets, Spaatz, and LeMay met on Guam that same day to discuss what should be done next. Since there was no indication of Japan surrendering, they decided to proceed with dropping another bomb. Parsons said that Project Alberta would have it ready by August 11, but Tibbets pointed to weather reports indicating poor flying conditions on that day due to a storm, and asked if the bomb could be readied by August 9. Parsons agreed to try to do so. The city of Nagasaki had been one of the largest seaports in southern Japan, and was of great wartime importance because of its wide - ranging industrial activity, including the production of ordnance, ships, military equipment, and other war materials. The four largest companies in the city were Mitsubishi Shipyards, Electrical Shipyards, Arms Plant, and Steel and Arms Works, which employed about 90 % of the city 's labor force, and accounted for 90 % of the city 's industry. 
Although an important industrial city, Nagasaki had been spared from firebombing because its geography made it difficult to locate at night with AN / APQ - 13 radar. Unlike the other target cities, Nagasaki had not been placed off limits to bombers by the Joint Chiefs of Staff 's July 3 directive, and was bombed on a small scale five times. During one of these raids on August 1, a number of conventional high - explosive bombs were dropped on the city. A few hit the shipyards and dock areas in the southwest portion of the city, and several hit the Mitsubishi Steel and Arms Works. By early August, the city was defended by the 134th Anti-Aircraft Regiment of the 4th Anti-Aircraft Division with four batteries of 7 cm (2.8 in) anti-aircraft guns and two searchlight batteries. In contrast to Hiroshima, almost all of the buildings were of old - fashioned Japanese construction, consisting of timber or timber - framed buildings with timber walls (with or without plaster) and tile roofs. Many of the smaller industries and business establishments were also situated in buildings of timber or other materials not designed to withstand explosions. Nagasaki had been permitted to grow for many years without conforming to any definite city zoning plan; residences were erected adjacent to factory buildings and to each other almost as closely as possible throughout the entire industrial valley. On the day of the bombing, an estimated 263,000 people were in Nagasaki, including 240,000 Japanese residents, 10,000 Korean residents, 2,500 conscripted Korean workers, 9,000 Japanese soldiers, 600 conscripted Chinese workers, and 400 Allied prisoners of war in a camp to the north of Nagasaki. Responsibility for the timing of the second bombing was delegated to Tibbets. Scheduled for August 11 against Kokura, the raid was moved earlier by two days to avoid a five - day period of bad weather forecast to begin on August 10. Three bomb pre-assemblies had been transported to Tinian, labeled F - 31, F - 32, and F - 33 on their exteriors. On August 8, a dress rehearsal was conducted off Tinian by Sweeney using Bockscar as the drop airplane. Assembly F - 33 was expended testing the components and F - 31 was designated for the August 9 mission. At 03: 49 on the morning of August 9, 1945, Bockscar, flown by Sweeney 's crew, carried Fat Man, with Kokura as the primary target and Nagasaki the secondary target. The mission plan for the second attack was nearly identical to that of the Hiroshima mission, with two B - 29s flying an hour ahead as weather scouts and two additional B - 29s in Sweeney 's flight for instrumentation and photographic support of the mission. Sweeney took off with his weapon already armed but with the electrical safety plugs still engaged. During pre-flight inspection of Bockscar, the flight engineer notified Sweeney that an inoperative fuel transfer pump made it impossible to use 640 US gallons (2,400 l; 530 imp gal) of fuel carried in a reserve tank. This fuel would still have to be carried all the way to Japan and back, consuming still more fuel. Replacing the pump would take hours; moving the Fat Man to another aircraft might take just as long and was dangerous as well, as the bomb was live. Tibbets and Sweeney therefore elected to have Bockscar continue the mission. This time Penney and Cheshire were allowed to accompany the mission, flying as observers on the third plane, Big Stink, flown by the group 's operations officer, Major James I. Hopkins, Jr. 
Observers aboard the weather planes reported both targets clear. When Sweeney 's aircraft arrived at the assembly point for his flight off the coast of Japan, Big Stink failed to make the rendezvous. According to Cheshire, Hopkins was at varying heights including 9,000 feet (2,700 m) higher than he should have been, and was not flying tight circles over Yakushima as previously agreed with Sweeney and Captain Frederick C. Bock, who was piloting the support B - 29 The Great Artiste. Instead, Hopkins was flying 40 - mile (64 km) dogleg patterns. Though ordered not to circle longer than fifteen minutes, Sweeney continued to wait for Big Stink for forty minutes. Before leaving the rendezvous point, Sweeney consulted Ashworth, who was in charge of the bomb. As commander of the aircraft, Sweeney made the decision to proceed to the primary, the city of Kokura. After exceeding the original departure time limit by nearly a half - hour, Bockscar, accompanied by The Great Artiste, proceeded to Kokura, thirty minutes away. The delay at the rendezvous had resulted in clouds and drifting smoke over Kokura from fires started by a major firebombing raid by 224 B - 29s on nearby Yahata the previous day. Additionally, the Yahata Steel Works intentionally burned coal tar, to produce black smoke. The clouds and smoke resulted in 70 % of the area over Kokura being covered, obscuring the aiming point. Three bomb runs were made over the next 50 minutes, burning fuel and exposing the aircraft repeatedly to the heavy defenses around Kokura, but the bombardier was unable to drop visually. By the time of the third bomb run, Japanese antiaircraft fire was getting close, and Second Lieutenant Jacob Beser, who was monitoring Japanese communications, reported activity on the Japanese fighter direction radio bands. After three runs over the city, and with fuel running low because of the failed fuel pump, Bockscar and The Great Artiste headed for their secondary target, Nagasaki. Fuel consumption calculations made en route indicated that Bockscar had insufficient fuel to reach Iwo Jima and would be forced to divert to Okinawa, which had become entirely Allied - occupied territory only six weeks earlier. After initially deciding that if Nagasaki were obscured on their arrival the crew would carry the bomb to Okinawa and dispose of it in the ocean if necessary, Ashworth agreed with Sweeney 's suggestion that a radar approach would be used if the target was obscured. At about 07: 50 Japanese time, an air raid alert was sounded in Nagasaki, but the "all clear '' signal was given at 08: 30. When only two B - 29 Superfortresses were sighted at 10: 53, the Japanese apparently assumed that the planes were only on reconnaissance and no further alarm was given. A few minutes later at 11: 00, The Great Artiste dropped instruments attached to three parachutes. These instruments also contained an unsigned letter to Professor Ryokichi Sagane, a physicist at the University of Tokyo who studied with three of the scientists responsible for the atomic bomb at the University of California, Berkeley, urging him to tell the public about the danger involved with these weapons of mass destruction. The messages were found by military authorities but not turned over to Sagane until a month later. In 1949, one of the authors of the letter, Luis Alvarez, met with Sagane and signed the letter. At 11: 01, a last - minute break in the clouds over Nagasaki allowed Bockscar 's bombardier, Captain Kermit Beahan, to visually sight the target as ordered. 
The Fat Man weapon, containing a core of about 5 kg (11 lb) of plutonium, was dropped over the city 's industrial valley. It exploded 47 seconds later at 1,650 ± 33 ft (503 ± 10 m), above a tennis court, halfway between the Mitsubishi Steel and Arms Works in the south and the Nagasaki Arsenal in the north. This was nearly 3 km (1.9 mi) northwest of the planned hypocenter; the blast was confined to the Urakami Valley and a major portion of the city was protected by the intervening hills. The resulting explosion released the equivalent energy of 21 ± 2 kt (87.9 ± 8.4 TJ). Big Stink spotted the explosion from a hundred miles away, and flew over to observe. Bockscar flew on to Okinawa, arriving with only sufficient fuel for a single approach. Sweeney tried repeatedly to contact the control tower for landing clearance, but received no answer. He could see heavy air traffic landing and taking off from Yontan Airfield. Firing off every flare on board to alert the field to his emergency landing, the Bockscar came in fast, landing at 140 miles per hour (230 km / h) instead of the normal 120 miles per hour (190 km / h). The number two engine died from fuel starvation as he began the final approach. Touching down on only three engines midway down the landing strip, Bockscar bounced up into the air again for about 25 feet (7.6 m) before slamming back down hard. The heavy B - 29 slewed left and towards a row of parked B - 24 bombers before the pilots managed to regain control. Its reversible propellers were insufficient to slow the aircraft adequately, and with both pilots standing on the brakes, Bockscar made a swerving 90 - degree turn at the end of the runway to avoid running off it. A second engine died from fuel exhaustion before the plane came to a stop. Following the mission, there was confusion over the identification of the plane. The first eyewitness account by war correspondent William L. Laurence of The New York Times, who accompanied the mission aboard the aircraft piloted by Bock, reported that Sweeney was leading the mission in The Great Artiste. He also noted its "Victor '' number as 77, which was that of Bockscar. Laurence had interviewed Sweeney and his crew, and was aware that they referred to their airplane as The Great Artiste. Except for Enola Gay, none of the 393d 's B - 29s had yet had names painted on the noses, a fact which Laurence himself noted in his account. Unaware of the switch in aircraft, Laurence assumed Victor 77 was The Great Artiste, which was in fact, Victor 89. Although the bomb was more powerful than the one used on Hiroshima, the effect was confined by hillsides to the narrow Urakami Valley. Of 7,500 Japanese employees who worked inside the Mitsubishi Munitions plant, including "mobilized '' students and regular workers, 6,200 were killed. Some 17,000 -- 22,000 others who worked in other war plants and factories in the city died as well. Casualty estimates for immediate deaths vary widely, ranging from 22,000 to 75,000. At least 35,000 -- 40,000 people were killed and 60,000 others injured. In the days and months following the explosion, more people died from their injuries. Because of the presence of undocumented foreign workers, and a number of military personnel in transit, there are great discrepancies in the estimates of total deaths by the end of 1945; a range of 39,000 to 80,000 can be found in various studies. 
Unlike at Hiroshima, the military death toll at Nagasaki was relatively small: only 150 Japanese soldiers were killed instantly, including thirty - six from the 134th AAA Regiment of the 4th AAA Division. At least eight known POWs died from the bombing and as many as 13 may have died, including a British prisoner of war, Royal Air Force Corporal Ronald Shaw, and seven Dutch POWs. One American POW, Joe Kieyoomia, was in Nagasaki at the time of the bombing but survived, reportedly having been shielded from the effects of the bomb by the concrete walls of his cell. There were 24 Australian POWs in Nagasaki, all of whom survived. The radius of total destruction was about 1 mi (1.6 km), followed by fires across the northern portion of the city to 2 mi (3.2 km) south of the bomb. About 58 % of the Mitsubishi Arms Plant was damaged, and about 78 % of the Mitsubishi Steel Works. The Mitsubishi Electric Works suffered only 10 % structural damage as it was on the border of the main destruction zone. The Nagasaki Arsenal was destroyed in the blast. Although many fires likewise burnt following the bombing, no firestorm developed in Nagasaki: in contrast to Hiroshima, where sufficient fuel density was available, the damaged areas did not furnish enough fuel to generate the phenomenon. Instead, the ambient wind at the time pushed the spread of the fires along the valley. As in Hiroshima, the bombing badly dislocated the city 's medical facilities. A makeshift hospital was established at the Shinkozen Primary School, which served as the main medical centre. The trains were still running, and evacuated many victims to hospitals in nearby towns. A medical team from a naval hospital reached the city in the evening, and fire - fighting brigades from the neighboring towns assisted in fighting the fires. Takashi Nagai was a doctor working in the radiology department of Nagasaki Medical College Hospital. He received a serious injury that severed his right temporal artery, but joined the rest of the surviving medical staff in treating bombing victims. Groves expected to have another atomic bomb ready for use on August 19, with three more in September and a further three in October. On August 10, he sent a memorandum to Marshall in which he wrote that "the next bomb... should be ready for delivery on the first suitable weather after 17 or 18 August. '' Marshall endorsed the memo with the hand - written comment, "It is not to be released over Japan without express authority from the President '', something Truman had requested that day. This modified the previous order that the target cities were to be attacked with atomic bombs "as made ready ''. There was already discussion in the War Department about conserving the bombs then in production for Operation Downfall, and Marshall suggested to Stimson that the remaining cities on the target list be spared attack with atomic bombs. Two more Fat Man assemblies were readied, and scheduled to leave Kirtland Field for Tinian on August 11 and 14, and Tibbets was ordered by LeMay to return to Albuquerque, New Mexico, to collect them. At Los Alamos, technicians worked 24 hours straight to cast another plutonium core. Although cast, it still needed to be pressed and coated, which would take until August 16. Therefore, it could have been ready for use on August 19. Unable to reach Marshall, Groves ordered on his own authority on August 13 that the core should not be shipped. Until August 9, Japan 's war council still insisted on its four conditions for surrender.
The full cabinet met at 14: 30 on August 9, and spent most of the day debating surrender. Anami conceded that victory was unlikely, but argued in favour of continuing the war nonetheless. The meeting ended at 17: 30, with no decision having been reached. Suzuki went to the palace to report on the outcome of the meeting, where he met with Kōichi Kido, the Lord Keeper of the Privy Seal of Japan. Kido informed him that the emperor had agreed to hold an imperial conference, and gave a strong indication that the emperor would consent to surrender on condition that kokutai be preserved. A second cabinet meeting was held at 18: 00. Only four ministers supported Anami 's position of adhering to the four conditions, but since cabinet decisions had to be unanimous, no decision was reached before it ended at 22: 00. Calling an imperial conference required the signatures of the prime minister and the two service chiefs, but the Chief Cabinet Secretary Hisatsune Sakomizu had already obtained signatures from Toyoda and General Yoshijirō Umezu in advance, and he reneged on his promise to inform them if a meeting was to be held. The meeting commenced at 23: 50. No consensus had emerged by 02: 00, but the emperor gave his "sacred decision '', authorizing the Foreign Minister, Shigenori Tōgō, to notify the Allies that Japan would accept their terms on one condition, that the declaration "does not comprise any demand which prejudices the prerogatives of His Majesty as a Sovereign ruler. '' On August 12, the Emperor informed the imperial family of his decision to surrender. One of his uncles, Prince Asaka, then asked whether the war would be continued if the kokutai could not be preserved. Hirohito simply replied, "Of course. '' As the Allied terms seemed to leave intact the principle of the preservation of the Throne, Hirohito recorded on August 14 his capitulation announcement, which was broadcast to the Japanese nation the next day despite a short rebellion by militarists opposed to the surrender. In his declaration, Hirohito referred to the atomic bombings, and did not explicitly mention the Soviets as a factor for surrender: Despite the best that has been done by every one -- the gallant fighting of military and naval forces, the diligence and assiduity of Our servants of the State and the devoted service of Our one hundred million people, the war situation has developed not necessarily to Japan 's advantage, while the general trends of the world have all turned against her interest. Moreover, the enemy now possesses a new and terrible weapon with the power to destroy many innocent lives and do incalculable damage. Should we continue to fight, not only would it result in an ultimate collapse and obliteration of the Japanese nation, but also it would lead to the total extinction of human civilization. Such being the case, how are we to save the millions of our subjects, or to atone ourselves before the hallowed spirits of our imperial ancestors? This is the reason why we have ordered the acceptance of the provisions of the joint declaration of the powers. In his "Rescript to the Soldiers and Sailors '' delivered on August 17, however, he stressed the impact of the Soviet invasion on his decision to surrender.
On August 10, 1945, the day after the Nagasaki bombing, Yōsuke Yamahata, correspondent Higashi, and artist Yamada arrived in the city with orders to record the destruction for maximum propaganda purposes. Yamahata took scores of photographs, and on August 21 they appeared in Mainichi Shimbun, a popular Japanese newspaper. Leslie Nakashima filed the first personal account of the scene to appear in American newspapers. A version of his August 27 UPI article appeared in The New York Times on August 31. Wilfred Burchett was the first western journalist to visit Hiroshima after the bombing, arriving alone by train from Tokyo on September 2. His Morse code dispatch, "The Atomic Plague '', was printed by the Daily Express newspaper in London on September 5, 1945. Nakashima 's and Burchett 's reports were the first public reports to mention the effects of radiation and nuclear fallout -- radiation burns and radiation poisoning. Burchett 's reporting was unpopular with the U.S. military, who accused Burchett of being under the sway of Japanese propaganda, and suppressed a supporting story submitted by George Weller of the Chicago Daily News. Laurence dismissed the reports on radiation sickness as Japanese efforts to undermine American morale, ignoring his own account published one week earlier. A member of the U.S. Strategic Bombing Survey, Lieutenant Daniel McGovern, used a film crew to document the effects of the bombings in early 1946. The film crew shot 90,000 ft (27,000 m) of film, resulting in a three - hour documentary titled The Effects of the Atomic Bombs Against Hiroshima and Nagasaki. The documentary included images from hospitals showing the human effects of the bomb; it showed burned - out buildings and cars, and rows of skulls and bones on the ground. It was classified "secret '' for the next 22 years. Motion picture company Nippon Eigasha started sending cameramen to Nagasaki and Hiroshima in September 1945. On October 24, 1945, a U.S. military policeman stopped a Nippon Eigasha cameraman from continuing to film in Nagasaki. All Nippon Eigasha 's reels were confiscated by the American authorities, but they were later requested by the Japanese government and declassified. The public release of film footage of the city post-attack, and some research about the effects of the attack, was restricted during the occupation of Japan, but the Hiroshima - based magazine Chugoku Bunka, in its first issue published on March 10, 1946, devoted itself to detailing the damage from the bombing. The book Hiroshima, written by Pulitzer Prize winner John Hersey and originally published in article form in the popular magazine The New Yorker on August 31, 1946, is reported to have reached Tokyo in English by January 1947; the translated version was released in Japan in 1949. It narrated the stories of the lives of six bomb survivors from immediately prior to, and months after, the dropping of the Little Boy bomb. Beginning in 1974, drawings and artwork made by the survivors of the bombings were compiled, with completion in 1977; published in both book and exhibition format, the collection was titled The Unforgettable Fire. The bombing amazed Otto Hahn and other German atomic scientists, whom the British held at Farm Hall in Operation Epsilon. Hahn stated that he had not believed an atomic weapon "would be possible for another twenty years ''; Werner Heisenberg did not believe the news at first. Carl Friedrich von Weizsäcker said "I think it 's dreadful of the Americans to have done it.
I think it is madness on their part '', but Heisenberg replied, "One could equally well say ' That 's the quickest way of ending the war ' ''. Hahn was grateful that the German project had not succeeded in developing "such an inhumane weapon ''; Karl Wirtz observed that even if it had, "we would have obliterated London but would still not have conquered the world, and then they would have dropped them on us ''. Hahn told the others, "Once I wanted to suggest that all uranium should be sunk to the bottom of the ocean ''. The Vatican agreed; L'Osservatore Romano expressed regret that the bomb 's inventors did not destroy the weapon for the benefit of humanity. Rev. Cuthbert Thicknesse, the Dean of St Albans, prohibited using St Albans Abbey for a thanksgiving service for the war 's end, calling the use of atomic weapons "an act of wholesale, indiscriminate massacre ''. Nonetheless, news of the atomic bombing was greeted enthusiastically in the U.S.; a poll in Fortune magazine in late 1945 showed a significant minority of Americans (23 %) wishing that more atomic bombs could have been dropped on Japan. The initial positive response was supported by the imagery presented to the public (mainly the powerful images of the mushroom cloud). During this time in America, it was a common practice for editors to keep graphic images of death out of films, magazines, and newspapers. Frequent estimates are that 140,000 people in Hiroshima (39 % of the population) and 70,000 people in Nagasaki (28 % of the population) died in 1945, though the number who died immediately as a result of exposure to the blast, heat, or radiation is unknown. One Atomic Bomb Casualty Commission report discusses 6,882 people examined in Hiroshima and 6,621 people examined in Nagasaki, largely within 2,000 meters of the hypocenters, who suffered injuries from the blast and heat but died from complications frequently compounded by acute radiation syndrome (ARS), all within about 20 -- 30 days. The best known of these was Midori Naka, who had been some 650 meters from the hypocenter at Hiroshima and later traveled to Tokyo; her death on August 24, 1945 was the first to be officially certified as a result of radiation poisoning, or, as many referred to it, "Atomic bomb disease ''. It was unappreciated at the time, but the average radiation dose that will kill approximately 50 % of adults, the LD50, is approximately halved (that is, smaller doses become lethal) when the individual experiences concurrent blast or burn polytraumatic injuries. Conventional skin injuries that cover a large area frequently result in bacterial infection; the risk of sepsis and death is increased when a usually non-lethal radiation dose moderately suppresses the white blood cell count. In the spring of 1948, the Atomic Bomb Casualty Commission (ABCC) was established in accordance with a presidential directive from Truman to the National Academy of Sciences -- National Research Council to conduct investigations of the late effects of radiation among the survivors in Hiroshima and Nagasaki. In 1956, the ABCC published The Effect of Exposure to the Atomic Bombs on Pregnancy Termination in Hiroshima and Nagasaki. The ABCC became the Radiation Effects Research Foundation (RERF) on April 1, 1975. A binational organization run by both the United States and Japan, the RERF is still in operation today.
Cancers do not emerge immediately after exposure to radiation; instead, radiation - induced cancer has a minimum latency period of some five years, and leukemia of some two years, with leukemia incidence peaking around six to eight years after exposure. Dr. Jarrett Foley published the first major reports on the significantly increased incidence of leukemia among survivors; almost all cases of leukemia over the following 50 years were in people exposed to more than 1 Gy. In the 1987 Life Span Study, conducted by the Radiation Effects Research Foundation, a statistical excess of 507 cancers of undefined lethality, strictly dependent on distance from the hypocenter, was observed in 79,972 hibakusha who were still living between 1958 and 1987 and who took part in the study. As the epidemiological study continues with time, the RERF estimates that from 1950 to 2000, 46 % of leukemia deaths (which may include that of Sadako Sasaki) and 11 % of solid cancers of unspecified lethality were likely due to radiation from the bombs or other post-attack effects in the cities, with the statistical excess being 200 leukemia deaths and 1,700 solid cancers of undeclared lethality. Both of these statistics are derived from the observation of approximately half of the total survivors, strictly those who took part in the study. During the preimplantation period, that is, 1 -- 10 days following conception, intrauterine radiation exposure of "at least 0.2 Gy '' can cause complications of implantation and death of the human embryo. The number of miscarriages caused by the radiation from the bombings during this radiosensitive period is not known. One of the early studies conducted by the ABCC was on the outcome of pregnancies occurring in Hiroshima and Nagasaki, and in a control city, Kure, located 18 mi (29 km) south of Hiroshima, in order to discern the conditions and outcomes related to radiation exposure. James V. Neel led the study, which found that the overall number of birth defects was not significantly higher among the children of survivors who were pregnant at the time of the bombings. He also studied the longevity of the children who survived the bombings of Hiroshima and Nagasaki, reporting that between 90 and 95 percent were still living 50 years later. The National Academy of Sciences, however, raised the possibility that Neel 's procedure did not filter the Kure population for possible radiation exposure, which could bias the results. Overall, a statistically insignificant increase in birth defects occurred directly after the bombings of Nagasaki and Hiroshima when the cities were taken as wholes. In terms of distance from the hypocenters, however, Neel and others noted that in approximately 50 people who were of an early gestational age at the time of the bombing and who were all within about 1 kilometre (0.62 mi) of the hypocenter, an increase in microencephaly and anencephaly was observed at birth, with the incidence of these two particular malformations being nearly three times what was to be expected when compared to the control group in Kure, where approximately 20 cases were observed in a similar sample size. In 1985, Johns Hopkins University geneticist James F. Crow examined Neel 's research and confirmed that the number of birth defects was not significantly higher in Hiroshima and Nagasaki.
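To give the Life Span Study 's excess - cancer figure a rough per - person scale, one can divide the excess by the number of study participants quoted above; treating the 79,972 participants as the denominator is an assumption made here only for illustration, since the underlying analysis stratifies by dose and distance.

\[ \frac{507}{79{,}972} \approx 0.0063 \approx 0.63\% \]

That is, on the order of six excess cancers per thousand participants over the 1958 -- 1987 observation window.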
Many members of the ABCC and its successor, the Radiation Effects Research Foundation (RERF), were still looking for possible birth defects among the survivors decades later, but found no evidence that they were significantly common among the survivors or inherited by the children of survivors. Despite the small sample size of 1,600 to 1,800 persons who came forth as prenatally exposed at the time of the bombings, were within close proximity to the two hypocenters, and survived both the in utero absorption of a substantial dose of radiation and the malnourished post-attack environment, data from this cohort do support an increased risk of severe mental retardation (SMR), observed in some 30 individuals, SMR being a common outcome of the aforementioned microencephaly. While the small number of cases, just 30 individuals out of 1,800, prevents a definitive determination of a threshold point, the data collected suggest that the threshold intrauterine or fetal dose for SMR during the most radiosensitive period of cognitive development, when there is the largest number of undifferentiated neural cells (8 to 15 weeks post-conception), begins at approximately 0.09 to 0.15 Gy, with the risk then increasing linearly to a 43% rate of SMR when the fetal dose reaches 1 Gy at any point during these weeks of rapid neurogenesis. On either side of this radiosensitive window, however, none of those prenatally exposed to the bombings at an age of less than 8 weeks, that is, prior to synaptogenesis, or at a gestational age of more than 26 weeks, "were observed to be mentally retarded", the condition therefore being isolated to those who were 8-26 weeks of age and absorbed more than approximately 0.09 to 0.15 Gy of prompt radiation energy. Examination of the prenatally exposed in terms of IQ performance and school records determined the beginning of a statistically significant reduction in both when exposed to greater than 0.1 to 0.5 gray during the same gestational period of 8-25 weeks. Outside this period, however, at less than 8 weeks and greater than 26 weeks after conception, "there is no evidence of a radiation-related effect on scholastic performance." The reporting of doses in terms of absorbed energy (in units of Gy and rad), rather than in the biologically weighted sievert, is typical of both the SMR and the cognitive performance data. The reported threshold dose variance between the two cities is suggested to be a manifestation of the difference between X-ray and neutron absorption: Little Boy emitted substantially more neutron flux, whereas the Baratol that surrounded the core of Fat Man filtered or shifted the absorbed neutron-radiation profile, so that the dose of radiation energy received in Nagasaki was mostly from exposure to X-rays/gamma rays, in contrast to the environment within 1,500 meters of the hypocenter at Hiroshima, where the in utero dose depended more on the absorption of neutrons, which have a higher biological effect per unit of energy absorbed.
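The threshold-and-linear-increase description above can be read as a simple piecewise dose-response relationship. The sketch below is only an illustration of that reading, assuming a single threshold of 0.12 Gy (the midpoint of the quoted 0.09-0.15 Gy range) and a straight-line rise to the quoted 43% SMR rate at 1 Gy for exposure at 8-26 weeks post-conception; the function name and the interpolation are assumptions for illustration, not the RERF's actual fitted model.

```python
# Illustrative piecewise-linear reading of the SMR figures quoted above.
# Assumptions: threshold of 0.12 Gy (midpoint of the quoted 0.09-0.15 Gy range),
# linear rise to a 43% SMR rate at a fetal dose of 1 Gy, and zero excess risk
# outside the 8-26 week radiosensitive window. Not the RERF's fitted model.

def smr_risk(fetal_dose_gy: float, weeks_post_conception: float,
             threshold_gy: float = 0.12, risk_at_1_gy: float = 0.43) -> float:
    """Return an illustrative probability of severe mental retardation (SMR)."""
    if not 8 <= weeks_post_conception <= 26:
        return 0.0                    # no excess SMR observed outside this window
    if fetal_dose_gy <= threshold_gy:
        return 0.0                    # below the approximate threshold
    slope = risk_at_1_gy / (1.0 - threshold_gy)
    # Extrapolation much beyond 1 Gy is not meaningful for this sketch.
    return slope * (fetal_dose_gy - threshold_gy)

if __name__ == "__main__":
    for dose in (0.05, 0.15, 0.5, 1.0):
        print(f"{dose:.2f} Gy at 12 weeks -> {smr_risk(dose, 12):.0%}")
```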
In the radiation dose reconstruction work, which was also informed by the 1962 BREN Tower Japanese-city analog, the estimated dosimetry at Hiroshima still has the largest uncertainty, as the Little Boy bomb design was never tested before deployment or afterward; the estimated radiation profile absorbed by individuals at Hiroshima therefore required greater reliance on calculations than did the Japanese soil, concrete and roof-tile measurements, which began to reach accurate levels, and thereby to inform researchers, in the 1990s. Many other investigations into cognitive outcomes, such as schizophrenia as a result of prenatal exposure, have been conducted, with "no statistically significant linear relationship seen"; there is a suggestion that in the most extremely exposed, those who survived within a kilometer or so of the hypocenters, a trend emerges akin to that seen in SMR, though the sample size is too small to determine it with any significance. The survivors of the bombings are called hibakusha (被爆者, Japanese pronunciation: (çibakɯ̥ɕa)), a Japanese word that literally translates to "explosion-affected people". The Japanese government has recognized about 650,000 people as hibakusha. As of March 31, 2018, 154,859 were still alive, mostly in Japan. The government of Japan recognizes about 1% of these as having illnesses caused by radiation. The memorials in Hiroshima and Nagasaki contain lists of the names of the hibakusha who are known to have died since the bombings. Updated annually on the anniversaries of the bombings, as of August 2018 the memorials record the names of almost 495,000 hibakusha: 314,118 in Hiroshima and 179,226 in Nagasaki. If they discuss their background, hibakusha and their children were (and still are) victims of fear-based discrimination and exclusion when it comes to prospects of marriage or work, owing to public ignorance about the consequences of radiation sickness and about the fact that the low doses the majority received were less than those of a routine diagnostic x-ray; much of the public, however, persists in the belief that the hibakusha carry some hereditary or even contagious disease. This is despite the fact that no statistically demonstrable increase in birth defects or congenital malformations was found among the later conceived children born to survivors of the nuclear weapons used at Hiroshima and Nagasaki, or indeed has been found in the later conceived children of cancer survivors who had previously received radiotherapy. The surviving women of Hiroshima and Nagasaki who could conceive and who were exposed to substantial amounts of radiation went on to have children with no higher incidence of abnormalities or birth defects than the rate observed in the Japanese average. A study of the long-term psychological effects of the bombings on the survivors found that even 17-20 years after the bombings had occurred, survivors showed a higher prevalence of anxiety and somatization symptoms. Perhaps as many as 200 people from Hiroshima sought refuge in Nagasaki. The 2006 documentary Twice Survived: The Doubly Atomic Bombed of Hiroshima and Nagasaki documented 165 nijū hibakusha (lit. double explosion-affected people), nine of whom claimed to have been in the blast zone in both cities. On March 24, 2009, the Japanese government officially recognized Tsutomu Yamaguchi as a double hibakusha. He was confirmed to have been 3 km (1.9 mi) from ground zero in Hiroshima on a business trip when the bomb was detonated.
He was seriously burnt on his left side and spent the night in Hiroshima. He arrived at his home city of Nagasaki on August 8, the day before the bombing, and he was exposed to residual radiation while searching for his relatives. He was the first officially recognized survivor of both bombings. He died on January 4, 2010, at the age of 93, after a battle with stomach cancer. During the war, Japan brought as many as 670,000 Korean conscripts to Japan to work as forced labor. About 5,000-8,000 Koreans were killed in Hiroshima and another 1,500-2,000 died in Nagasaki. For many years, Korean survivors had a difficult time fighting for the same recognition as hibakusha that was afforded to all Japanese survivors, a situation which resulted in them being denied free health benefits in Japan. Most issues were eventually addressed in 2008 through lawsuits. Hiroshima was subsequently struck by Typhoon Ida on September 17, 1945. More than half the bridges were destroyed, and the roads and railroads were damaged, further devastating the city. The population increased from 83,000 soon after the bombing to 146,000 in February 1946. The city was rebuilt after the war, with help from the national government through the Hiroshima Peace Memorial City Construction Law passed in 1949. It provided financial assistance for reconstruction, along with donated land that was previously owned by the national government and used for military purposes. In 1949, a design was selected for the Hiroshima Peace Memorial Park. Hiroshima Prefectural Industrial Promotion Hall, the closest surviving building to the location of the bomb's detonation, was designated the Hiroshima Peace Memorial. The Hiroshima Peace Memorial Museum was opened in 1955 in the Peace Park. Hiroshima also contains a Peace Pagoda, built in 1966 by Nipponzan-Myōhōji. Nagasaki was also rebuilt after the war, but was dramatically changed in the process. The pace of reconstruction was initially slow, and the first simple emergency dwellings were not provided until 1946. The focus of redevelopment was the replacement of war industries with foreign trade, shipbuilding and fishing. This was formally declared when the Nagasaki International Culture City Reconstruction Law was passed in May 1949. New temples were built, as well as new churches, owing to an increase in the presence of Christianity. Some of the rubble was left as a memorial, such as a torii at Sannō Shrine and an arch near ground zero. New structures were also raised as memorials, such as the Nagasaki Atomic Bomb Museum, which was opened in the mid-1990s. The role of the bombings in Japan's surrender, and the ethical, legal, and military controversies surrounding the United States' justification for them, have been the subject of scholarly and popular debate. On one hand, it has been argued that the bombings caused the Japanese surrender, thereby preventing casualties that an invasion of Japan would have involved. Stimson talked of saving one million casualties. The naval blockade might have starved the Japanese into submission without an invasion, but this would also have resulted in many more Japanese deaths. It has also been pointed out that the conventional bombing of Japan caused just as much destruction as the atomic bombs, if not more. Indeed, Operation Meetinghouse, known as the Great Tokyo Air Raid in Japan, was the single most devastating air raid of the war, with a higher death toll than either of the two atomic bombings.
Japanese historian Tsuyoshi Hasegawa argued that the entry of the Soviet Union into the war against Japan "played a much greater role than the atomic bombs in inducing Japan to surrender because it dashed any hope that Japan could terminate the war through Moscow's mediation". A view among critics of the bombings, popularized by American historian Gar Alperovitz in 1965, is the idea of atomic diplomacy: that the United States used nuclear weapons in order to intimidate the Soviet Union in the early stages of the Cold War. Although not accepted by mainstream historians, this became the position in Japanese school history textbooks. Those who oppose the bombings give other reasons for their view, among them: a belief that atomic bombing is fundamentally immoral, that the bombings counted as war crimes, that they constituted state terrorism, and that they involved racism against and the dehumanization of the Japanese people. Like the way it began, the manner in which World War II ended cast a long shadow over international relations for decades to come. By June 30, 1946, there were components for only nine atomic bombs in the US arsenal, all Fat Man devices identical to the one used in the bombing of Nagasaki. The nuclear weapons were handmade devices, and a great deal of work remained to improve their ease of assembly, safety, reliability and storage before they were ready for production. There were also many improvements to their performance that had been suggested or recommended, but that had not been possible under the pressure of wartime development. The Chairman of the Joint Chiefs of Staff, Fleet Admiral William D. Leahy, had decried the use of the atomic bombs as adopting "an ethical standard common to the barbarians of the Dark Ages", but in October 1947 he reported a military requirement for 400 bombs. The American monopoly on nuclear weapons lasted only four years before the Soviet Union detonated an atomic bomb in September 1949. The United States responded with the development of the hydrogen bomb, a nuclear weapon a thousand times as powerful as the bombs that devastated Hiroshima and Nagasaki. Such ordinary fission bombs would henceforth be regarded as small tactical nuclear weapons. By 1986, the United States would have 23,317 nuclear weapons, while the Soviet Union had 40,159. By 2017, nine nations had nuclear weapons, but Japan was not one of them. Japan reluctantly signed the Treaty on the Non-Proliferation of Nuclear Weapons in February 1970, but it still sheltered under the American nuclear umbrella. American nuclear weapons were stored on Okinawa, and sometimes in Japan itself, albeit in contravention of agreements between the two nations. Lacking the resources to fight the Soviet Union using conventional forces, the Western Alliance came to depend on nuclear weapons to defend itself during the Cold War, a policy that became known in the 1950s as the New Look. In the decades after Hiroshima and Nagasaki, the United States would threaten to use its nuclear weapons many times.
the diamond of drury lane by julia golding
The diamond of Drury Lane - wikipedia The Diamond of Drury Lane is a children's historical novel by Julia Golding which won the Nestle Children's Book Prize Gold Award and the Waterstone's Children's Book Prize in 2006. The book is set on 1 January 1790. An orphaned 13-year-old girl named Catherine 'Cat' Royal lives in the Theatre Royal, after the owner, Mr. Sheridan, who named her after the theatre, found her as a baby. She knows the theatre and its surroundings in late 18th-century England well. One night Cat overhears Mr. Sheridan and his colleague Marchmont discussing a valuable diamond hidden in the theatre. Cat is intrigued, and she promises to protect it for Mr. Sheridan after he tells her that nobody can know about it, as it is difficult to know whom to trust with this secret. Cat befriends an African boy violinist, Pedro, who arrives to be the musician's apprentice. Cat also meets the aristocratic Avon family, the Duke and Duchess of Avon, and their children, Lord Francis and Lady Elizabeth, who are not as arrogant as other wealthy people and actually want to be Cat's friends. She also meets Johnny, the new prompt, who has an unmistakable talent for art, specifically controversial political cartoons, and a mysterious past of which he does not speak much, other than the fact that he ran away from home at a young age. She learns that Johnny is the "Captain Sparkler" accused of treason for the cartoons against the King of England. Cat soon also finds out that Johnny has had a romantic past with Lady Elizabeth, and would possibly be bidding for her in the marriage market if his situation were different. Pedro is told of the diamond by Cat, which she later regrets when she finds him looking for it in Mr Sheridan's office, hoping it will pay for a boat to his homeland in Africa, where he can escape the grasp of slave traders. This is before Cat learns that the diamond is not a real diamond but a metaphor used by Johnny's friends to refer to him, so as to avoid giving away valuable information to those who might hand him over to the court. He is of value because of the reward for his capture. However, a street gang led by Cat's enemy Billy "Boil" Shepherd ("Boil" being a reference to the boil on his nose) learns about the diamond and assumes it is a real diamond. He breaks into the theatre to steal it, bringing a few members of the gang with him. Cat and Pedro manage to evade him, but Cat is arrested for having money that was supposed to be used for smuggling Johnny out of England, where he will be safe. The money was really from pawned jewels belonging to Elizabeth, which she had given Cat permission to take to the pawnbrokers, but the people who arrest her do not believe this. Billy is also arrested for stealing the money, although it was not actually he who stole it but two members of his gang. Johnny's father Lord Fitzroy, however, knows the true reason why Cat had the money in the first place, and arranges for her to be set free. Unfortunately, Billy is also set free later on, and the two gang members replace him in prison. Johnny eventually gets out of England with the help of Frank (Lord Francis), Lizzie (Lady Elizabeth) and Cat. Cat is reunited with her theatre and everyone in it. In the end, Mr. Sheridan tells her that there was no real diamond, which Cat is already aware of.
Cat feels ashamed for not realizing it earlier, but Mr Sheridan reminds her that in actual fact Johnny never was the so-called "diamond", and that Cat is the true diamond of the Theatre Royal, a theatre that would be nothing without its "Cat". There are five sequels: Cat among the Pigeons, Den of Thieves, Cat O'Nine Tails, Black Heart of Jamaica and Cat's Cradle, along with "Middle Passage".
once upon a time in mumbaai based on whose story
Once Upon a Time in Mumbaai - Wikipedia Once Upon a Time in Mumbaai is a 2010 Indian gangster film written by Rajat Arora, directed by Milan Luthria and produced by Ekta Kapoor. It stars Ajay Devgn, Emraan Hashmi, Kangana Ranaut, Prachi Desai and Randeep Hooda. The film was produced under Balaji Motion Pictures and released on 30 July 2010. Once Upon A Time in Mumbaai received generally positive reviews from critics and was a box office success. The film is loosely based on the lives of Mumbai underworld gangsters Haji Mastan and Dawood Ibrahim. Its sequel, Once Upon a Time in Mumbai Dobaara!, was released on 15 August 2013. It featured Akshay Kumar replacing Hashmi from the first film, alongside Imran Khan. Filming for the sequel began in August 2012, apparently in Qatar. The sequel received negative reviews from both critics and audiences and performed poorly at the box office. The film opens with a suicide attempt by Assistant Commissioner of Police (ACP) Agnel Wilson (Randeep Hooda) in the aftermath of the Bombay bombings of 1993. When questioned by his superior over his actions, he breaks down and claims that the recent tragic events are his own fault. Wilson recounts that 18 years ago, when he was posted as the ACP in the Mumbai crime branch, his inability to take the necessary action led to the rise of Shoaib Khan (Emraan Hashmi), a dreaded gangster who played a central role in the bombings. Throughout the film, Wilson narrates the story of 1970s Bombay, when it was ruled by a kind-hearted smuggler, Sultan Mirza (Ajay Devgn), and how Mirza's eventual downfall led to Shoaib's rise to power. After his hometown in Madras is hit by a flood, a young Mirza arrives in Mumbai, where he lands a job as a coal shoveller. In spite of his meager earnings, the boy never fails to help the poor and needy, which soon gains him their respect and admiration. Mirza is given the nickname "Sultan". As a grown man, Sultan Mirza becomes the kingpin of Mumbai's smuggling underworld. Through his influence, Mirza peacefully divides the city among four gangsters, thus thwarting police efforts to curb illegal activities. Despite being a criminal, Sultan Mirza is portrayed as a man of principle with a heart of gold and a godfather-like figure to the people. He even refrains from smuggling contraband, as it is against his Muslim faith. Mirza has a crush on Bollywood actress Rehana (Kangana Ranaut), and eventually the two begin dating. Sultan invests black money in her upcoming films. ACP Wilson moves to stop Rehana's films funded by Sultan. Later, Sultan and Rehana frame Wilson to make it look as if he is accepting a bribe, which damages his credibility. Meanwhile, Shoaib is, even in childhood, a very ambitious person with a dark and daring character. He is frequently involved in petty theft. His father, Hussain Khan (Asif Basra), a sub-inspector with the Bombay Police, tries in vain to guide and control his son. His anger against Shoaib began years earlier, when Shoaib and his best friend Javed were caught red-handed stealing money by a man; to teach his son a lesson, Khan slapped him five times and locked him in jail, but Shoaib angered him further by making a deal with Wilson, saying that he wanted to follow another path. Worried, the father turns to Sultan for help. Sultan agrees and helps the young man set up an electronics shop. But Shoaib is unsatisfied, as his only real ambition is to become rich and powerful like Sultan Mirza, who is his idol.
Shoaib's beautiful girlfriend, Mumtaz (Prachi Desai), works in a local jewellery shop, which Shoaib visits often, to the aggravation of the girl's boss. Shoaib gives her a beautiful necklace which, unbeknownst to Mumtaz, he had stolen from a lady during a home robbery. Later, that lady comes to the shop with her husband to buy more jewellery. The lady soon recognises her own necklace being worn by Mumtaz; Mumtaz then admits to the outraged customer that her boyfriend had given the item to her. The lady and her husband demand she take them to her boyfriend's shop, where they confront him. This enrages Shoaib, who beats up the husband and destroys his own shop. Shoaib goes to Sultan and asks to be a part of his crime ring. Seeing his potential, Sultan agrees to take him under his wing. Shoaib learns the tricks of the trade and soon becomes Sultan's trusted aide. ACP Wilson hatches a plan to use Shoaib's reckless ambition for quick money and power as a way to cause the downfall of Sultan. Wilson even refrains from killing Sultan and Shoaib when he has the opportunity. Wilson's plan backfires, however. Finally, when Shoaib becomes invincible, Wilson blames himself for the subsequent catastrophe, as he now cannot stop Shoaib's rise to power. Sultan decides to hand over his power to Shoaib and opts to enter state politics. He travels to Delhi to meet the Home Minister of India. Shoaib's unscrupulous ambitions lead him to carry out trades and acts which Sultan himself would strongly condemn and abhor. Shoaib starts manufacturing illicit liquor, accepts contract killings, invests in drug peddling and runs extortion rackets. When Sultan returns to Bombay, he learns of Shoaib's misdeeds and is outraged. He finds Shoaib at a party and slaps him in public for his unethical activities, stating that he can never really be like Sultan. This infuriates Shoaib, and he plots revenge, as he now knows that he and Sultan cannot possibly rule Mumbai together owing to Mirza's strong principles and moral ethos. The next day, as Sultan campaigns for his new party, Shoaib appears and assassinates Sultan Mirza while he is addressing the people at the rally, as a horrified Wilson looks on, thus ending the reign of the smuggler who was loved by his people. In his narration, Wilson laments that he and the police are responsible for the bombings because of their lack of forethought. Wilson also says that Shoaib, as Mumbai's new underworld kingpin, now rules Mumbai despite living abroad, and that the people are now forever at his mercy; he has since established a global smuggling empire, and no government or force can reach him now. The film was made on a budget of ₹38 crore (₹28 crore for production and ₹10 crore for publicity and advertising). The film depicts the growth of the Mumbai underworld, from crime and smuggling in its early stages to its connection with international terrorism in recent times. It is believed to be based on the lives of real-life gangsters Haji Mastan and Dawood Ibrahim, portrayed by the characters Sultan and Shoaib, respectively. Originally Sanjay Dutt was chosen to play the role of Haji Mastan, but the role went to Ajay Devgn instead. Rajeev Masand of CNN-IBN rated the film 2.5 out of 5, saying, "The film is watchable and enjoyable in parts even, but it doesn't quite pull off the retro chic tone it was going for". IANS rated the movie 3.5 out of 5, saying, "Rajat Arora's dialogues flow from the storytelling in a smooth flow of poetry and street wisdom."
Taran Adarsh of Bollywood Hungama gave it 4/5 and called it "An outstanding Cinematic experience". Nikhat Kazmi of the Times of India gave it 4/5 and stated, "Once Upon A Time in Mumbaai offers you both substance and soul, even as it dabbles with a slice of reality". Rediff gave it 4/5 and said, "Book your tickets now". Once Upon A Time in Mumbaai managed to have a decent weekend despite starting slowly. The film picked up from Friday evening and managed to have good Saturday and Sunday collections. The approximate breakdowns are 5.50 crore (Friday), 7 crore (Saturday) and 7.75 crore (Sunday) for a 20.25 crore weekend. The film grossed Rs. 58.50 crore in India at the end of its ninth week. Once Upon A Time in Mumbaai was declared a hit by Box Office India. In its opening weekend the film showed day-wise growth in the U.K. (Friday £15,755, Saturday £19,381 and Sunday £19,644) and a decent start in the U.S. (approx. $137,000 at 32 venues). In its second weekend, the film collected £16,249 on 14 screens, with the per-screen average working out to £1,161, for a total of £107,988 at the U.K. box office. In its third weekend, the film collected £5,909 on 8 screens, with the per-screen average working out to £739 (total: £122,257 in the U.K.). In its fourth weekend in the U.K., the film collected £743 on four screens, with the per-screen average working out to £186 (total: £126,696). In its fifth weekend in the U.K., the film collected £105 on two screens, with the per-screen average working out to £53 (total: £127,338). In its fifth weekend at the U.S. box office, the film collected $1,131 on two screens, with the per-screen average working out to $566 (total: $302,862). The film received many awards at several award functions. Ajay Devgn and Prachi Desai received accolades for their nominations, while other awards were won for the film's music, playback and technical direction. The film's songs were released on 28 June 2010. There were a total of 14 songs composed by Pritam, with lyrics penned by Irshad Kamil, Neelesh Misra and Amitabh Bhattacharya. The film score was composed by Sandeep Shirodkar. The song "Parda" is a medley containing samples from the following 1970s Bollywood songs: "Duniya Mein Logon Ko" (Apna Desh) and "Piya Tu Ab To Aaja" (Caravan). Due to the film's commercial and critical success, a sequel was planned. Akshay Kumar and Imran Khan were roped in as the male leads. The sequel, Once Upon Ay Time in Mumbai Dobaara!, features Sonakshi Sinha playing the role of actress Mandakini, while Sonali Bendre was also roped in for a role. The film started shooting in August 2012 and was released on 15 August 2013, thereby avoiding a direct clash with the Rohit Shetty directorial Chennai Express, starring Shah Rukh Khan and Deepika Padukone, which had released one week earlier. The film was a flop at the box office, as it had a very slow opening with very low occupancy, while Chennai Express went on breaking various box office records throughout its release.
when did the xbox 360 elite come out
Xbox 360 - Wikipedia (Infobox summary: media: DVD, CD, digital distribution; original models: 2.4 GHz wireless, 3 × USB 2.0, IR receiver, 100 Mbit/s Ethernet, with Wi-Fi 802.11 a/b/g or a/b/g/n available as an add-on; revised "S" models: 2.4 GHz wireless, 5 × USB 2.0, digital optical audio out, IR receiver, 100 Mbit/s Ethernet, Wi-Fi 802.11 b/g/n, AUX port, HDMI port.) The Xbox 360 is a home video game console developed by Microsoft. As the successor to the original Xbox, it is the second console in the Xbox series. It competed with Sony's PlayStation 3 and Nintendo's Wii as part of the seventh generation of video game consoles. It was officially unveiled on MTV on May 12, 2005, with detailed launch and game information announced later that month at the 2005 E3 expo. The Xbox 360 features an online service, Xbox Live, which was expanded from its previous iteration on the original Xbox and received regular updates during the console's lifetime. Available in free and subscription-based varieties, Xbox Live allows users to: play games online; download games (through Xbox Live Arcade) and game demos; purchase and stream music, television programs, and films through the Xbox Music and Xbox Video portals; and access third-party content services through media streaming applications. In addition to online multimedia features, it allows users to stream media from local PCs. Several peripherals have been released, including wireless controllers, expanded hard drive storage, and the Kinect motion sensing camera. The release of these additional services and peripherals helped the Xbox brand grow from gaming-only to encompassing all multimedia, turning it into a hub for living-room computing entertainment. Launched worldwide across 2005-2006, the Xbox 360 was initially in short supply in many regions, including North America and Europe. The earliest versions of the console suffered from a high failure rate, indicated by the so-called "Red Ring of Death", necessitating an extension of the device's warranty period. Microsoft released two redesigned models of the console: the Xbox 360 S in 2010, and the Xbox 360 E in 2013. As of June 2014, 84 million Xbox 360 consoles had been sold worldwide, making it the sixth-highest-selling video game console in history and the highest-selling console made by an American company. Although not the best-selling console of its generation, the Xbox 360 was deemed by TechRadar to be the most influential through its emphasis on digital media distribution and multiplayer gaming on Xbox Live. The Xbox 360's successor, the Xbox One, was released on November 22, 2013. On April 20, 2016, Microsoft announced that it would end the production of new Xbox 360 hardware, although the company would continue to support the platform. Known during development as Xbox Next, Xenon, Xbox 2, Xbox FS or NextBox, the Xbox 360 was conceived in early 2003. In February 2003, planning for the Xenon software platform began, headed by Microsoft's Vice President J Allard. That month, Microsoft held an event for 400 developers in Bellevue, Washington to recruit support for the system. Also that month, Peter Moore, former president of Sega of America, joined Microsoft. On August 12, 2003, ATI signed on to produce the graphics processing unit for the new console, a deal which was publicly announced two days later. Before the launch of the Xbox 360, several Alpha development kits were spotted using Apple's Power Mac G5 hardware.
This was because the system's PowerPC 970 processor ran the same PowerPC architecture that the Xbox 360 would eventually run on under IBM's Xenon processor. The cores of the Xenon processor were developed using a slightly modified version of the PlayStation 3's Cell Processor PPE architecture. According to David Shippy and Mickie Phipps, the IBM employees were "hiding" their work from Sony and Toshiba, IBM's partners in developing the Cell Processor. Jeff Minter created the music visualization program Neon, which is included with the Xbox 360. The Xbox 360 was released on November 22, 2005, in the United States and Canada; December 2, 2005, in Europe; and December 10, 2005, in Japan. It was later launched in Mexico, Brazil, Chile, Colombia, Hong Kong, Singapore, South Korea, Taiwan, Australia, New Zealand, South Africa, India, and Russia. In its first year on the market, the system launched in 36 countries, more countries than any other console had launched in during a single year. In 2009, IGN named the Xbox 360 the sixth-greatest video game console of all time, out of a field of 25. Although not the best-selling console of the seventh generation, the Xbox 360 was deemed by TechRadar to be the most influential, by emphasizing digital media distribution and online gaming through Xbox Live, and by popularizing game achievement awards. PC Magazine considered the Xbox 360 the prototype for online gaming as it "proved that online gaming communities could thrive in the console space". Five years after the Xbox 360's original debut, the well-received Kinect motion capture camera was released, which set a record as the fastest-selling consumer electronics device in history, and extended the life of the console. Edge ranked the Xbox 360 the second-best console of the 1993-2013 period, stating "It had its own social network, cross-game chat, new indie games every week, and the best version of just about every multiformat game... Killzone is no Halo and nowadays Gran Turismo is no Forza, but it's not about the exclusives -- there's nothing to trump Naughty Dog's PS3 output, after all. Rather, it's about the choices Microsoft made back in the original Xbox's lifetime. The PC-like architecture meant the early EA Sports games ran at 60fps compared to only 30 on PS3, Xbox Live meant every dedicated player had an existing friends list, and Halo meant Microsoft had the killer next-generation exclusive. And when developers demo games on PC now they do it with a 360 pad -- another industry benchmark, and a critical one." The Xbox 360 began production only 69 days before launch, and Microsoft was not able to supply enough systems to meet initial consumer demand in Europe or North America, selling out completely upon release in all regions except in Japan. Forty thousand units were offered for sale on auction site eBay during the initial week of release, 10% of the total supply. By year's end, Microsoft had shipped 1.5 million units, including 900,000 in North America, 500,000 in Europe, and 100,000 in Japan. In May 2008, Microsoft announced that 10 million Xbox 360s had been sold and that it was the "first current generation gaming console" to surpass the 10 million figure in the US. In the US, the Xbox 360 was the leader in current-generation home console sales until June 2008, when it was surpassed by the Wii. The Xbox 360 had sold a total of 870,000 units in Canada as of August 1, 2008.
Between January 2011 and October 2013, the Xbox 360 was the best - selling console in the United States for these 32 consecutive months. In Europe, the Xbox 360 has sold seven million units as of November 20, 2008, according to Microsoft. In the United Kingdom, the Xbox 360 has sold 3.9 million units as of June 27, 2009, according to GfK Chart - Track. While the original Xbox sold poorly in Japan, selling just 2 million units while it was on the market (between 2002 and 2005), the Xbox 360 sold even more poorly, selling only 1.5 million units from 2005 to 2011. Edge magazine reported in August 2011 that initially lackluster and subsequently falling sales in Japan, where Microsoft had been unable to make serious inroads into the dominance of domestic rivals Sony and Nintendo, had led to retailers scaling down and in some cases discontinuing sales of the Xbox 360 completely. The significance of Japan 's poor sales might be overstated in the media in comparison to overall international sales. The Xbox 360 sold much better than its predecessor, and although not the best - selling console of the seventh generation, it is regarded as a success since it strengthened Microsoft as a major force in the console market at the expense of well - established rivals. The inexpensive Nintendo Wii did sell the most console units but eventually saw a collapse of third - party software support in its later years, and it has been viewed by some as a fad since the succeeding Wii U had a poor debut in 2012. The PlayStation 3 struggled for a time due to being too expensive and initially lacking quality games, making it far less dominant than its predecessor, the PlayStation 2, and it took until late in the PlayStation 3 's lifespan for its sales and games to reach parity with the Xbox 360. TechRadar proclaimed that "Xbox 360 passes the baton as the king of the hill -- a position that puts all the more pressure on its successor, Xbox One ''. The Xbox 360 's advantage over its competitors was due to the release of high - profile games from both first party and third party developers. The 2007 Game Critics Awards honored the platform with 38 nominations and 12 wins -- more than any other platform. By March 2008, the Xbox 360 had reached a software attach rate of 7.5 games per console in the US; the rate was 7.0 in Europe, while its competitors were 3.8 (PS3) and 3.5 (Wii), according to Microsoft. At the 2008 Game Developers Conference, Microsoft announced that it expected over 1,000 games available for Xbox 360 by the end of the year. As well as enjoying exclusives such as additions to the Halo franchise and Gears of War, the Xbox 360 has managed to gain a simultaneous release of games that were initially planned to be PS3 exclusives, including Devil May Cry 4, Ace Combat 6, Virtua Fighter 5, Grand Theft Auto IV, Final Fantasy XIII, Tekken 6, Metal Gear Solid: Rising, and L.A. Noire. In addition, Xbox 360 versions of cross-platform games were generally considered superior to their PS3 counterparts in 2006 and 2007, due in part to the difficulties of programming for the PS3. TechRadar deemed the Xbox 360 as the most influential game system through its emphasis of digital media distribution, Xbox Live online gaming service, and game achievement feature. During the console 's lifetime, the Xbox brand has grown from gaming - only to encompassing all multimedia, turning it into a hub for "living - room computing environment ''. 
Microsoft announced the Xbox One, successor to the Xbox 360, at E3 on June 10, 2013. Although the Xbox 360 was succeeded as Microsoft's main console by the Xbox One, support from publishers for the Xbox 360 was expected to continue until at least 2016. On April 20, 2016, Microsoft announced the end of production of new Xbox 360 hardware; the company will continue to provide hardware and software support for the platform, as selected Xbox 360 games can be played on the Xbox One. The main unit of the Xbox 360 itself has a slight double concavity in matte white or black. The official color of the white model is Arctic Chill. It features a port on the top when vertical (left side when horizontal) to which a custom-housed hard disk drive unit can be attached. On the Slim and E models, the hard drive bay is on the bottom when vertical (right side when horizontal) and requires the opening of a concealed door to access it. (This does not void the warranty.) The Xbox 360 Slim/E hard drives are standard 2.5" SATA laptop drives, but have a custom enclosure and firmware so that the Xbox 360 can recognize them. Various hard disk drives have been produced, including options at 20, 60, 120, 250, or 320 GB. Inside, the Xbox 360 uses the triple-core, IBM-designed Xenon as its CPU, with each core capable of simultaneously processing two threads; the CPU can therefore operate on up to six threads at once. Graphics processing is handled by the ATI Xenos, which has 10 MB of eDRAM. Its main memory pool is 512 MB in size. Many accessories are available for the console, including both wired and wireless controllers, faceplates for customization, headsets for chatting, a webcam for video chatting, dance mats and Gamercize for exercise, three sizes of memory units and five sizes of hard drives (20, 60, 120, 250 (initially Japan only, but later also available elsewhere) and 320 GB), among other items, all of which are styled to match the console. Kinect is a "controller-free gaming and entertainment experience" for the Xbox 360. It was first announced on June 1, 2009 at the Electronic Entertainment Expo, under the codename Project Natal. The add-on peripheral enables users to control and interact with the Xbox 360 without a game controller by using gestures, spoken commands and presented objects and images. The Kinect accessory is compatible with all Xbox 360 models, connecting to new models via a custom connector, and to older ones via a USB and mains power adapter. During their CES 2010 keynote speech, Robbie Bach and Microsoft CEO Steve Ballmer went on to say that Kinect would be released during the holiday period (November-January) and would work with every 360 console. Its name and release date of November 4, 2010, were officially announced on June 13 of that year, prior to Microsoft's press conference at E3 2010. At launch, the Xbox 360 was available in two configurations: the "Xbox 360" package (unofficially known as the 20 GB Pro or Premium), priced at US$399 or GB£279.99, and the "Xbox 360 Core", priced at US$299 and GB£209.99. The original shipment of the Xbox 360 version included a cut-down version of the Media Remote as a promotion. The Elite package was launched later at US$479.
The "Xbox 360 Core '' was replaced by the "Xbox 360 Arcade '' in October 2007 and a 60 GB version of the Xbox 360 Pro was released on August 1, 2008. The Pro package was discontinued and marked down to US $249 on August 28, 2009 to be sold until stock ran out, while the Elite was also marked down in price to US $299. Two major hardware revisions of the Xbox 360 have succeeded the original models; the Xbox 360 S (also referred to as the "Slim '') replaced the original "Elite '' and "Arcade '' models in 2010. The S model carries a smaller, streamlined appearance with an angular case, and utilizes a redesigned motherboard designed to alleviate the hardware and overheating issues experienced by prior models. It also includes a proprietary port for use with the Kinect sensor. The Xbox 360 E, a further streamlined variation of the 360 S with a two - tone rectangular case inspired by Xbox One, was released in 2013. In addition to its revised aesthetics, the Xbox 360 E also has one fewer USB port, no AV connector (and thus is HDMI - only), and no longer supports S / PDIF. November 22, 2005 April 29, 2007 August 6, 2007 October 27, 2007 July 13, 2008 August 1, 2008 September 5, 2008 August 28, 2009 June 19, 2010 August 3, 2010 June 10, 2013 April 20, 2016 The Xbox 360 (not Slim and E models) has been subject to a number of technical problems. Since the console 's release in 2005, users have reported concerns over its reliability and failure rate. To aid customers with defective consoles, Microsoft extended the Xbox 360 's manufacturer 's warranty to three years for hardware failure problems that generate a "General Hardware Failure '' error report. A "General Hardware Failure '' is recognized on all models released before the Xbox 360 S by three quadrants of the ring around the power button flashing red. This error is often known as the "Red Ring of Death ''. In April 2009 the warranty was extended to also cover failures related to the E74 error code. The warranty extension is not granted for any other types of failures that do not generate these specific error codes. Since these problems surfaced, Microsoft has attempted to modify the console to improve its reliability. Modifications include a reduction in the number, size, and placement of components, the addition of dabs of epoxy on the corners and edges of the CPU and GPU as glue to prevent movement relative to the board during heat expansion, and a second GPU heatsink to dissipate more heat. With the release of the redesigned Xbox 360 S, the warranty for the newer models does not include the three - year extended coverage for "General Hardware Failures ''. The newer Xbox 360 S and E models indicate system overheating when the console 's power button begins to flash red, unlike previous models where the first and third quadrant of the ring would light up red around the power button if overheating occurred. The system will then warn the user of imminent system shutdown until the system has cooled, whereas a flashing power button that alternates between green and red is an indication of a "General Hardware Failure '' unlike older models where three of the quadrants would light up red. The Xbox 360 launched with 14 games in North America and 13 in Europe. The console 's best - selling game for 2005, Call of Duty 2, sold over a million copies. Five other games sold over a million copies in the console 's first year on the market: Ghost Recon Advanced Warfighter, The Elder Scrolls IV: Oblivion, Dead or Alive 4, Saints Row, and Gears of War. 
Gears of War would become the best - selling game on the console with 3 million copies in 2006, before being surpassed in 2007 by Halo 3 with over 8 million copies. Six games were initially available in Japan, while eagerly anticipated games such as Dead or Alive 4 and Enchanted Arms were released in the weeks following the console 's launch. Games targeted specifically for the region, such as Chromehounds, Ninety - Nine Nights, and Phantasy Star Universe, were also released in the console 's first year. Microsoft also had the support of Japanese developer Mistwalker, founded by Final Fantasy creator Hironobu Sakaguchi. Mistwalker 's first game, Blue Dragon, was released in 2006 and had a limited - edition bundle which sold out quickly with over 10,000 pre-orders. Blue Dragon is one of three Xbox 360 games to surpass 200,000 units in Japan, along with Tales of Vesperia and Star Ocean: The Last Hope. Mistwalker 's second game, Lost Odyssey also sold over 100,000 copies. The 2007 Game Critics Awards honored the Xbox 360 platform with 38 Nominations and 11 Wins. The Xbox 360 's original graphical user interface was the Xbox 360 Dashboard; a tabbed interface that featured five "Blades '' (formerly four blades), and was designed by AKQA and Audiobrain. It could be launched automatically when the console booted without a disc in it, or when the disc tray was ejected, but the user had the option to select what the console does if a game is in the tray on start up, or if inserted when already on. A simplified version of it was also accessible at any time via the Xbox Guide button on the gamepad. This simplified version showed the user 's gamercard, Xbox Live messages and friends list. It also allowed for personal and music settings, in addition to voice or video chats, or returning to the Xbox Dashboard from the game. On November 19, 2008, the Xbox 360 's dashboard was changed from the "Blade '' interface, to a dashboard reminiscent of that present on the Zune and Windows Media Center, known as the "New Xbox Experience '' or NXE. Since the console 's release, Microsoft has released several updates for the Dashboard software. These updates have included adding new features to the console, enhancing Xbox Live functionality and multimedia playback capabilities, adding compatibility for new accessories, and fixing bugs in the software. Such updates are mandatory for users wishing to use Xbox Live, as access to Xbox Live is disabled until the update is performed. At E3 2008, at Microsoft 's Show, Microsoft 's Aaron Greenberg and Marc Whitten announced the new Xbox 360 interface called the "New Xbox Experience '' (NXE). The update was intended to ease console menu navigation. Its GUI uses the Twist UI, previously used in Windows Media Center and the Zune. Its new Xbox Guide retains all Dashboard functionality (including the Marketplace browser and disk ejection) and the original "Blade '' interface (although the color scheme has been changed to match that of the NXE Dashboard). The NXE also provides many new features. Users can now install games from disc to the hard drive to play them with reduced load time and less disc drive noise, but each game 's disc must remain in the system in order to run. A new, built - in Community system allows the creation of digitized Avatars that can be used for multiple activities, such as sharing photos or playing Arcade games like 1 vs. 100. The update was released on November 19, 2008. 
While previous system updates have been stored on internal memory, the NXE update was the first to require a storage device -- at least a 128 MB memory card or a hard drive. Microsoft released a further update to the Xbox 360 Dashboard starting on December 6, 2011. It included a completely new user interface which utilizes Microsoft 's Metro design language, and added new features such as cloud storage for game saves and profiles, live television, Bing voice search, access to YouTube videos and better support for Kinect voice commands. The Xbox 360 supports videos in Windows Media Video (WMV) format (including high - definition and PlaysForSure videos), as well as H. 264 and MPEG - 4 media. The December 2007 dashboard update added support for the playback of MPEG - 4 ASP format videos. The console can also display pictures and perform slideshows of photo collections with various transition effects, and supports audio playback, with music player controls accessible through the Xbox 360 Guide button. Users may play back their own music while playing games or using the dashboard, and can play music with an interactive visual synthesizer. Music, photos and videos can be played from standard USB mass storage devices, Xbox 360 proprietary storage devices (such as memory cards or Xbox 360 hard drives), and servers or computers with Windows Media Center or Windows XP with Service pack 2 or higher within the local - area network in streaming mode. As the Xbox 360 uses a modified version of the UPnP AV protocol, some alternative UPnP servers such as uShare (part of the GeeXboX project) and MythTV can also stream media to the Xbox 360, allowing for similar functionality from non-Windows servers. This is possible with video files up to HD - resolution and with several codecs (MPEG - 2, MPEG - 4, WMV) and container formats (WMV, MOV, TS). As of October 27, 2009, UK and Ireland users are also able to access live and on - demand streams of Sky television programming. At the 2007, 2008, and 2009 Consumer Electronics Shows, Microsoft had announced that IPTV services would soon be made available to use through the Xbox 360. In 2007, Microsoft chairman Bill Gates stated that IPTV on Xbox 360 was expected to be available to consumers by the holiday season, using the Microsoft TV IPTV Edition platform. In 2008, Gates and president of Entertainment & Devices Robbie Bach announced a partnership with BT in the United Kingdom, in which the BT Vision advanced TV service, using the newer Microsoft Mediaroom IPTV platform, would be accessible via Xbox 360, planned for the middle of the year. BT Vision 's DVR - based features would not be available on Xbox 360 due to limited hard drive capacity. In 2010, while announcing version 2.0 of Microsoft Mediaroom, Microsoft CEO Steve Ballmer mentioned that AT&T 's U-verse IPTV service would enable Xbox 360s to be used as set - top boxes later in the year. As of January 2010, IPTV on Xbox 360 has yet to be deployed beyond limited trials. In 2012, Microsoft released the Live Event Player, allowing for events such as video game shows, beauty pageants, award shows, concerts, news and sporting events to be streamed on the console via Xbox Live. The first live events streamed on Live were the 2012 Revolver Golden Gods, Microsoft 's E3 2012 media briefing and the Miss Teen USA 2012 beauty pageant. XNA Community is a feature whereby Xbox 360 owners can receive community - created games, made with Microsoft XNA Game Studio, from the XNA Creators Club. 
The games are written, published, and distributed through a community-managed portal. XNA Community provides a channel for digital videogame delivery over Xbox Live that can be free of royalties, publishers and licenses. XNA game sales, however, did not meet original expectations, though Xbox Live Indie Games (XBLIG) has had some "hits". When the Xbox 360 was released, Microsoft's online gaming service Xbox Live was shut down for 24 hours and underwent a major upgrade, adding a basic non-subscription service called Xbox Live Silver (later renamed Xbox Live Free) to its already established premium subscription-based service (which was renamed Gold). Xbox Live Free is included with all SKUs of the console. It allows users to create a user profile, join message boards, access Microsoft's Xbox Live Arcade and Marketplace, and talk to other members. A Live Free account does not generally support multiplayer gaming; however, some games that already have rather limited online functions (such as Viva Piñata) or games that feature their own subscription service (e.g. EA Sports games) can be played with a Free account. Xbox Live also supports voice and video chat, the latter being possible with the Xbox Live Vision camera. Xbox Live Gold includes the same features as Free and adds integrated online game playing capabilities outside of third-party subscriptions. Microsoft has allowed previous Xbox Live subscribers to maintain their profile information, friends list, and games history when they make the transition to Xbox Live Gold. To transfer an Xbox Live account to the new system, users need to link a Windows Live ID to their gamertag on Xbox.com. When users add an Xbox Live enabled profile to their console, they are required to provide the console with their Passport account information and the last four digits of their credit card number, which is used for verification purposes and billing. An Xbox Live Gold account has an annual cost of US$59.99, C$59.99, NZ$90.00, GB£39.99, or €59.99. As of January 5, 2011, Xbox Live had over 30 million subscribers. The Xbox Live Marketplace is a virtual market designed for the console that allows Xbox Live users to download purchased or promotional content. The service offers movie and game trailers, game demos, Xbox Live Arcade games and Xbox 360 Dashboard themes, as well as add-on game content (items, costumes, levels, etc.). These features are available to both Free and Gold members on Xbox Live. A hard drive or memory unit is required to store products purchased from the Xbox Live Marketplace. In order to download priced content, users are required to purchase Microsoft Points for use as scrip, though some products (such as trailers and demos) are free to download. Microsoft Points can be obtained through prepaid cards in 1,600- and 4,000-point denominations. Microsoft Points can also be purchased through Xbox Live with a credit card in 500-, 1,000-, 2,000- and 5,000-point denominations. Users are able to view items available to download on the service through a PC via the Xbox Live Marketplace website. An estimated seventy percent of Xbox Live users have downloaded items from the Marketplace. Xbox Live Arcade is an online service operated by Microsoft that is used to distribute downloadable video games to Xbox and Xbox 360 owners. In addition to classic arcade games such as Ms. Pac-Man, the service offers some new original games like Assault Heroes.
The Xbox Live Arcade also features games from other consoles, such as the PlayStation game Castlevania: Symphony of the Night, and PC games such as Zuma. The service was first launched on November 3, 2004, using a DVD to load, and offered games for about US$5 to $15. Items are purchased using Microsoft Points, a proprietary currency used to reduce credit card transaction charges. On November 22, 2005, Xbox Live Arcade was re-launched with the release of the Xbox 360, in which it was now integrated with the Xbox 360's dashboard. The games are generally aimed toward more casual gamers; examples of the more popular games are Geometry Wars, Street Fighter II' Hyper Fighting, and Uno. On March 24, 2010, Microsoft introduced the Game Room to Xbox Live. Game Room is a gaming service for Xbox 360 and Microsoft Windows that lets players compete in classic arcade and console games in a virtual arcade. On November 6, 2006, Microsoft announced the Xbox Video Marketplace, an exclusive video store accessible through the console. Launched in the United States on November 22, 2006, the first anniversary of the Xbox 360's launch, the service allows users in the United States to download high-definition and standard-definition television shows and movies onto an Xbox 360 console for viewing. With the exception of short clips, content is not currently available for streaming, and must be downloaded. Movies are also available for rental. They expire 14 days after download or at the end of the first 24 hours after the movie has begun playing, whichever comes first. Television episodes can be purchased to own, and are transferable to an unlimited number of consoles. Downloaded files use 5.1 surround audio and are encoded using VC-1 for video at 720p, with a bitrate of 6.8 Mbit/s. Television content is offered from MTV, VH1, Comedy Central, Turner Broadcasting, and CBS, and movie content from Warner Bros., Paramount, and Disney, along with other publishers. The Spring 2007 update added support for further video codecs. As a late addition to the December Xbox 360 update, 25 movies were added to the European Xbox 360 Video Marketplace on December 11, 2007, costing 250 Microsoft Points for the SD version of a movie and 380 Points for the HD version. Xbox Live members in Canada also gained access to the Xbox Live Marketplace on December 11, 2007, with around 30 movies available to download for the same amounts of Microsoft Points. On May 26, 2009, Microsoft announced the future release of the Zune HD (in the fall of 2009), the next addition to the Zune product range. This had an impact on the Xbox Live Video Store, as it was also announced that the Zune Video Marketplace and the Xbox Live Video Store would be merged to form the Zune Marketplace, which would initially arrive on Xbox Live in seven countries: the United Kingdom, the United States, France, Italy, Germany, Ireland and Spain. Further details were released at the Microsoft press conference at E3 2009. On October 16, 2012, Xbox Video and Xbox Music were released, replacing the Zune Marketplace. Xbox Video is a digital video service that offers full HD movies and TV series for purchase or rental on Xbox 360, Windows 8, Windows RT PCs and tablets, and Windows Phones. On August 18, 2015, Microsoft rolled out an update renaming it Movies and TV, in line with the Windows 10 app. Xbox Music provides 30 million music tracks available for purchase or access through subscription.
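To give a sense of scale for the 6.8 Mbit/s video bitrate mentioned above, the following sketch estimates the size of a feature-length download. The two-hour runtime is an assumed example, and the calculation deliberately ignores the separate 5.1 audio track and container overhead, so actual files would be somewhat larger.

```python
# Rough size estimate for a Video Marketplace download at the 6.8 Mbit/s video
# bitrate quoted above. The 120-minute runtime is an assumed example; the 5.1
# audio track and container overhead are ignored, so real files would be larger.

video_bitrate_mbit_s = 6.8
runtime_minutes = 120

size_gb = video_bitrate_mbit_s * runtime_minutes * 60 / 8 / 1000   # Mbit -> MB -> GB
print(f"~{size_gb:.1f} GB of video data for a {runtime_minutes}-minute film")
# Roughly 6 GB per film helps explain why the larger 60-320 GB hard drive
# options mattered once the video store launched.
```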
It was announced at the Electronic Entertainment Expo 2012 and integrates with Windows 8 and Windows Phone as well. In August 2015 Microsoft rolled out an update renaming it Groove Music, in line with the Windows 10 app. Xbox SmartGlass is one of the newer features that allows for integration between the Xbox 360 console and mobile devices such as tablets and smartphones. An app is available on Android, Windows Phone 8 and iOS. Users of the feature can view additional content to accompany the game they are playing, or the TV shows and movies they are watching. They can also use their mobile device as a remote to control the Xbox 360 console. PartnerNet, the developers - only alternative Xbox Live network used by developers to beta test game content downloads and games developed for Xbox Live Arcade, runs on Xbox 360 debug kits, which are used both by developers and by the gaming press. In a podcast released on February 12, 2007, a developer breached the PartnerNet non-disclosure agreement (NDA) by commenting that he had found a playable version of Alien Hominid and an unplayable version of Ikaruga on PartnerNet. A few video game journalists, misconstruing the breach of the NDA as an invalidation of the NDA, immediately began reporting on other games being tested via PartnerNet, including a remake of Jetpac. (Alien Hominid for the Xbox 360 was released on February 28 of that year, and Ikaruga was released over a year later on April 9, 2008. Jetpac was released for the Xbox 360 on March 28, 2007 as Jetpac Refuelled.) There have also been numerous video and screenshot leaks of game footage on PartnerNet, as well as a complete version of Sonic the Hedgehog 4: Episode I, which caused the whole PartnerNet service to be shut down overnight on April 3, 2010. In the following days, Microsoft reminded developers and journalists that they were in breach of the NDA by sharing information about PartnerNet content and asked websites to remove lists of games in development that were discovered on the service. Sega used feedback from fans about the leaked version of Sonic the Hedgehog 4: Episode I to refine it before it was eventually released. Additionally, a pair of hackers played their modded Halo 3 games on PartnerNet in addition to using PartnerNet to scoop up unreleased and untested software. The duo passed their hacked Halo pics to their friends before they were eventually caught by Bungie engineers, who left a message for the hackers on PartnerNet which read "Winners Do n't Break Into PartnerNet ''. Other games that were leaked in the PartnerNet fiasco include Shenmue and Shenmue 2.
how many varieties of palm trees in florida
Arecaceae - wikipedia The Arecaceae are a botanical family of perennial climbers, shrubs, acaules and trees commonly known as palm trees (owing to historical usage, the family is alternatively called Palmae). They are flowering plants, a family in the monocot order Arecales. Currently 181 genera with around 2600 species are known, most of them restricted to tropical and subtropical climates. Most palms are distinguished by their large, compound, evergreen leaves, known as fronds, arranged at the top of an unbranched stem. However, palms exhibit an enormous diversity in physical characteristics and inhabit nearly every type of habitat within their range, from rainforests to deserts. Palms are among the best known and most extensively cultivated plant families. They have been important to humans throughout much of history. Many common products and foods are derived from palms, and palms are also widely used in landscaping, making them one of the most economically important plants. In many historical cultures, palms were symbols for such ideas as victory, peace, and fertility. For inhabitants of cooler climates today, palms symbolize the tropics and vacations. Whether as shrubs, trees, or vines, palms have two methods of growth: solitary or clustered. The common representation is that of a solitary shoot ending in a crown of leaves. This monopodial character may be exhibited by prostrate, trunkless, and trunk - forming members. Some common palms restricted to solitary growth include Washingtonia and Roystonea. Palms may instead grow in sparse though dense clusters. The trunk develops an axillary bud at a leaf node, usually near the base, from which a new shoot emerges. The new shoot, in turn, produces an axillary bud and a clustering habit results. Exclusively sympodial genera include many of the rattans, Guihaia, and Rhapis. Several palm genera have both solitary and clustering members. Palms which are usually solitary may grow in clusters and vice versa. These aberrations suggest the habit operates on a single gene. Palms have large, evergreen leaves that are either palmately (' fan - leaved ') or pinnately (' feather - leaved ') compound and spirally arranged at the top of the stem. The leaves have a tubular sheath at the base that usually splits open on one side at maturity. The inflorescence is a spadix or spike surrounded by one or more bracts or spathes that become woody at maturity. The flowers are generally small and white, radially symmetric, and can be either uni - or bisexual. The sepals and petals usually number three each, and may be distinct or joined at the base. The stamens generally number six, with filaments that may be separate, attached to each other, or attached to the pistil at the base. The fruit is usually a single - seeded drupe (sometimes berry - like) but some genera (e.g. Salacca) may contain two or more seeds in each fruit. Like all monocots, palms do not have the ability to increase the width of a stem (secondary growth) via the same kind of vascular cambium found in non-monocot woody plants. This explains the cylindrical shape of the trunk (almost constant diameter) that is often seen in palms, unlike in true trees. However, many palms, like some other monocots, do have secondary growth, although because it does not arise from a single vascular cambium producing xylem inwards and phloem outwards, it is often called "anomalous secondary growth ''. 
The Arecaceae are notable among monocots for their height and for the size of their seeds, leaves, and inflorescences. Ceroxylon quindiuense, Colombia 's national tree, is the tallest monocot in the world, reaching up to 60 m tall. The coco de mer (Lodoicea maldivica) has the largest seeds of any plant, 40 -- 50 cm in diameter and weighing 15 -- 30 kg each. Raffia palms (Raphia spp.) have the largest leaves of any plant, up to 25 m long and 3 m wide. The Corypha species have the largest inflorescence of any plant, up to 7.5 m tall and containing millions of small flowers. Calamus stems can reach 200 m in length. Most palms are native to tropical and subtropical climates. Palms thrive in moist and hot climates but can be found in a variety of different habitats. Their diversity is highest in wet, lowland forests. South America, the Caribbean, and areas of the south Pacific and southern Asia are regions of concentration. Colombia may have the highest number of palm species in one country. Some palms are also native to desert areas such as the Arabian peninsula and parts of northwestern Mexico. Only about 130 palm species naturally grow entirely beyond the tropics, mostly in humid lowland subtropical climates, in highlands in southern Asia, and along the rim lands of the Mediterranean Sea. The northernmost native palm is Chamaerops humilis, which reaches 44 ° N latitude along the coast of southern France. In the southern hemisphere, the southernmost palm is Rhopalostylis sapida, which reaches 44 ° S on the Chatham Islands, where an oceanic climate prevails. Cultivation of palms is possible north of subtropical climates, and some higher - latitude locales such as Ireland, Scotland, England, and the Pacific Northwest feature a few palms in protected locations. Palms inhabit a variety of ecosystems. More than two - thirds of palm species live in humid moist forests, where some species grow tall enough to form part of the canopy and shorter ones form part of the understory. Some species form pure stands in areas with poor drainage or regular flooding, including Raphia hookeri, which is common in coastal freshwater swamps in West Africa. Other palms live in tropical mountain habitats above 1000 m, such as those in the genus Ceroxylon native to the Andes. Palms may also live in grasslands and scrublands, usually associated with a water source, and in desert oases such as the date palm. A few palms are adapted to extremely basic lime soils, while others are similarly adapted to extreme potassium deficiency and toxicity of heavy metals in serpentine soils. Palms are a monophyletic group of plants, meaning the group consists of a common ancestor and all its descendants. Extensive taxonomic research on palms began with botanist H.E. Moore, who organized palms into 15 major groups based mostly on general morphological characteristics. The following classification, proposed by N.W. Uhl and J. Dransfield in 1987, is a revision of Moore 's classification that organizes palms into six subfamilies. A few general traits of each subfamily are listed below. The Coryphoideae are the most diverse subfamily, and are a paraphyletic group, meaning all members of the group share a common ancestor, but the group does not include all the ancestor 's descendants. Most palms in this subfamily have palmately lobed leaves and solitary flowers with three, or sometimes four, carpels. The fruit normally develops from only one carpel. Subfamily Calamoideae includes the climbing palms, such as rattans.
The leaves are usually pinnate; derived characters (synapomorphies) include spines on various organs, organs specialized for climbing, an extension of the main stem of the leaf - bearing reflexed spines, and overlapping scales covering the fruit and ovary. Subfamily Nypoideae contains only one species, Nypa fruticans, which has large, pinnate leaves. The fruit is unusual in that it floats, and the stem is dichotomously branched, also unusual in palms. Subfamily Ceroxyloideae has small to medium - sized flowers, spirally arranged, with a gynoecium of three joined carpels. The Arecoideae are the largest subfamily, with six diverse tribes (Areceae, Caryoteae, Cocoeae, Geonomeae, Iriarteeae, and Podococceae) containing over 100 genera. All tribes have pinnate or bipinnate leaves and flowers arranged in groups of three, with a central pistillate and two staminate flowers. The Phytelephantoideae are a monoecious subfamily. Members of this group have distinct monopodial flower clusters. Other distinct features include a gynoecium with five to 10 joined carpels, and flowers with more than three parts per whorl. Fruits are multiple - seeded and have multiple parts. Currently, few extensive phylogenetic studies of the Arecaceae exist. In 1997, Baker et al. explored subfamily and tribe relationships using chloroplast DNA from 60 genera from all subfamilies and tribes. The results strongly showed the Calamoideae are monophyletic, and Ceroxyloideae and Coryphoideae are paraphyletic. The relationships of Arecoideae are uncertain, but they are possibly related to the Ceroxyloideae and Phytelephantoideae. Studies have suggested the lack of a fully resolved hypothesis for the relationships within the family is due to a variety of factors, including difficulties in selecting appropriate outgroups, homoplasy in morphological character states, slow rates of molecular evolution important for the use of standard DNA markers, and character polarization. However, hybridization has been observed among Orbignya and Phoenix species, and using chloroplast DNA in cladistic studies may produce inaccurate results due to maternal inheritance of the chloroplast DNA. Chemical and molecular data from non-organelle DNA, for example, could be more effective for studying palm phylogeny. See list of Arecaceae genera arranged by taxonomic groups or by alphabetical order for a complete listing of genera. The Arecaceae are the first modern family of monocots appearing in the fossil record around 80 million years ago (Mya), during the late Cretaceous period. The first modern species, such as Nypa fruticans and Acrocomia aculeata, appeared 94 Mya, confirmed by fossil Nypa pollen dated to 94 Mya. Palms appear to have undergone an early period of adaptive radiation. By 60 Mya, many of the modern, specialized genera of palms appeared and became widespread and common, much more widespread than their range today. Because palms separated from the monocots earlier than other families, they developed more intrafamilial specialization and diversity. By tracing back these diverse characteristics of palms to the basic structures of monocots, palms may be valuable in studying monocot evolution. Several species of palms have been identified from flowers preserved in amber, including Palaeoraphe dominicana and Roystonea palaea. Evidence can also be found in samples of petrified palmwood. 
Human use of palms is as old as or older than human civilization itself, starting with the cultivation of the date palm by Mesopotamians and other Middle Eastern peoples 5000 years or more ago. Date wood, pits for storing dates, and other remains of the date palm have been found in Mesopotamian sites. The date palm had a tremendous effect on the history of the Middle East. W.H. Barreveld wrote: One could go as far as to say that, had the date palm not existed, the expansion of the human race into the hot and barren parts of the "old '' world would have been much more restricted. The date palm not only provided a concentrated energy food, which could be easily stored and carried along on long journeys across the deserts, it also created a more amenable habitat for the people to live in by providing shade and protection from the desert winds. In addition, the date palm also yielded a variety of products for use in agricultural production and for domestic utensils, and practically all parts of the palm had a useful purpose. An indication of the importance of palms in ancient times is that they are mentioned more than 30 times in the Bible, and at least 22 times in the Quran. Arecaceae have great economic importance, including coconut products, oils, dates, palm syrup, ivory nuts, carnauba wax, rattan cane, raffia, and palm wood. Along with the dates mentioned above, members of the palm family with human uses are numerous. Like many other plants, palms have been threatened by human intervention and exploitation. The greatest risk to palms is destruction of habitat, especially in the tropical forests, due to urbanization, wood - chipping, mining, and conversion to farmland. Palms rarely reproduce after such great changes in the habitat, and those with small habitat ranges are most vulnerable to them. The harvesting of heart of palm, a delicacy in salads, also poses a threat because it is derived from the palm 's apical meristem, a vital part of the palm that can not be regrown (except in domesticated varieties, e.g. of peach palm). The use of rattan palms in furniture has caused a major population decrease in these species that has negatively affected local and international markets, as well as biodiversity in the area. The sale of seeds to nurseries and collectors is another threat, as the seeds of popular palms are sometimes harvested directly from the wild. At least 100 palm species are currently endangered, and nine species have reportedly become extinct recently. However, several factors make palm conservation more difficult. Palms live in almost every type of warm habitat and have tremendous morphological diversity. Most palm seeds lose viability quickly, and they can not be preserved in low temperatures because the cold kills the embryo. Using botanical gardens for conservation also presents problems, since they can house only a few plants of any species and can not truly imitate the natural setting. Also, the risk of cross-pollination can lead to hybrid species. The Palm Specialist Group of the World Conservation Union (IUCN) began in 1984, and has performed a series of three studies to find basic information on the status of palms in the wild, use of wild palms, and palms under cultivation. Two projects on palm conservation and use supported by the World Wildlife Fund took place from 1985 to 1990 and 1986 -- 1991, in the American tropics and southeast Asia, respectively. Both studies produced copious new data and publications on palms.
Preparation of a global action plan for palm conservation began in 1991, supported by the IUCN, and was published in 1996. The rarest palm known is Hyophorbe amaricaulis. The only living individual remains at the Botanic Gardens of Curepipe in Mauritius. A variety of pests attack many species of palm trees. The palm branch was a symbol of triumph and victory in pre-Christian times. The Romans rewarded champions of the games and celebrated military successes with palm branches. Early Christians used the palm branch to symbolize the victory of the faithful over enemies of the soul, as in the Palm Sunday festival celebrating the triumphal entry of Jesus into Jerusalem. In Judaism, the palm represents peace and plenty, and is one of the Four Species of Sukkot; the palm may also symbolize the Tree of Life in Kabbalah. Panaiveriyamman was an ancient Tamil tree deity related to fertility. Named after panai, the Tamil name for the Palmyra palm, she was also known as Taalavaasini, a name that further related her to all types of palms. Today, the palm, especially the coconut palm, remains a symbol of the tropical island paradise. Palms appear on the flags and seals of several places where they are native, including those of Haiti, Guam, Saudi Arabia, Florida, and South Carolina. A number of species commonly called palms are not true palms.
who made the following statement in wildness is the preservation of the world
Wildness - wikipedia Wildness, in its literal sense, is the quality of being wild or untamed. Beyond this, it has been defined as a quality produced in nature, as that which emerges from a forest, and as a level of achievement in nature. More recently, it has been defined as "a quality of interactive processing between organism and nature where the realities of base natures are met, allowing the construction of durable systems ''. A wilderness is a place where wildness occurs. People have explored the contrast of wildness versus tameness throughout recorded history. The earliest great work of literature, the Epic of Gilgamesh, tells a story of a wild man Enkidu in opposition to Gilgamesh who personifies civilization. In the story, Enkidu is defeated by Gilgamesh and becomes civilized. Cultures vary in their perception of the separation of humans from nature, with western civilization drawing a sharp contrast between the two while the traditions of many indigenous peoples have always seen humans as part of nature. The perception of man 's place in nature and civilization has also changed over time. In western civilization, for example, Darwinism and environmentalism have renewed the perception of humans as part of nature, rather than separate from it. Wildness is often mentioned in the writings of naturalists, such as John Muir and David Brower, where it is admired for its freshness and otherness. Henry David Thoreau wrote the famous phrase, "In wildness is the preservation of the world. '' Some artists and photographers such as Eliot Porter explore wildness in the themes of their works. The benefits of reconnecting with nature by seeing the achievements of wildness is an area being investigated by ecopsychology. Attempts to identify the characteristics of wildness are varied. One consideration sees wildness as that part of nature which is not controllable by humans. Nature retains a measure of autonomy, or wildness, apart from human constructions (Evanoff, 2005). Another version of this theme is that wildness produces things that are natural, while humans produce things that are artificial (man - made). Ambiguities about the distinction between the natural and the artificial animate much of art, literature and philosophy. There is the perception that naturally produced items have a greater elegance over artificial things. Modern zoos seek to improve the health and vigour of animals by simulating natural settings, in a move away from stark man - made structures. Another view of wildness is that it is a social construct (Callicott 1994), and that humans can not be considered innately ' unnatural '. As wildness is claimed to be a quality that builds from animals and ecosystems, it often fails to be considered within reductionist theories for nature. Meanwhile, an ecological perspective sees wildness as "(the degree of) subjection to natural selection pressures '', many of which emerge independently from the biosphere. Thus modern civilization - contrasted with all humanity -- can be seen as an ' unnatural ' force (lacking wildness) as it strongly insulates its population from many natural selection mechanisms, including interspecific competition such as predation and disease, as well as some intraspecific phenomena. The importance of maintaining wildness in animals is recognized in the management of Wilderness areas. Feeding wild animals in national parks for example, is usually discouraged because the animals may lose the skills they need to fend for themselves. 
Human interventions may also upset continued natural selection pressures upon the population, producing a version of domestication within wildlife (Peterson et al. 2005). Tameness implies a reduction in wildness, where animals become more easily handled by humans. Some animals are easier to tame than others, and are amenable to domestication. In a clinical setting, wildness has been used as a scale to rate the ease with which various strains of laboratory mice can be captured and handled (Wahlsten et al. 2003). In this sense, "wildness '' may be interpreted as "tendency to respond with anxiety to handling ''. There is no necessary connection between this factor and the state of wildness per se, given that some animals in the wild may be handled with little or no anxiety. However, this factor does clearly indicate an animal 's resistance to being handled. A classification system can be set out showing the spectrum from wild to domesticated animal states. Such a classification, however, does not account for several complicating factors: genetically modified organisms, feral populations, and hybridization. Many species that are farmed or ranched are now being genetically modified. This creates a unique category of them because it alters the organisms as a group but in ways unlike traditional domestication. Feral organisms are members of a population that was once raised under human control, but is now living and multiplying outside of human control. Examples include mustangs. Hybrids can be wild, domesticated, or both: a liger is a hybrid of two wild animals, a mule is a hybrid of two domesticated animals, and a beefalo is a cross between a wild and a domestic animal. The basic idea of ecopsychology is that while the human mind is shaped by the modern social world, it can be readily inspired and comforted by the wider natural world, because that is the arena in which it originally evolved. Mental health or unhealth can not be understood in the narrow context of only intrapsychic phenomena or social relations. One also has to include the relationship of humans to other species and ecosystems. These relations have a deep evolutionary history, reach a natural affinity within the structure of the human brain, and have deep psychic significance in the present time, in spite of urbanization. Humans are dependent on healthy nature not only for their physical sustenance, but for mental health, too. The concept of a state of nature was first posited by the 17th century English philosopher Thomas Hobbes in Leviathan. Hobbes described the concept in the Latin phrase bellum omnium contra omnes, meaning "the war of all against all. '' In this state any person has a natural right to do anything to preserve their own liberty or safety. Famously, he believed that such a condition would lead to a "war of every man against every man '' and make life "solitary, poor, nasty, brutish, and short. '' Hobbes 's view was challenged in the eighteenth century by Jean - Jacques Rousseau, who claimed that Hobbes was taking socialized persons and simply imagining them living outside of the society they were raised in. He affirmed instead that people were born neither good nor bad; men knew neither vice nor virtue since they had almost no dealings with each other. Their bad habits are the products of civilization, specifically social hierarchies, property, and markets.
Another criticism, put forth by Karl Marx, rests on his concept of species - being, or the unique potential of humans for dynamic, creative, and cooperative relations with each other. For Marx and others in his line of critical theory, alienated and abstracted social relations prevent the fulfillment of this potential (see anomie). David Hume 's view brings together and challenges the theories of Rousseau and Hobbes. He posits that in the natural state we are born wicked and evil because of, for instance, the cry of the baby that demands attention. Like Rousseau, he believes that society shapes us, but that we are born evil and it is up to society to shape us into who we become. Thoreau made many statements on wildness: In Wildness is the preservation of the World. -- "Walking '' I wish to speak a word for Nature, for absolute Freedom and Wildness, as contrasted with a Freedom and Culture merely civil, -- to regard man as an inhabitant, or a part and parcel of Nature, rather than a member of society. -- "Walking '' I long for wildness, a nature which I can not put my foot through, woods where the wood thrush forever sings, where the hours are early morning ones, and there is dew on the grass, and the day is forever unproved, where I might have a fertile unknown for a soil about me. -- Journal, 22 June 1853 As I came home through the woods with my string of fish, trailing my pole, it being now quite dark, I caught a glimpse of a woodchuck stealing across my path, and felt a strange thrill of savage delight, and was strongly tempted to seize and devour him raw; not that I was hungry then, except for that wildness which he represented. -- Walden What we call wildness is a civilization other than our own. -- Journal, 16 February 1859 We need the tonic of wildness -- to wade sometimes in marshes where the bittern and the meadow - hen lurk, and hear the booming of the snipe; to smell the whispering sedge where only some wilder and more solitary fowl builds her nest, and the mink crawls with its belly close to the ground. -- Walden It is in vain to dream of a wildness distant from ourselves. There is none such. -- Journal, 30 August 1856 The most alive is the wildest. -- "Walking '' Whatever has not come under the sway of man is wild. In this sense original and independent men are wild -- not tamed and broken by society. -- Journal, 3 September 1851
a child with secure attachment is one who
Secure attachment - wikipedia Secure attachment is characterized by children who show some distress when their caregiver leaves but are able to compose themselves knowing that their caregiver will return. Children with secure attachment feel protected by their caregivers, and they know that they can depend on them to return. John Bowlby and Mary Ainsworth developed a theory known as attachment theory after inadvertently studying children who were patients in a hospital at which they were working. Attachment theory explains how the parent - child relationship emerges and influences subsequent behaviors and relationships. Stemming from this theory, there are four main types of attachment: secure attachment, ambivalent attachment, avoidant attachment and disoriented attachment. Ambivalent attachment is defined by children who become very distressed when their caregiver leaves and are not able to soothe or compose themselves. These children can not depend on their caregiver (s) to be there for them. This is a relatively infrequent case, with only a small percentage of children in the United States affected. Avoidant attachment is represented by children who avoid their caregiver, showing no distress when the caregiver leaves. These children react to a stranger in much the same way as they react to their caregiver. This attachment is often associated with abusive situations. Children who are reprimanded for going to their caregiver will stop seeking help in the future. Disoriented attachment is defined by children who have no consistent way to manage their separation from and reunion with the attachment figure. Sometimes these children appear to be clinically depressed. These children are often present in studies of high - risk samples of severely maltreated babies, but they also appear in other samples. Children who have a secure attachment to their primary caregiver will grow to have higher self - esteem as well as better self - reliance. Additionally, these children tend to be more independent and have lower reported instances of anxiety and depression. These children are also able to form better social relationships. Children who are securely attached typically are visibly upset as their caregivers leave, but they are happy upon their return. These children seek comfort from their parent or caregiver when frightened. When their parent or primary caregiver is not available, these children can be comforted to a degree by others, but they prefer their familiar parent or caregiver. Likewise, when parents with secure attachments reach out to their children, the children welcome the connection. Playing with children is more common when parents and children have a secure attachment. These parents react more quickly to their children 's needs and are typically more responsive to a child they are securely attached to than to one with an insecure attachment. Attachment carries on throughout a child 's development. Studies indicate that secure attachments with primary caregivers lead to more mature and less aggressive children than those with avoidant or ambivalent attachment styles. The relationship type infants establish with their primary caregiver can predict the course of their relationships and connections throughout their lives. Those who are securely attached have high self - esteem, seek out social connection and support and are able to share their feelings with other people. They also tend to have long - term, trusting relationships.
Secure attachment has been shown to act as a buffer to determinants of health among preschoolers, including stress and poverty. One study supports that women with a secure attachment style had more positive feelings with regard to their adult relationships than women with insecure attachment styles. Within an adult romantic relationship, secure attachment can mean both people engage in close, bodily contact, disclose information with one another, share discoveries with each other and feel safe when the other is nearby. The Strange Situation was an experimental procedure developed by Ainsworth to study the variety of attachment forms between one - to two - year - olds and their mothers. Mothers at the time were their primary caregivers. The sample was made up of 100 middle class American families. There was a room set up with one - way glass allowing the researcher to observe the interaction. Inside the room, there were some toys and a confederate, fulfilling the role of stranger. The Strange Situation had eight episodes lasting three minutes each. The behavior of the infant was observed during each phase. The mother, baby and experimenter were all together initially. This phase lasted less than one minute. Then the mother and baby were alone in the room. A stranger, confederate, joined the mother and infant. After a set time had passed, the mother would leave the room, leaving her child with the stranger. The children with a secure attachment to their mother would cry for a few minutes but were able to compose themselves and play with the toys. Once the mother returned, the children with secure attachments greeted them and returned to play. Sometimes, they would show their mothers the toys with which they had played. As the mother returned, the stranger left. Then the mother left and the infant was left alone. The stranger returned. Lastly, the mother returned and the stranger left. This strange situation became the basis of the attachment theory. J.R. Harris is one of the main critics of attachment theory. She suggests that people assume that honest and respectful parents will have honest and respectful children, et cetera. However, this may not be the case. Harris argues that children 's peers have more influence on one 's personality than their parents. The common example used is a child with immigrant parents. The children are able to continue to speak their parent 's original language whilst at home, but the children can also learn the new language and speak it without an accent, while the parents ' accent remains. Harris claims that children learn these things from their peers in an attempt to fit in with others. In the nature versus nurture debate within secure attachment, Harris takes a nature stance. She supports herself by stating that identical twins separated at birth showed more similarities in their hobbies and interests than twins raised in the same household. Aside from the nature argument, there are three additional criticisms. The attachment is assessed during momentary separations. Since these brief situations can be stressful, this is a limitation in the theory. A better demonstration of the child 's reaction might have come from a situation in which the mother left, but the child did not experience excessive stress. Another limitation with the attachment model is the assumption that the mother is the primary attachment figure. Attachment can be expressed differently with each figure. 
For example, children may cry when one figure leaves while they might have trouble sleeping when another leaves. Additionally, physiological changes can occur during this situation, and they were not accounted for.
where are can am side by sides made
Can - Am off - road - wikipedia Can - Am ATVs and side - by - side vehicles are manufactured by BRP (Bombardier Recreational Products), a Canadian company once part of Bombardier Inc. The company was founded in 1942 as L'Auto - Neige Bombardier Limitée (Bombardier Snow Car Limited) by Joseph - Armand Bombardier in Valcourt, Quebec, Canada. BRP owns manufacturing facilities in Canada, the United States, Mexico, Finland, and Austria. BRP products, including Can - Am all - terrain vehicles (ATV) and Side - by - Side (SxS, UTV, SSV) vehicles, are distributed in over 100 countries by more than 4,200 dealers and distributors. BRP also employs more than 7,100 people around the globe. Can - Am Off - Road offers a full line of ATV and Side - by - Side vehicles that are designed for riders of all skill levels and age groups. The 2015 Can - Am ATV and Side - by - Side vehicle lineup includes: Maverick X3, Maverick X3 XDS, Maverick X3 XRS, Maverick MAX X ds, Maverick MAX X rs DPS, Maverick MAX, Maverick X ds, Maverick X mr, Maverick X xc DPS, Maverick X rs DPS, Maverick, Commander MAX LIMITED, Commander MAX XT, Commander MAX DPS, Commander LIMITED, Commander XT - P, Commander XT, Commander DPS, Commander E LSV SE, Commander E LSV, Commander E XT, Commander E, Outlander MAX LIMITED, Outlander MAX XT - P, Outlander MAX XT, Outlander MAX DPS, Outlander MAX, Outlander 1000 X mr, Outlander 800R X mr, Outlander 650 X mr, Outlander 6X6 XT, Outlander XT - P, Outlander XT, Outlander DPS, Outlander, Outlander L MAX DPS, Outlander L MAX, Outlander L DPS, Outlander L, Renegade X xc, Renegade, DS 450 X mx, DS 450 X xc, DS 250, DS X, and DS. Can - Am was brought to life in 1972 when BRP created Can - Am as a motorcycle brand. Can - Am first manufactured high - performance motocross dirtbikes. In February 1998, BRP entered yet another market, all - terrain vehicles (ATVs), by introducing a prototype of the Traxter, a utility - based ATV. Just one year later, the BRP Traxter all - terrain vehicle was named "ATV of the Year '' by ATV Magazine. Later that year, BRP also began working on its second ATV, a pure sport model called the DS650 that was designed for experienced sports enthusiasts. In 2000, BRP added another variation of the Traxter all - terrain vehicle, the Traxter XL. The BRP Traxter XL was the first 4x4 all - terrain vehicle to feature a dumping box - bed. In 2002, BRP introduced another Traxter all - terrain vehicle, the Traxter MAX, which featured two seats. The BRP Traxter MAX was the first manufacturer - approved ATV for two people. 2005 was a successful racing season for BRP. In January, Antoine Morel of France successfully completed and won what is arguably the hardest off - road race in the world, the Dakar Rally, racing on a BRP DS 650X. In late October, BRP won its first GNCC Racing Series Championship in the Utility Modified ATV Class on an Outlander 800 ATV. BRP would earn a total of 12 GNCC Racing Championships in the following four seasons of GNCC Racing. In September 2005, BRP introduced the APACHE track kit, the first and only OEM ATV track kit to fit most major all - terrain vehicles. May 2006 was a big year for Can - Am. It was announced that Bombardier ATV would become Can - Am ATV. Starting in 2007, BRP re-branded its ATV segment as Can - Am. Just one year after announcing the re-branding, BRP inaugurated its new plant in Juarez, Mexico.
The new plant would oversee manufacturing and assembly of the Can - Am Outlander and Renegade ATV model lineup, including the Rotax engines that power these two all - terrain vehicles. In January 2006, the newly branded Can - Am ATVs earned yet another Dakar Rally win and took the remaining positions on the podium in the ATV category. Juan Manuel Gonzales, Antoine Morel and Alain Morel spent 16 challenging days and traversed more than 9,000 kilometers from Lisbon, Portugal, to Dakar, Senegal, to earn a spot on the podium in the 2006 Dakar Rally aboard Can - Am all - terrain vehicles (ATVs). 2010 marked another milestone for Can - Am with the introduction of the Can - Am Commander, its first side - by - side vehicle. The 2011 Can - Am Commander model lineup featured a total of six models with two different engine sizes to choose from. The Can - Am Commander 1000 features an 85 - hp Rotax 1000 V - Twin engine and an industry - exclusive dual - level cargo box. Shortly after its introduction, the Can - Am Commander 1000 received the "Best of the Best '' award in the Side - by - Side vehicle category from Field & Stream magazine in 2011. Two years later, Can - Am announced another side - by - side vehicle, the 2013 Can - Am Maverick. The Can - Am Maverick 1000R was designed to be a pure sport side - by - side and would compete against the Polaris RZR XP 1000 and the Arctic Cat Wildcat 1000 H.O. The Can - Am Maverick offered the highest horsepower of any manufacturer at the time, 101 horsepower, from its 976cc Rotax V - Twin engine. In September 2014, Can - Am introduced the industry 's first side - by - side vehicle to come with a turbo straight from the manufacturer. The Can - Am Maverick X ds Turbo features a 121 - hp turbocharged Rotax 1000R engine option, currently making it the highest - horsepower side - by - side vehicle on the market. Along with the introduction of the 2015 Can - Am Maverick X ds, Can - Am also expanded its line of recreational side - by - side vehicles by adding the Can - Am Commander MAX Limited model.
what was the goal of the african national congress
African National Congress - wikipedia The African National Congress (ANC) is the Republic of South Africa 's governing political party. It has been the ruling party of post-apartheid South Africa on the national level, beginning with the election of Nelson Mandela in 1994. Today, the ANC remains the dominant political party in South Africa, having won every election since 1994. Cyril Ramaphosa, the incumbent President of South Africa, has served as leader of the ANC since 18 December 2017. Founded on 8 January 1912 by John Langalibalele Dube in Bloemfontein as the South African Native National Congress (SANNC), the organisation 's primary mission was to give voting rights to black and mixed - race Africans and, from the 1940s, to end apartheid. The ANC originally attempted to use nonviolent protests to end apartheid; however, the Sharpeville massacre, in which 69 black Africans were killed, contributed to deteriorating relations with the South African government. On 8 April 1960, the administration of Charles Robberts Swart banned the ANC and forced the party to leave South Africa. After the ban, the ANC formed Umkhonto we Sizwe (Spear of the Nation) to fight against apartheid using guerrilla warfare and sabotage. On 3 February 1990, State President F.W. de Klerk lifted the ban on the ANC and released Nelson Mandela on 11 February 1990. On 17 March 1992, the apartheid referendum was passed by white voters, endorsing the process of ending apartheid and allowing the ANC to contest the 1994 election. Since 1994 the ANC has won more than 60 % of the vote in every general election, including the most recent one in 2014. The founding of the SANNC was in direct response to injustice against black South Africans at the hands of the government then in power. It can be said that the SANNC had its origins in a pronouncement by Pixley ka Isaka Seme, who said in 1911, "Forget all the past differences among Africans and unite in one national organisation. '' The SANNC was founded the following year on 8 January 1912. The government of the newly formed Union of South Africa began a systematic oppression of black people in South Africa. The Land Act, promulgated in 1913, forced many black South Africans from their farms into the cities and towns to work and restricted their movement within South Africa. By 1919, the SANNC was leading a campaign against passes (an ID which black South Africans had to possess). However, it then became dormant in the mid-1920s. During that time, black people were also represented by the ICU and the previously white - only Communist party. In 1923, the organisation became the African National Congress, and in 1929 the ANC supported a militant mineworkers ' strike. By 1927, J.T. Gumede (president of the ANC) proposed co-operation with the Communists in a bid to revitalise the organisation, but he was voted out of power in the 1930s. This led to the ANC becoming largely ineffectual and inactive, until the mid-1940s when the ANC was remodelled as a mass movement. The ANC responded to attacks on the rights of black South Africans by calling for strikes, boycotts, and defiance. This led to a later Defiance Campaign in the 1950s, a mass movement of resistance to apartheid. The government tried to stop the ANC by banning party leaders and enacting new restrictive laws; however, these measures ultimately proved to be ineffective.
In 1955, the Congress of the People officially adopted the Freedom Charter, stating the core principles of the South African Congress Alliance, which consisted of the African National Congress and its allies the South African Communist Party (SACP), the South African Indian Congress, the South African Congress of Democrats (COD) and the Coloured People 's Congress. The government claimed that this was a communist document, and consequently leaders of the ANC and Congress were arrested. 1960 saw the Sharpeville massacre, in which 69 people were killed when police opened fire on anti-apartheid protesters. uMkhonto we Sizwe or MK, translated "The Spear of the Nation '', was the military wing of the ANC. Partly in response to the Sharpeville massacre of 1960, individual members of the ANC found it necessary to consider violence to combat what passive protest had failed to quell. In co-operation with the South African Communist Party, MK was founded in 1961. MK commenced the military struggle against apartheid with acts of sabotage aimed at the installations of the state, and in the early stages was reluctant to target civilian targets. MK was responsible for the deaths of both civilians and members of the military. Acts committed by MK include the Church Street bombing and the Magoo 's Bar bombing. It was integrated into the South African National Defence Force by 1994. The ANC and its members were officially removed from the US terrorism watch list in 2008. The ANC deems itself a force of national liberation in the post-apartheid era; it officially defines its agenda as the National Democratic Revolution. The ANC is a member of the Socialist International. It also sets forth the redressing of socio - economic differences stemming from colonial - and apartheid - era policies as a central focus of ANC policy. The National Democratic Revolution (NDR) is described as a process through which the National Democratic Society (NDS) is achieved; a society in which people are intellectually, socially, economically and politically empowered. The drivers of the NDR are also called the motive forces and are defined as the elements within society that gain from the success of the NDR. Using contour plots or concentric circles the centre represents the elements in society that gain the most out of the success of the NDR. Moving away from the centre results in the reduction of the gains that those elements derive. It is generally believed that the force that occupies the centre of those concentric circles in countries with low unemployment is the working class while in countries with higher levels of unemployment it is the unemployed. Some of the many theoreticians that have written about the NDR include Joe Slovo, Joel Netshitenzhe and Tshilidzi Marwala. In 2004, the ANC declared itself to be a social democratic party. The 53rd National Conference of the ANC, held in 2015, stated in its "Discussion Document '' that "China economic development trajectory remains a leading example of the triumph of humanity over adversity. The exemplary role of the collective leadership of the Communist Party of China in this regard should be a guiding lodestar of our own struggle. '' It went on to state that "The collapse of the Berlin Wall and socialism in the Soviet Union and Eastern European States influenced our transition towards the negotiated political settlement in our country. The cause of events in the world changed tremendously in favour of the US led imperialism. 
'' The ANC holds a historic alliance with the South African Communist Party (SACP) and Congress of South African Trade Unions (COSATU), known as the Tripartite Alliance. The SACP and COSATU have not contested any election in South Africa, but field candidates through the ANC, hold senior positions in the ANC, and influence party policy and dialogue. During Mbeki 's presidency, the government took a more pro-capitalist stance, often running counter to the demands of the SACP and COSATU. Following Zuma 's accession to the ANC leadership in 2007 and Mbeki 's resignation as president in 2008, a number of former ANC leaders led by Mosiuoa Lekota split away from the ANC to form the Congress of the People. On 20 December 2013, a special congress of the National Union of Metalworkers of South Africa (NUMSA), the country 's biggest trade union with 338,000 members, voted to withdraw support from the ANC and SACP, and form a socialist party to protect the interests of the working class. NUMSA secretary general Irvin Jim condemned the ANC and SACP 's support for big business and stated: "It is clear that the working class can not any longer see the ANC or the SACP as its class allies in any meaningful sense. '' The ANC flag comprises three equal horizontal stripes -- black, green and gold. Black symbolises the native people of South Africa, green represents the land and gold represents the mineral and other natural wealth of South Africa. This flag was also the battle flag of uMkhonto we Sizwe. The Grand Duchy of Saxe - Weimar - Eisenach used an unrelated but identical flag from 1813 to 1897. The black, green and gold tricolor was also used on the flag of the KwaZulu bantustan. Politicians in the party win a place in parliament by being on the Party List, which is drawn up before the elections and enumerates, in order, the party 's preferred MPs. The number of seats allocated is proportional to the popular national vote, and this determines the cut - off point. The ANC has also gained members through the controversial floor crossing process. Although most South African parties announced their candidate list for provincial premierships in the 2009 election, the ANC did not, as it is not required for parties to do so. In 2001, the ANC launched an online weekly web - based newsletter, ANC Today -- Online Voice of the African National Congress to offset the alleged bias of the press. It consists mainly of updates on current programmes and initiatives of the ANC. The ANC represented the main opposition to the government during apartheid and therefore they played a major role in resolving the conflict through participating in the peacemaking and peace - building processes. Initially intelligence agents of the National Party met in secret with ANC leaders, including Nelson Mandela, to judge whether conflict resolution was possible. Discussions and negotiations took place leading to the eventual unbanning of the ANC and other opposing political parties by then President de Klerk on 2 February 1990. The next official step towards rebuilding South Africa was the Groote Schuur Minute where the government and the ANC agreed on a common commitment towards the resolution of the existing climate of violence and intimidation, as well as a commitment to stability and to a peaceful process of negotiations. The ANC negotiated the release of political prisoners and the indemnity from prosecution for returning exiles and moreover channels of communication were established between the Government and the ANC. 
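The proportional party - list rule described above can be illustrated with a small sketch before returning to the negotiation history. The example below uses the largest remainder method with a simple Hare quota and invented vote totals; the actual South African formula (a Droop quota applied across separate regional and national lists) is more involved, so this is an illustration of the principle rather than the statutory procedure.

    def allocate_seats(votes, total_seats):
        """Largest remainder allocation with a Hare quota (illustrative only).

        votes maps party name -> national vote total; the return value maps
        party name -> seats and always sums to total_seats.
        """
        total_votes = sum(votes.values())
        quota = total_votes / total_seats
        # Each party first receives one seat per full quota of votes it polled.
        seats = {party: int(v // quota) for party, v in votes.items()}
        remainders = {party: v - seats[party] * quota for party, v in votes.items()}
        # Any seats still unfilled go to the parties with the largest remainders.
        leftover = total_seats - sum(seats.values())
        for party in sorted(remainders, key=remainders.get, reverse=True)[:leftover]:
            seats[party] += 1
        return seats

    # Hypothetical vote totals for a 400-seat chamber; not real election results.
    print(allocate_seats({"A": 6_233_000, "B": 2_151_000, "C": 1_016_000, "D": 600_000}, 400))
    # -> {'A': 249, 'B': 86, 'C': 41, 'D': 24}

The number of seats a party is allocated in this way is the cut - off point mentioned above: its candidate list is read down only as far as that number.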
Later the Pretoria Minute represented another step towards resolution where agreements at Groote Schuur were reconsolidated and steps towards setting up an interim government and drafting a new constitution were established as well as suspension of the military wing of the ANC -- the Umkhonto we Sizwe. This step helped end much of the violence within South Africa. Another agreement that came out of the Pretoria Minute was that both parties would try and raise awareness that a new way of governance was being created for South Africa, and that further violence would only hinder this process. However, violence still continued in Kwazulu - Natal, which violated the trust between Mandela and de Klerk. Moreover, internal disputes in the ANC prolonged the war as consensus on peace was not reached. The next significant steps towards resolution were the Repeal of the Population Registration Act, the repeal of the Group Areas and the Native Land Acts and a catch - all Abolition of Racially Based Land Measures Act was passed. These measures ensured no one could claim, or be deprived of, any land rights on the basis of race. In December 1991 the Convention for a Democratic South Africa (CODESA) was held with the aim of establishing an interim government. However, a few months later in June 1992 the Boipatong massacre occurred and all negotiations crumbled as the ANC pulled out. After this negotiations proceeded between two agents, Cyril Ramaphosa of the ANC, and Roelf Meyer of the National Party. In over 40 sessions the two men discussed and negotiated over many issues including the nature of the future political system, the fate of over 40,000 government employees and if / how the country would be divided. The result of these negotiations was an interim constitution that meant the transition from apartheid to democracy was a constitutional continuation and that the rule of law and state sovereignty remained intact during the transition, which was vital for stability within the country. A date was set for the first democratic elections on 27 April 1994. The ANC won 62.5 % of the votes and has been in power ever since. The most prominent corruption case involving the ANC relates to a series of bribes paid to companies involved in the ongoing R55 billion Arms Deal saga, which resulted in a long term jail sentence to then Deputy President Jacob Zuma 's legal adviser Schabir Shaik. Zuma, the former South African President, was charged with fraud, bribery and corruption in the Arms Deal, but the charges were subsequently withdrawn by the National Prosecuting Authority of South Africa due to their delay in prosecution. The ANC has also been criticised for its subsequent abolition of the Scorpions, the multidisciplinary agency that investigated and prosecuted organised crime and corruption, and was heavily involved in the investigation into Zuma and Shaik. Tony Yengeni, in his position as chief whip of the ANC and head of the Parliaments defence committee has recently been named as being involved in bribing the German company ThyssenKrupp over the purchase of four corvettes for the SANDF. Other recent corruption issues include the sexual misconduct and criminal charges of Beaufort West municipal manager Truman Prince, and the Oilgate scandal, in which millions of Rand in funds from a state - owned company were funnelled into ANC coffers. The ANC has also been accused of using government and civil society to fight its political battles against opposition parties such as the Democratic Alliance. 
The result has been a number of complaints and allegations that none of the political parties truly represent the interests of the poor. This has resulted in the "No Land! No House! No Vote! '' campaign, which became very prominent during elections. In late 2011 the ANC was heavily criticised over the passage of the Protection of State Information Bill, which opponents claimed would improperly restrict the freedom of the press. Opposition to the bill included otherwise ANC - aligned groups such as COSATU. Notably, Nelson Mandela and other Nobel laureates Nadine Gordimer, Archbishop Desmond Tutu, and F.W. de Klerk have expressed disappointment with the bill for not meeting standards of constitutionality and aspirations for freedom of information and expression. The ANC has been criticised for its role in failing to prevent the 16 August 2012 massacre of Lonmin miners at Marikana in the North West. Some allege that Police Commissioner Riah Phiyega and Police Minister Nathi Mthethwa, a close confidant of Jacob Zuma, may have given the go - ahead for the police action against the miners on that day. Commissioner Phiyega of the ANC came under further criticism as being insensitive and uncaring when she was caught smiling and laughing during the Farlam Commission 's video playback of the ' massacre '. Archbishop Desmond Tutu has announced that he can no longer bring himself to vote for the ANC, as it is no longer the party that he and Nelson Mandela fought for, and that the party has now lost its way and is in danger of becoming a corrupt entity in power. The ANC has a growing list of constitutional failures. One of the most prominent relates to the president of the ANC and of the Republic, Jacob Zuma, and his Nkandla homestead 's security upgrades, valued at around R250 million. His swimming pool, for example, was termed a ' fire pool ' and his amphitheatre an ' emergency meeting point ', thus leaving the taxpayer to carry the costs. After the Public Protector released her report (Secure in Comfort), which found that Zuma must pay back the money spent on the non-security features, he refused to do so. In 2016 the Constitutional Court ruled that Zuma, as well as the National Assembly, had "breached the Constitution '' and failed to uphold it. Zuma apologised to the nation as follows: "The matter has caused a lot of frustration and confusion for which I apologise on my behalf and on behalf of government. '' However, he claimed not to have asked for, nor known about, the non-security upgrades, despite the media reporting on them almost daily. There is also a growing trend for ANC members, as well as individuals appointed by the ANC to public positions of power, to misrepresent their qualifications. Such lies typically leave those appointed unable to fulfill their obligations while being paid very large salaries, and typically cost the taxpayer large amounts of money when the appointees attempt to defend themselves in court. A small selection follows: Carl Niehaus, who served as an ANC spokesperson, claimed to have B.A., Master 's and Doctorate degrees; in reality he never received the Master 's or Doctoral degrees. Pallo Jordan, who served as Minister of Arts and Culture, claimed to be in possession of a PhD, when in reality he has no tertiary education at all. Daniel Mtimkulu, who was employed as chief engineer at the Passenger Rail Agency of South Africa (Prasa), claimed to have a PhD in engineering, which was a lie; he was merely qualified as an engineering technician.
Under Mtimkulu 's leadership, Prasa ordered 70 new locomotives, valued at R3.5 billion. The first 13 Afro 4000 diesel locomotives to arrive, at a cost of R600 million, were too tall to be of use on their intended routes. Ellen Tshabalala, former chairperson of the South African Broadcasting Corporation (SABC), claimed to have a BComm degree. In reality her marks were so poor (13 % for one module and 35 % for another, amongst others) that she was not allowed to rewrite some of her exams. She later claimed that her degree certificate was stolen. Defending Tshabalala in court cost the SABC more than R1 million. Hlaudi Motsoeneng, former COO of the SABC, lied about being in possession of a Matric certificate. By his own admission, he simply invented marks for himself. He was appointed by Ellen Tshabalala, and his various court cases have cost the SABC more than R1.5 million. Further, under Motsoeneng 's tenure, the broadcaster recorded a net loss of R411 million in the 2015 / 16 financial year. Dudu Myeni, chairperson of South African Airways (SAA) and a good friend of Jacob Zuma, claimed to have a Bachelor 's degree in administration. This was proven false. Under her leadership, "SAA 's losses for the 2014 / 15 financial year were R5.6 - billion -- close to R1 - billion more than the expected amount of R4.7 - billion ''. Sicelo Shiceka, Minister of Cooperative Governance and Traditional Affairs, lied about being in possession of a Master 's degree. He used taxpayers ' money to fund a party for his mother and secured a government car for his girlfriend, whereafter he was appointed as a member of the inter-ministerial task team on corruption. A 2016 statement issued by Zizi Kodwa, the ANC National Spokesperson, states that "(t) he ANC rejects these (racist) comments with the contempt they deserve and calls on all South Africans to join in the rejection of all racists in our country, wherever they are. It is sad that well meaning South Africans have to contend with this backward attitude. '' In support of this statement, the ANC has publicly called for legal action to be taken against whites who have publicly made racist comments against blacks, usually through social media. Penny Sparrow is one such high - profile case. She posted the following through her Facebook account: These monkeys that are allowed to be released on New Year 's eve and New Year 's day on to public beaches towns etc obviously have no education what so ever so to allow them loose is inviting huge dirt and troubles and discomfort to others. I 'm sorry to say that I was among the revellers and all I saw were black on black skins what a shame. I do know some wonderful and thoughtful black people. This lot of monkeys just do n't want to even try. But think they can voice opinions about statute and get their way oh dear. From now I shall address the blacks of South Africa as monkeys as I see the cute little wild monkeys do the same, pick drop and litter. Sparrow pleaded guilty to crimen injuria, and was presented with a choice of either paying a R5,000 fine or serving 12 months in jail, in addition to paying the legal fees incurred by the ANC, which brought the matter to court. In a separate instance, she was also ordered to pay R150,000 to the Oliver and Adelaide Tambo Trust. In contrast to the ANC 's swift and decisive action towards Sparrow and other white racists, the party has mostly ignored racist comments voiced by blacks, in particular ANC members. 
For example, Kenny Barrel Nkosi, an ANC ward councillor (Govan Mbeki Municipality, Mpumalanga), posted the following on his Facebook account: "The first people that need to fokkof (fuck off) are whites, cubans never oppressed us. these are our true friends they were there in the times on needs. welcom cdes welcome (sic) '' The municipality issued the following statement: "The matter has been investigated and at the time of the comment, the ward councillor was not representing the views of either the ANC or the Govan Mbeki Municipality, but merely as a personal opinion. '' No further action was taken. At a Gupta family wedding held at Sun City in 2013, various incidents of racism occurred. The family made clear that they wanted only white workers, including waiters, security, bar staff and cleaning staff. Black workers were told to wash before they interacted with guests. These allegations were denied by the Gupta family. Nonetheless, in the Gupta e-mail leak of 2017 the allegations were shown to be correct. Moreover, the e-mails also make clear that a black worker was called a monkey by a member of the Gupta family. That the Gupta family is a large, vocal and powerful supporter of the ANC and a personal friend of Jacob Zuma may explain why no action was taken against them with regard to racism. Lindiwe Sisulu, ANC member and Minister of Defence and Military Veterans (who demanded that the Estate Agency Affairs Board report to her regarding action taken against Sparrow), called the Democratic Alliance leader, Mmusi Maimane, a "hired native ''. Ironically -- given that Chris Hart, prominent economist and investment strategist at Standard Bank, was forced to resign for his racist tweet stating that "(m) ore than 25 years after Apartheid ended, the victims are increasing along with a sense of entitlement and hatred towards minorities.... '' -- Sisulu said the following, while discussing the 2.3 million housing backlog: "What makes an 18 - year - old think the state owes them a house? It 's a culture of entitlement... we ca n't continue with a dependency culture. '' No action has been taken against Sisulu. Lulu Xingwana, former ANC Minister of Women, Children and People with Disabilities, stated that "(y) oung Afrikaner men are brought up in the Calvinist religion believing that they own a woman, they own a child, they own everything and therefore they can take that life because they own it ''. The minister apologised, and no further action was taken against her. Jimmy Manyi, ANC director general of labour and later ANC spokesperson, is quoted as saying the following in a TV interview: "I think its very important for coloured people in this country to understand that South Africa belongs to them in totality, not just the Western Cape. So this over-concentration of coloureds in the Western Cape is not working for them. They should spread in the rest of the country... so they must stop this over-concentration situation because they are in over-supply where they are so you must look into the country and see where you can meet the supply. '' No action has been taken against Manyi. Julius Malema, former ANCYL leader and current EFF leader, stated at a political rally in 2016 that "We (the EFF) are not calling for the slaughter of white people, at least for now ''. When asked for comment by a news agency, the ANC spokesperson, Zizi Kodwa, stated that there would be no comment from the ANC, as "(h) e (Malema) was addressing his own party supporters. 
'' While still the ANCYL leader, Malema was taken to the Equality Court by AfriForum for repeatedly singing "dubul ' ibhunu '', which translates as "shoot the boer (white farmer) ''. The ANC supported Malema, though AfriForum and the ANC reached a settlement before the appeal case was due to be argued in the Supreme Court of Appeal. In partial response to the Penny Sparrow case, Velaphi Khumalo, while working for the Department of Sport, Arts, Culture and Recreation, posted the following on his Facebook account: "I want to cleans this country of all white people. we must act as Hitler did to the Jews. I do n't believe any more that the is a large number of not so racist whit people. I 'm starting to be sceptical even of those within our Movement the ANC. I will from today unfriend all white people I have as friends from today u must be put under the same blanket as any other racist white because secretly u all are a bunch of racist fuck heads. as we have already seen (all sic). '' He also posted: "Noo seriously though u oppressed us when u were a minority and then manje u call us monkeys and we supposed to let it slide. white people in south Africa deserve to be hacked and killed like Jews. U have the same venom moss. look at Palestine. noo u must be bushed alive and skinned and your off springs used as garden fertiliser (all sic) ''. The Department of Sports, Arts, Culture and Recreation responded with a statement wherein it "views the hateful post by Velaphi Khumalo in a serious light. Our key mandate is nation - building and social cohesion. His sentiments take our country backwards and do not reflect what the Gauteng provincial government stands for. '' Khumalo was suspended on full pay while an investigation was undertaken, was found guilty in an internal disciplinary procedure, and was issued with a warning, whereafter he resumed his work at the department. Esethu Hasane, Media and Communication Manager for the Department of Sport and Recreation, tweeted the following during the severe droughts in the Western Cape in 2017: "Only Western Cape still has dry dams. Please God, we have black people there, choose another way of punishing white people. '' Despite calls for his dismissal, no action was taken.
when did the song we will rock you come out
We Will Rock You - wikipedia "We Will Rock You '' is a song written by Brian May and recorded by Queen for their 1977 album News of the World. Rolling Stone ranked it number 330 of "The 500 Greatest Songs of All Time '' in 2004, and it placed at number 146 on the Songs of the Century list in 2001. In 2009, "We Will Rock You '' was inducted into the Grammy Hall of Fame. Other than the last 30 seconds, which contain a guitar solo by May, the song is generally set in a cappella form, using only stomping and clapping as a rhythmic body percussion beat. In 1977, "We Will Rock You '' and "We Are the Champions '' were issued together as a worldwide top 10 single. Soon after the album was released, many radio stations began playing the songs consecutively and without interruption. Since its release, "We Will Rock You '' has been covered, remixed, sampled, parodied, referenced and used by multiple recording artists, TV shows, films and other media worldwide. Since its release, the song has become a staple at sports events around the world as a stadium anthem, mostly due to its simple rhythm. On 7 October 2017, Queen released a Raw Sessions version of the track to celebrate the 40th anniversary of the release of News of the World. It features a radically different approach to the guitar solo and includes May 's count - in immediately prior to the recording. "We Will Rock You '' and "We Are the Champions '' were written in response to an event that occurred during the A Day at the Races Tour. The band played at Stafford 's Bingley Hall, and, according to Brian May: We did an encore and then went off, and instead of just keeping clapping, they sang "You 'll Never Walk Alone '' to us, and we were just completely knocked out and taken aback -- it was quite an emotional experience really, and I think these chant things are in some way connected with that. One version was used as the opening track on 1977 's News of the World. This version consists of a stomp - stomp - clap - pause beat and a power chorus, making it somewhat of an anthem. The stamping effects were created by the band overdubbing the sounds of themselves stomping and clapping many times and adding delay effects to create the sound of many people participating. The durations of the delays were in the ratios of prime numbers, a technique now known as non-harmonic reverberation. A tape loop is used to repeat the last phrase of the guitar solo three times, rather than Brian May playing it three separate times on the recording. The "stomp, stomp, clap '' sounds were later used in the Queen + Paul Rodgers song "Still Burnin ' ''. When performed live, the song is usually followed by "We Are the Champions '', as they were designed to run together. The songs are often paired on the radio and at sporting events, where they are frequently played. They were the last two songs Queen performed at Live Aid in 1985. Queen also performed an alternate version of "We Will Rock You '' known as the "fast version '', featuring a faster - feeling tempo and a full band arrangement. The band would frequently use this version to open their live sets in the late 1970s and early 1980s, as heard on the albums Live Killers (1979), Queen on Fire - Live at the Bowl (2004), Queen Rock Montreal (2007), and the expanded edition of News of the World (2011). A studio recording of this version is also known to exist, recorded for John Peel 's show on BBC Radio 1 in 1977. It is part of a longer cut that starts with the original version. 
In 2002, the fast version was officially released on a promo single distributed by the tabloid The Sun. The "fast '' BBC studio version can also be found on The Best of King Biscuit Live Volume 4. Between the two versions, there is a brief cut of a woman discussing Brahmanism, used in a BBC Radio documentary. The fast version is also used as the curtain call music for the musical of the same title, after the finale, which is a pairing of the original "We Will Rock You '' and "We Are the Champions. '' Since its release, the song has become a staple at sporting events around the world as a stadium anthem. It was the most played song during the 2008 -- 2009 seasons of the National Football League, National Hockey League and Major League Baseball. In 2000, English boy band Five released a cover of "We Will Rock You ''. It was taken from their second studio album, Invincible (1999). Released on 17 July 2000, the song features two members of Queen: Brian May on guitar and Roger Taylor on drums; however, they do not sing any vocals on the track. Freddie Mercury had died in November 1991, nearly a decade before this version 's release, and John Deacon had retired from public life three years before its release. The song charted at number one on the UK Singles Chart, making it Five 's second number - one single, and their ninth consecutive top - ten hit.
when was the song we are the world made
We Are the World - wikipedia "We Are the World '' is a song and charity single originally recorded by the supergroup United Support of Artists (USA) for Africa in 1985. It was written by Michael Jackson and Lionel Richie (with arrangements by Michael Omartian) and produced by Quincy Jones for the album We Are the World. With sales in excess of 20 million copies, it is one of the fewer than 30 all - time physical singles to have sold at least 10 million copies worldwide. Following Band Aid 's 1984 "Do They Know It 's Christmas? '' project in the United Kingdom, an idea for the creation of an American benefit single for African famine relief came from activist Harry Belafonte, who, along with fundraiser Ken Kragen, was instrumental in bringing the vision to reality. Several musicians were contacted by the pair, before Jackson and Richie were assigned the task of writing the song. The duo completed the writing of "We Are the World '' seven weeks after the release of "Do They Know It 's Christmas? '', and one night before the song 's first recording session, on January 21, 1985. The historic event brought together some of the most famous artists in the music industry at the time. The song was released on March 7, 1985, as the only single from the album. A worldwide commercial success, it topped music charts throughout the world and became the fastest - selling American pop single in history. The first ever single to be certified multi-platinum, "We Are the World '' received a Quadruple Platinum certification by the Recording Industry Association of America. Awarded numerous honors -- including three Grammy Awards, one American Music Award, and a People 's Choice Award -- the song was promoted with a critically received music video, a home video, a special edition magazine, a simulcast, and several books, posters, and shirts. The promotion and merchandise aided the success of "We Are the World '' and raised over $63 million (equivalent to $138 million today) for humanitarian aid in Africa and the US. Following the devastation caused by the magnitude 7.0 M earthquake in Haiti on January 12, 2010, a remake of the song by another all - star cast of singers was recorded on February 1, 2010. Entitled "We Are the World 25 for Haiti '', it was released as a single on February 12, 2010, and proceeds from the record aided survivors in the impoverished country. Before the writing of "We Are the World '', American entertainer and social activist Harry Belafonte had sought for some time to have a song recorded by the most famous artists in the music industry at the time. He planned to have the proceeds donated to a new organization called United Support of Artists for Africa (USA for Africa). The non-profit foundation would then feed and relieve starving people in Africa, specifically Ethiopia, where around one million people died during the country 's 1983 -- 85 famine. The idea followed Band Aid 's "Do They Know It 's Christmas? '' project in the UK, which Belafonte had heard about. In the activist 's plans, money would also be set aside to help eliminate hunger in the United States of America. Entertainment manager and fellow fundraiser Ken Kragen was contacted by Belafonte, who asked for singers Lionel Richie and Kenny Rogers -- Kragen 's clients -- to participate in Belafonte 's musical endeavor. Kragen and the two musicians agreed to help with Belafonte 's mission, and in turn, enlisted the cooperation of Stevie Wonder, to add more "name value '' to their project. 
Quincy Jones was drafted to co-produce the song, taking time out from his work on The Color Purple. Jones also telephoned Michael Jackson, who had just released the commercially successful Thriller album and had concluded a tour with his brothers. Jackson revealed to Richie that he not only wanted to sing the song, but to participate in its writing as well. To begin with, "We Are the World '' was to be written by Jackson, Richie, and Wonder. As Wonder had limited time to work on the project, Jackson and Richie proceeded to write "We Are the World '' themselves. They began creating the song at Hayvenhurst, the Jackson family home in Encino. For a week, the two spent every night working on lyrics and melodies in the singer 's bedroom. They knew that they wanted a song that would be easy to sing and memorable. The pair wanted to create an anthem. Jackson 's older sister La Toya watched the two work on the song, and later contended that Richie only wrote a few lines for the track. She stated that her younger brother wrote 99 percent of the lyrics, "but he 's never felt it necessary to say that ''. La Toya further commented on the song 's creation in an interview with the American celebrity news magazine People. "I 'd go into the room while they were writing and it would be very quiet, which is odd, since Michael 's usually very cheery when he works. It was very emotional for them. '' Richie had recorded two melodies for "We Are the World '', which Jackson took, adding music and words to the song in the same day. Jackson stated, "I love working quickly. I went ahead without even Lionel knowing, I could n't wait. I went in and came out the same night with the song completed -- drums, piano, strings, and words to the chorus. '' Jackson then presented his demo to Richie and Jones, who were both shocked; they did not expect the pop star to see the structure of the song so quickly. The next meetings between Jackson and Richie were unfruitful; the pair did not produce any additional vocals and got no work done. It was not until the night of January 21, 1985, that Richie and Jackson completed the lyrics and melody of "We Are the World '' within two and a half hours, one night before the song 's first recording session. The first night of recording, January 22, 1985, had tight security on hand, as Richie, Jackson, Wonder, and Jones started work on "We Are the World '' at Kenny Rogers ' Lion Share Recording Studio. The studio, on Beverly Boulevard in California, was filled with musicians, technicians, video crews, retinues, assistants, and organizers as the four musicians entered. To begin the night, a "vocal guide '' of "We Are the World '' was recorded by Richie and Jackson and duplicated on tape for each of the invited performers. The guide was recorded on the sixth take, as Quincy Jones felt that there was too much "thought '' in the previous versions. Following their work on the vocal guide, Jackson and Jones began thinking of alternatives for the line "There 's a chance we 're taking, we 're taking our own lives '': the pair was concerned that the latter part of the line would be considered a reference to suicide. As the group listened to a playback of the chorus, Richie declared that the last part of the line should be changed to "We 're ' saving ' our own lives '', which his fellow musicians agreed with. Producer Jones also suggested altering the former part of the line. "One thing we do n't want to do, especially with this group, is look like we 're patting ourselves on the back. 
So it 's really, ' There 's a choice we 're making. ' '' Around 1: 30 am, the four musicians ended the night by finishing a chorus of melodic vocalizations, including the sound "sha - lum sha - lin - gay ''. Jones told the group that they were not to add anything else to the tape. "If we get too good, someone 's gon na start playing it on the radio, '' he announced. On January 24, 1985, after a day of rest, Jones shipped Richie and Jackson 's vocal guide to all of the artists who would be involved in "We Are the World '' 's recording. Enclosed in the package was a letter from Jones, addressed to "My Fellow Artists '': The cassettes are numbered, and I ca n't express how important it is not to let this material out of your hands. Please do not make copies, and return this cassette the night of the 28th. In the years to come, when your children ask, ' What did mommy and daddy do for the war against world famine? ', you can say proudly, this was your contribution. Ken Kragen chaired a production meeting at a bungalow off Sunset Boulevard on January 25, 1985. There, Kragen and his team discussed where the recording sessions with the supergroup of musicians should take place. He stated, "The single most damaging piece of information is where we 're doing this. If that shows up anywhere, we 've got a chaotic situation that could totally destroy the project. The moment a Prince, a Michael Jackson, a Bob Dylan -- I guarantee you! -- drives up and sees a mob around that studio, he will never come in. '' On the same night, Quincy Jones ' associate producer and vocal arranger, Tom Bahler, was given the task of matching each solo line with the right voice. Bahler stated, "It 's like vocal arranging in a perfect world. '' Jones disagreed, stating that the task was like "putting a watermelon in a Coke bottle ''. The following evening, Lionel Richie held a "choreography '' session at his home, where it was decided who would stand where. The final night of recording was held on January 28, 1985, at A&M Recording Studios in Hollywood. Michael Jackson arrived at 9 pm, earlier than the other artists, to record his solo section and record a vocal chorus by himself. He was subsequently joined in the recording studio by the remaining USA for Africa artists, who included Ray Charles, Billy Joel, Diana Ross, Cyndi Lauper, Bruce Springsteen and Smokey Robinson. Also in attendance were five of Jackson 's siblings: Jackie, La Toya, Marlon, Randy and Tito. Many of the participants came straight from an American Music Award ceremony, which had been held that same night. Invited musician Prince, who would have had a part in which he and Michael Jackson sang to each other, did not attend the recording session. The reason given for his absence has varied. One newspaper claimed that the singer did not want to record with other acts. Another report, from the time of "We Are the World '' 's recording, suggested that the musician did not want to partake in the session because organizer Bob Geldof called him a "creep ''. Prince did, however, donate an exclusive track, "4 The Tears In Your Eyes '', for the We Are the World album. In all, more than 45 of America 's top musicians participated in the recording, and another 50 had to be turned away. Upon entering the recording studio, the musicians were greeted by a sign pinned to the door which read, "Please check your egos at the door. 
'' They were also greeted by Stevie Wonder, who proclaimed that if the recording was not completed in one take, he and Ray Charles, two blind men, would drive everybody home. Each of the performers took their position at around 10: 30 pm and began to sing. Several hours passed before Stevie Wonder announced that he would like to substitute a line in Swahili for the "sha - lum sha - lin - gay '' sound. At this point, Waylon Jennings left the recording studio for a short time when it was suggested by some that the song be sung in Swahili. A heated debate ensued, in which several artists also rejected the suggestion. The "sha - lum sha - lin - gay '' sound ran into opposition as well and was subsequently removed from the song. Jennings returned to the studio and participated in the recording, which bears his name in the end credits. The participants eventually decided to sing something meaningful in English. They chose to sing the new line "One world, Our children '', which most of the participants enjoyed. In the early hours of the morning, two Ethiopian women, guests of Stevie Wonder, were brought into the recording studio -- it had been decided that a portion of the proceeds raised would be used to bring aid to those affected by the recent famine in Ethiopia. They thanked the singers on behalf of their country, bringing several artists to tears, before being led from the room. Wonder attempted to lighten the mood, by joking that the recording session gave him a chance to "see '' fellow blind musician Ray Charles. "We just sort of bumped into each other! '' The solo parts of the song were recorded without any problems. The final version of "We Are the World '' was completed at 8 am. "We Are the World '' is sung from a first person viewpoint, allowing the audience to "internalize '' the message by singing the word we together. It has been described as "an appeal to human compassion ''. The first lines in the song 's repetitive chorus proclaim, "We are the world, we are the children, we are the ones who make a brighter day, so let 's start giving ''. "We Are the World '' opens with Lionel Richie, Stevie Wonder, Paul Simon, Kenny Rogers, James Ingram, Tina Turner and Billy Joel singing the first verse. Michael Jackson and Diana Ross follow, completing the first chorus together. Dionne Warwick, Willie Nelson and Al Jarreau sing the second verse, before Bruce Springsteen, Kenny Loggins, Steve Perry and Daryl Hall go through the second chorus. Co-writer Jackson, Huey Lewis, Cyndi Lauper and Kim Carnes follow with the song 's bridge. This structuring of the song is said to "create a sense of continuous surprise and emotional buildup ''. "We Are the World '' concludes with Bob Dylan and Ray Charles singing a full chorus, Wonder and Springsteen duetting, and ad libs from Charles and Ingram. On March 8, 1985, "We Are the World '' was released as a single, in both 7 '' and 12 '' format. The song was the only one released from the We Are the World album and became a chart success around the world. In the US, it was a number one hit on the R&B singles chart, the Hot Adult Contemporary Tracks chart and the Billboard Hot 100, where it remained for a month. The single had initially debuted at number 21 on the Hot 100, the highest entry since Michael Jackson 's "Thriller '' entered the charts at number 20 the year before. It took four weeks for the song to claim the number one spot -- half the time a single would normally have taken to reach its charting peak. 
On the Hot 100, the song moved from 21 to 5 to 2 and then number 1. "We Are the World '' might have reached the top of the Hot 100 chart sooner, if it were not for the success of Phil Collins ' "One More Night '', which received a significant level of support from both pop and rock listeners. "We Are the World '' also entered Billboard 's Top Rock Tracks and Hot Country Singles charts, where it peaked at numbers 27 and 76 respectively. The song became the first single since The Beatles ' "Let It Be '' to enter Billboard 's Top 5 within two weeks of release. Outside of the US, the single reached number one in Australia, France, Ireland, Italy, New Zealand, The Netherlands, Norway, Sweden, Switzerland and the UK. The song peaked at number 2 in only two countries: Germany and Austria. The single was also a commercial success; the initial shipment of 800,000 "We Are the World '' records sold out within three days of release. The record became the fastest - selling American pop single in history. At one Tower Records store on Sunset Boulevard in West Hollywood, 1,000 copies of the song were sold in two days. Store worker Richard Petitpas commented, "A number one single sells about 100 to 125 copies a week. This is absolutely unheard of. '' By the end of 1985, "We Are the World '' had become the best selling single of the year. Five years later it was revealed that the song had become the biggest single of the 1980s. "We Are the World '' was eventually cited as the biggest selling single in both US and pop music history. The song became the first - ever single to be certified multi-platinum; it received a 4 × certification by the Recording Industry Association of America. The estimated global sales of "We Are the World '' are said to be 20 million. Despite the song 's commercial success, "We Are the World '' received mixed reviews from journalists, music critics and the public following its release. American journalist Greil Marcus felt that the song sounded like a Pepsi jingle. He wrote, "... the constant repetition of ' There 's a choice we 're making ' conflates with Pepsi 's trademarked ' The choice of a new generation ' in a way that, on the part of Pepsi - contracted song writers Michael Jackson and Lionel Richie, is certainly not intentional, and even more certainly beyond the realm of serendipity. '' Marcus added, "In the realm of contextualization, ' We Are the World ' says less about Ethiopia than it does about Pepsi -- and the true result will likely be less that certain Ethiopian individuals will live, or anyway live a bit longer than they otherwise would have, than that Pepsi will get the catch phrase of its advertising campaign sung for free by Ray Charles, Stevie Wonder, Bruce Springsteen, and all the rest. '' Author Reebee Garofalo agreed, and expressed the opinion that the line "We 're saving our own lives '' was a "distasteful element of self - indulgence ''. He asserted that the artists of USA for Africa were proclaiming "their own salvation for singing about an issue they will never experience on behalf of a people most of them will never encounter ''. In contrast, Stephen Holden of The New York Times praised the phrase "There 's a choice we 're making, We 're saving our own lives ''. He commented that the line assumed "an extra emotional dimension when sung by people with superstar mystiques ''. Holden expressed that the song was "an artistic triumph that transcends its official nature ''. 
He noted that unlike Band Aid 's "Do They Know It 's Christmas '', the vocals on "We Are the World '' were "artfully interwoven '' and emphasized the individuality of each singer. Holden concluded that "We Are the World '' was "a simple, eloquent ballad '' and a "fully - realized pop statement that would sound outstanding even if it were n't recorded by stars ''. The song proved popular with both young and old listeners. The public enjoyed hearing a supergroup of musicians singing together on one track, and felt satisfied in buying the record, knowing that the money would go towards a good cause. People reported they bought more than one copy of the single, some buying up to five copies of the record. One mother from Columbia, Missouri purchased two copies of "We Are the World '', stating, "The record is excellent whether it 's for a cause or not. It 's fun trying to identify the different artists. It was a good feeling knowing that I was helping someone in need. '' According to music critic and Bruce Springsteen biographer Dave Marsh, "We Are the World '' was not widely accepted within the rock music community. The author revealed that the song was "despised '' for what it was not: "a rock record, a critique of the political policies that created the famine, a way of finding out how and why famines occur, an all - inclusive representation of the entire worldwide spectrum of post-Presley popular music ''. Marsh revealed that he felt some of the criticisms were right, while others were silly. He claimed that despite the sentimentality of the song, "We Are the World '' was a large - scale pop event with serious political overtones. "We Are the World '' was recognized with several awards following its release. At the 1986 Grammy Awards, the song and its accompanying music video won four awards: Record of the Year, Song of the Year, Best Pop Performance by a Duo or Group with Vocal and Best Music Video, Short Form. The music video was awarded two honors at the 1985 MTV Video Music Awards. It collected the awards for Best Group Video and Viewer 's Choice. People 's Choice Awards recognized "We Are the World '' with the Favorite New Song award in 1986. In the same year, the American Music Awards named "We Are the World '' "Song of the Year '', and honored organizer Harry Belafonte with the Award of Appreciation. Collecting his award, Belafonte thanked Ken Kragen, Quincy Jones, and "the two artists who, without their great gift would not have inspired us in quite the same way as we were inspired, Mr. Lionel Richie and Mr. Michael Jackson ''. Following the speech, the majority of USA for Africa reunited on stage, closing the ceremony with "We Are the World ''. "We Are the World '' was promoted with a music video, a video cassette, and several other items made available to the public, including books, posters, shirts and buttons. All proceeds from the sale of official USA for Africa merchandise went directly to the famine relief fund. All of the merchandise sold well; the video cassette -- entitled We Are the World: The Video Event -- documented the making of the song, and became the ninth best - selling home video of 1985. All of the video elements were produced by Howard G. Malley and Craig B. Golin along with April Lee Grebb as the production supervisor. The music video showed the recording of "We Are the World '', and drew criticism from some. Michael Jackson joked before filming, "People will know it 's me as soon as they see the socks. 
Try taking footage of Bruce Springsteen 's socks and see if anyone knows who they belong to. '' Jackson was also criticized for filming and recording his solo piece privately, away from the other artists. The song was also promoted with a special edition of the American magazine Life. The publication had been the only media outlet permitted inside A&M Recording Studios on the night of January 28, 1985. All other press organizations were barred from reporting the events leading up to and during "We Are the World '' 's recording. Life ran a cover story of the recording session in its April 1985 edition of the monthly magazine. Seven members of USA for Africa were pictured on the cover: Bob Dylan, Bruce Springsteen, Cyndi Lauper, Lionel Richie, Michael Jackson, Tina Turner and Willie Nelson. Inside the magazine were photographs of the "We Are the World '' participants working and taking breaks. "We Are the World '' received worldwide radio coverage in the form of an international simultaneous broadcast later that year. Upon spinning the song on their local stations, Georgia radio disc jockeys, Bob Wolf and Don Briscar came up with the idea for a worldwide simulcast. They called hundreds of radio and satellite stations asking them to participate. On the morning of April 5, 1985 (Good Friday of that year) at 10: 25 am, over 8000 radio stations simultaneously broadcast the song around the world. As the song was broadcast, hundreds of people sang along on the steps of St. Patrick 's Cathedral in New York. The simultaneous radio broadcast of "We Are the World '' was repeated again the following Good Friday. "We Are the World '' gained further promotion and coverage on May 25, 1986, when it was played during a major benefit event held throughout the US. Hands Across America -- USA for Africa 's follow - up project -- was an event in which millions of people formed a human chain across the US. The event was held to draw attention to hunger and homelessness in the United States. "We Are the World '' 's co-writer, Michael Jackson, had wanted his song to be the official theme for the event. The other board members of USA for Africa outvoted the singer, and it was instead decided that a new song would be created and released for the event, titled "Hands Across America ''. When released, the new song did not achieve the level of success that "We Are the World '' did, and the decision to use it as the official theme for the event led to Jackson -- who co-owned the publishing rights to "We Are the World '' -- resigning from the board of directors of USA for Africa. Four months after the release of "We Are the World '', USA for Africa had taken in almost $10.8 million (equivalent to $24 million today). The majority of the money came from record sales within the US. Members of the public also donated money -- almost $1.3 million within the same time period. In May 1985, USA for Africa officials estimated that they had sold between $45 million and $47 million worth of official merchandise around the world. Organizer Ken Kragen announced that they would not be distributing all of the money at once. Instead, he revealed that the foundation would be looking into finding a long - term solution for Africa 's problems. "We could go out and spend it all in one shot. Maybe we 'd save some lives in the short term but it would be like putting a Band - Aid over a serious wound. 
'' Kragen noted that experts had predicted that it would take at least 10 to 20 years to make a slight difference to Africa 's long - term problems. In June 1985, the first USA for Africa cargo jet carrying food, medicine and clothing departed for Ethiopia and Sudan. It stopped en route in New York, where 15,000 T - shirts were added to the cargo. Included in the supplies were high - protein biscuits, high - protein vitamins, medicine, tents, blankets and refrigeration equipment. Harry Belafonte, representing the USA for Africa musicians, visited Sudan in the same month. The trip was his last stop on a four - nation tour of Africa. Tanzanian Prime Minister Salim Ahmed Salim greeted and praised Belafonte, telling him, "I personally and the people of Tanzania are moved by this tremendous example of human solidarity. '' One year after the release of "We Are the World '', organizers noted that $44.5 million had been raised for USA for Africa 's humanitarian fund. They stated that they were confident that they would reach an initial set target of $50 million (equivalent to $109 million in 2017). By October 1986, it was revealed that their $50 million target had been met and exceeded; CBS Records gave USA for Africa a check for $2.5 million, bringing the total to $51.2 million. USA for Africa 's Hands Across America event had also raised a significant amount of money -- approximately $24.5 million for the hungry in the US. Since its release, "We Are the World '' has raised over $63 million (equivalent to $138 million today) for humanitarian causes. Ninety percent of the money was pledged to African relief, both long and short term. The long - term initiative included efforts in birth control and food production. The remaining 10 percent of funds was earmarked for domestic hunger and homeless programs in the US. From the African fund, over 70 recovery and development projects were launched in seven African nations. Such projects included aid in agriculture, fishing, water management, manufacturing and reforestation. Training programs were also developed in the African countries of Mozambique, Senegal, Chad, Mauritania, Burkina Faso and Mali. Elias Kifle Maraim Beyene, a famine survivor from Ethiopia, asked after Michael Jackson 's death about his memory of the singer, recalled: "I wo n't ever forget Michael Jackson because his contribution to the song We are the World had a very significant effect on my life. I am 50 now but 25 years ago I was living in Addis Ababa, Ethiopia, which at that time was suffering from a long drought and famine. It was a terrible situation. Lots of people became sick and many more died. Around one million people in all were killed by the famine. In 1984 Michael Jackson, along with a number of other leading musicians, made the song We are the World to raise money for Africa. We received a lot of aid from the world and I was one of those who directly benefitted from it. The wheat flour that was distributed to the famine victims was different to the usual cereal we bought at the market. We baked a special bread from it. The local people named the bread after the great artist and it became known as Michael Bread. It was soft and delicious. When you have been through such hard times you never forget events like this. If you speak to anyone who was in Addis Ababa at that time they will all know what Michael Bread is and I know I will remember it for the rest of my life. (...) 
'' "We Are the World '' has been performed live by members of USA for Africa on several occasions both together and individually. One of the earliest such performances came in 1985, during the rock music concert Live Aid, which ended with more than 100 musicians singing the song on stage. Harry Belafonte and Lionel Richie made surprise appearances for the live rendition of the song. Michael Jackson would have joined the artists, but was "working around the clock in the studio on a project that he 's made a major commitment to '', according to his press agent, Norman Winter. An inaugural celebration was held for US President - elect Bill Clinton in January 1993. The event was staged by Clinton 's Hollywood friends at the Lincoln Memorial and drew hundreds of thousands of people. Aretha Franklin, LL Cool J, Michael Bolton and Tony Bennett were among some of the musicians in attendance. Said Jones, "I 've never seen so many great performers come together with so much love and selflessness. '' The celebration included a performance of "We Are the World '', which involved Clinton, his daughter Chelsea, and his wife Hillary singing the song along with USA for Africa 's Kenny Rogers, Diana Ross and Michael Jackson. The New York Times ' Edward Rothstein commented on the event, stating, "The most enduring image may be of Mr. Clinton singing along in ' We Are the World ', the first President to aspire, however futilely, to hipness. '' As a prelude to his song "Heal the World '', "We Are the World '' was performed as an interlude during two of Michael Jackson 's tours, the Dangerous World Tour from 1992 to 1993 and the HIStory World Tour from 1996 to 1997. Jackson briefly perform the song with a chorus at the 2006 World Music Award in London, in his last live public performance. Jackson planned to use the song for his This Is It comeback concerts at The O2 Arena in London from 2009 to 2010, but the shows were cancelled due to his sudden death. Michael Jackson died in June 2009, after suffering a cardiac arrest. His memorial service was held several days later on July 7, and was reported to have been viewed by more than three billion people. The finale of the event featured group renditions of the Jackson anthems "We Are the World '' and "Heal the World ''. The singalong of "We Are the World '' was led by Darryl Phinnessee, who had worked with Jackson since the late 1980s. It also featured co-writer Lionel Richie and Jackson 's family, including his children. Following the performance, "We Are the World '' re-entered the US charts for the first time since its 1985 release. The song debuted at number 50 on Billboard 's Hot Digital Songs chart. On January 12, 2010, Haiti was struck by a magnitude - 7.0 earthquake, the country 's most severe earthquake in over 200 years. The epicenter of the quake was just outside the Haitian capital Port - au - Prince. Over 230,000 civilians have been confirmed dead by the Haitian government because of the disaster and around 300,000 have been injured. Approximately 1.2 million people are homeless and it has been reported that the lack of temporary shelter may lead to the outbreak of disease. To raise money for earthquake victims, a new celebrity version of "We Are the World '' was recorded on February 1, 2010, and released on February 12, 2010. Over 75 musicians were involved in the remake, which was recorded in the same studio as the 1985 original. The new version features revised lyrics as well as a rap segment pertaining to Haiti. 
Michael Jackson 's younger sister Janet duets with her late brother on the track, as per a request from their mother Katherine. In the video and on the track, archive material of Michael Jackson is used from the original 1985 recording. On February 20, 2010, a non-celebrity remake, "We Are the World 25 for Haiti (YouTube Edition) '', was posted to the video sharing website YouTube. Internet personality and singer - songwriter Lisa Lavie conceived and organized the Internet collaboration of 57 unsigned or independent YouTube musicians geographically distributed around the world. Lavie 's 2010 YouTube version, a cover of the 1985 original, excludes the rap segment and minimizes the Auto - tune that characterizes the 2010 celebrity remake. Another 2010 remake of the original is the Spanish - language "Somos El Mundo ''. It was written by Emilio Estefan and his wife Gloria Estefan, and produced by Emilio, Quincy Jones and Univision Communications, the company that funded the project. "We Are the World '' has been recognized as a politically important song, which "affected an international focus on Africa that was simply unprecedented ''. It has been credited with creating a climate in which musicians from around the world felt inclined to follow. According to The New York Times ' Stephen Holden, since the release of "We Are the World '', it has been noted that movement has been made within popular music to create songs that address humanitarian concerns. "We Are the World '' was also influential in subverting the way music and meaning were produced, showing that musically and racially diverse musicians could work together both productively and creatively. Ebony described the January 28 recording session, in which Quincy Jones brought together a multi-racial group, as being "a major moment in world music that showed we can change the world ''. "We Are the World '', along with Live Aid and Farm Aid, demonstrated that rock music had become more than entertainment, but a political and social movement. Journalist Robert Palmer noted that such songs and events had the ability to reach people around the world, send them a message, and then get results. Since the release of "We Are the World '', and the Band Aid single that influenced it, numerous songs have been recorded in a similar fashion, with the intent to aid disaster victims throughout the world. One such example involved a supergroup of Latin musicians billed as "Hermanos del Tercer Mundo '', or "Brothers of the Third World ''. Among the supergroup of 62 recording artists were Julio Iglesias, José Feliciano and Sérgio Mendes. Their famine relief song was recorded in the same studio as "We Are the World ''. Half of the profits raised from the charity single was pledged to USA for Africa. The rest of the money was to be used for impoverished Latin American countries. Another notable example is the 1989 cover of the Deep Purple song "Smoke on the Water '' by a supergroup of hard rock, prog rock, and heavy metal musicians collaborating as Rock Aid Armenia to raise money for victims of the devastating 1988 Armenian earthquake. The 20th anniversary of "We Are the World '' was celebrated in 2005. Radio stations around the world paid homage to USA for Africa 's creation by simultaneously broadcasting the charity song. In addition to the simulcast, the milestone was marked by the release of a two - disc DVD called We Are the World: The Story Behind the Song. 
Ken Kragen asserted that the reason behind the simulcast and DVD release was not for USA for Africa to praise themselves for doing a good job, but to "use it to do some more good (for the original charity). That 's all we care about accomplishing. '' Harry Belafonte also commented on the 20th anniversary of the song. The entertainer acknowledged that "We Are the World '' had "stood the test of time ''; anyone old enough to remember it can still at least hum along.
a picture is worth a thousand words meaning in tamil
A picture is worth a thousand words - wikipedia "A picture is worth a thousand words '' is an English - language idiom. It refers to the notion that a complex idea can be conveyed with just a single still image, or that an image of a subject conveys its meaning or essence more effectively than a description does. The expression "Use a picture. It 's worth a thousand words. '' appears in a 1911 newspaper article quoting newspaper editor Tess Flanders discussing journalism and publicity. A similar phrase, "One Look Is Worth A Thousand Words '', appears in a 1913 newspaper advertisement for the Piqua Auto Supply House of Piqua, Ohio. An early use of the exact phrase appears in a 1918 newspaper advertisement for the San Antonio Light, which says: One of the Nation 's Greatest Editors Says: One Picture is Worth a Thousand Words The San Antonio Light 's Pictorial Magazine of the War Exemplifies the truth of the above statement -- judging from the warm... It is believed by some that the modern use of the phrase stems from an article by Fred R. Barnard in the advertising trade journal Printers ' Ink, promoting the use of images in advertisements that appeared on the sides of streetcars. The December 8, 1921, issue carries an ad entitled, "One Look is Worth A Thousand Words. '' Another ad by Barnard appears in the March 10, 1927, issue with the phrase "One Picture Worth Ten Thousand Words '', where it is labeled a Chinese proverb. The Home Book of Proverbs, Maxims, and Familiar Phrases quotes Barnard as saying he called it "a Chinese proverb, so that people would take it seriously. '' Nonetheless, the proverb soon after became popularly attributed to Confucius. The actual Chinese expression "Hearing something a hundred times is n't better than seeing it once '' (百闻不如一见, pinyin: bǎi wén bù rú yī jiàn) is sometimes introduced as an equivalent, as Watts 's "One showing is worth a hundred sayings ''. This was published as early as 1966, in a book on engineering design discussing persuasion and selling. In March 1911, addressing the Syracuse Advertising Men 's Club, Arthur Brisbane said: "Use a picture. It 's worth a thousand words. '' Despite this modern origin of the popular phrase, the sentiment has been expressed by earlier writers. For example, the Russian writer Ivan Turgenev wrote (in Fathers and Sons in 1861), "The drawing shows me at one glance what might be spread over ten pages in a book. '' The quote is sometimes attributed to Napoleon Bonaparte, who said "A good sketch is better than a long speech '' (French: Un bon croquis vaut mieux qu'un long discours). While this is sometimes translated today as "A picture is worth a thousand words, '' this translation does not predate the phrase 's common use in English. The phrase has been spoofed by computer scientist John McCarthy to make the opposite point: "As the Chinese say, 1001 words is worth more than a picture. ''
who made the phrase curiosity killed the cat
Curiosity killed the cat - wikipedia "Curiosity killed the cat '' is a proverb used to warn of the dangers of unnecessary investigation or experimentation. A less frequently - seen rejoinder to "curiosity killed the cat '' is "but satisfaction brought it back ''. The original form of the proverb, now little used, was "Care killed the cat ''. In this instance, "care '' was defined as "worry '' or "sorrow. '' The earliest printed reference to the original proverb is attributed to the British playwright Ben Jonson in his 1598 play, Every Man in His Humour, which was performed first by William Shakespeare. ... Helter skelter, hang sorrow, care will kill a cat, up - tails all, and a pox on the hangman. Shakespeare used a similar quote in his circa 1599 play, Much Ado About Nothing: What, courage man! what though care killed a cat, thou hast mettle enough in thee to kill care. The proverb remained the same until at least 1898. Ebenezer Cobham Brewer included this definition in his Dictionary of Phrase and Fable: Care killed the Cat. It is said that "a cat has nine lives, '' yet care would wear them all out. The origin of the modern variation is unknown. The earliest known printed reference to the actual phrase "Curiosity killed the cat '' is in James Allan Mair 's 1873 compendium A handbook of proverbs: English, Scottish, Irish, American, Shakesperean, and scriptural; and family mottoes, where it is listed as an Irish proverb on page 34. In the 1902 edition of Proverbs: Maxims and Phrases, by John Hendricks Bechtel, the phrase "Curiosity killed the cat '' is the lone entry under the topic "Curiosity '' on page 100. O. Henry 's 1909 short story "Schools and Schools '' includes a mention that suggests knowledge of the proverb had become widespread by that time: Curiosity can do more things than kill a cat; and if emotions, well recognized as feminine, are inimical to feline life, then jealousy would soon leave the whole world catless. The actual phrase appeared as the headline to a story in The Washington Post on 4 March 1916 (page 6): CURIOSITY KILLED THE CAT. Four Departments of New York City Government Summoned to Rescue Feline. From the New York World. Curiosity, as you may recall -- On the fifth floor of the apartment house at 203 West 130th street lives Miss Mable Godfrey. When she came to the house about seven months ago she brought Blackie, a cat of several years ' experience of life. The cat seldom left the apartment. He was a hearth cat, not a fence cat, and did not dearly love to sing. In other respects he was normal and hence curious. Last Tuesday afternoon when Miss Godfrey was out Blackie skipped into the grate fireplace in a rear room. He had done this many times before. But he had not climbed up the flue to the chimney. This he did Tuesday. Blackie there remained, perched on the top of the screen separating the apartment flue from the main chimney, crying for assistance. Miss Godfrey, returning, tried to induce her pet to come down. If you are experienced in felinity, you know that Blackie did n't come down. On Wednesday the cat, curiosity unsatisfied, tried to climb higher -- and fell to the first floor. His cries could still be heard by Miss Godfrey; who, to effect Blackie 's rescue, communicated with the following departments: 1. Police department. 2. Fire department. 3. Health department. 4. Building department. 5. Washington Heights court. Among them they lowered a rope to Blackie. But it availed neither the cat nor them anything. 
Thursday morning, just before noon, a plumber opened the rear wall back of the chimney. Blackie was taken out. His fall had injured his back. Ten minutes later Blackie died. Despite these earlier appearances, the proverb has been wrongly attributed to Eugene O'Neill, who included the variation, "Curiosity killed a cat! '' in his play Diff'rent from 1920: BENNY -- (with a wink) Curiosity killed a cat! Ask me no questions and I 'll tell you no lies.
when does autumn start in the northern hemisphere
Autumn - wikipedia Autumn, also known as fall in American and Canadian English, is one of the four temperate seasons. Autumn marks the transition from summer to winter, in September (Northern Hemisphere) or March (Southern Hemisphere), when the duration of daylight becomes noticeably shorter and the temperature cools down considerably. One of its main features is the shedding of leaves from deciduous trees. Some cultures regard the autumnal equinox as "mid-autumn '', while others with a longer temperature lag treat it as the start of autumn. Meteorologists (and most of the temperate countries in the southern hemisphere) use a definition based on Gregorian calendar months, with autumn being September, October, and November in the northern hemisphere, and March, April, and May in the southern hemisphere. In North America, autumn is usually considered to start with the September equinox (21 to 24 September) and end with the winter solstice (21 or 22 December). Popular culture in North America associates Labor Day, the first Monday in September, as the end of summer and the start of autumn; certain summer traditions, such as wearing white, are discouraged after that date. As daytime and nighttime temperatures decrease, trees shed their leaves. In traditional East Asian solar term, autumn starts on or around 8 August and ends on or about 7 November. In Ireland, the autumn months according to the national meteorological service, Met Éireann, are September, October and November. However, according to the Irish Calendar, which is based on ancient Gaelic traditions, autumn lasts throughout the months of August, September and October, or possibly a few days later, depending on tradition. In Australia and New Zealand, autumn officially begins on 1 March and ends on 31 May. The word autumn comes from the ancient Etruscan root autu - and has within it connotations of the passing of the year. It was borrowed by the neighbouring Romans, and became the Latin word autumnus. After the Roman era, the word continued to be used as the Old French word autompne (automne in modern French) or autumpne in Middle English, and was later normalised to the original Latin. In the Medieval period, there are rare examples of its use as early as the 12th century, but by the 16th century, it was in common use. Before the 16th century, harvest was the term usually used to refer to the season, as it is common in other West Germanic languages to this day (cf. Dutch herfst, German Herbst and Scots hairst). However, as more people gradually moved from working the land to living in towns, the word harvest lost its reference to the time of year and came to refer only to the actual activity of reaping, and autumn, as well as fall, began to replace it as a reference to the season. The alternative word fall for the season traces its origins to old Germanic languages. The exact derivation is unclear, with the Old English fiæll or feallan and the Old Norse fall all being possible candidates. However, these words all have the meaning "to fall from a height '' and are clearly derived either from a common root or from each other. The term came to denote the season in 16th - century England, a contraction of Middle English expressions like "fall of the leaf '' and "fall of the year ''. During the 17th century, English emigration to the British colonies in North America was at its peak, and the new settlers took the English language with them. 
While the term fall gradually became obsolete in Britain, it became the more common term in North America. The name backend, a once common name for the season in Northern England, has today been largely replaced by the name autumn. Association with the transition from warm to cold weather, and its related status as the season of the primary harvest, has dominated its themes and popular images. In Western cultures, personifications of autumn are usually pretty, well - fed females adorned with fruits, vegetables and grains that ripen at this time. Many cultures feature autumnal harvest festivals, often the most important on their calendars. Still extant echoes of these celebrations are found in the autumn Thanksgiving holiday of the United States and Canada, and the Jewish Sukkot holiday with its roots as a full - moon harvest festival of "tabernacles '' (living in outdoor huts around the time of harvest). There are also the many North American Indian festivals tied to harvest of ripe foods gathered in the wild, the Chinese Mid-Autumn or Moon festival, and many others. The predominant mood of these autumnal celebrations is a gladness for the fruits of the earth mixed with a certain melancholy linked to the imminent arrival of harsh weather. This view is presented in English poet John Keats ' poem To Autumn, where he describes the season as a time of bounteous fecundity, a time of ' mellow fruitfulness '. While most foods are harvested during the autumn, foods particularly associated with the season include pumpkins (which are integral parts of both Thanksgiving and Halloween) and apples, which are used to make the seasonal beverage apple cider. Autumn, especially in poetry, has often been associated with melancholia. The possibilities of summer are gone, and the chill of winter is on the horizon. Skies turn grey, the amount of usable daylight drops rapidly, and many people turn inward, both physically and mentally. It has been referred to as an unhealthy season. Similar examples may be found in Irish poet William Butler Yeats ' poem The Wild Swans at Coole where the maturing season that the poet observes symbolically represents his own ageing self. Like the natural world that he observes, he too has reached his prime and now must look forward to the inevitability of old age and death. French poet Paul Verlaine 's "Chanson d'automne '' ("Autumn Song '') is likewise characterised by strong, painful feelings of sorrow. Keats ' To Autumn, written in September 1819, echoes this sense of melancholic reflection, but also emphasises the lush abundance of the season. Autumn is associated with Halloween (influenced by Samhain, a Celtic autumn festival), and with it a widespread marketing campaign that promotes it. Halloween is in autumn in the northern hemisphere. The television, film, book, costume, home decoration, and confectionery industries use this time of year to promote products closely associated with such a holiday, with promotions going from early September to 31 October, since their themes rapidly lose strength once the holiday ends, and advertising starts concentrating on Christmas. Autumn also has a strong association with the end of summer holiday and the start of a new school year, particularly for children in primary and secondary education. "Back to School '' advertising and preparations usually occurs in the weeks leading to the beginning of autumn. Easter is in autumn in the southern hemisphere. 
Thanksgiving Day is a national holiday celebrated in Canada, in the United States, in some of the Caribbean islands and in Liberia. Thanksgiving is celebrated on the second Monday of October in Canada and on the fourth Thursday of November in the United States, and around the same part of the year in other places. Similarly named festival holidays occur in Germany and Japan. Television stations and networks, particularly in North America, traditionally begin their regular seasons in autumn, with new series and new episodes of existing series debuting mostly during late September or early October (series that debut outside the fall season are usually known as mid-season replacements). A sweeps period takes place in November to measure Nielsen Ratings. American football is played almost exclusively in the autumn months; at the high school level, seasons run through September and October, with some playoff games and holiday rivalry contests being played as late as Thanksgiving. College football 's regular season runs from September through November, while the main professional circuit, the National Football League, plays from September through December. Summer sports, such as stock car racing and Major League Baseball, wrap up their seasons in early autumn; MLB 's championship World Series is known popularly as the "Fall Classic ''. (Amateur baseball is usually finished by August.) Likewise, professional winter sports, such as professional ice hockey, basketball and most leagues of soccer football in Europe, are in the early stages of their seasons during autumn; American college basketball and college ice hockey play teams outside their athletic conferences during the late autumn before their in - conference schedules begin in winter. The Christian religious holidays of All Saints ' Day and All Souls ' Day are observed in autumn in the Northern hemisphere. Since 1997, Autumn has been one of the top 100 names for girls in the United States. In Indian mythology, autumn is considered to be the preferred season for the goddess of learning Saraswati, who is also known by the name of "goddess of autumn '' (Sharada). In Asian mysticism, Autumn is associated with the element of metal, and subsequently with the colour white, the White Tiger of the West, and death and mourning. In the United States, Labor Day is a public holiday celebrated on the first Monday in September. Although colour change in leaves occurs wherever deciduous trees are found, coloured autumn foliage is noted in various regions of the world: most of North America, Eastern Asia (including China, Korea, and Japan), Europe, the forest of Patagonia, eastern Australia and New Zealand 's South Island. Eastern Canada and New England are famous for their autumnal foliage, and this attracts major tourism (worth billions of US dollars) for the regions. Notable artistic depictions of the season include Otoño by Frederic Edwin Church (1875, Museo Thyssen - Bornemisza); "Autumn Leaves '' by John Everett Millais; Early Autumn by Qian Xuan (13th century), depicting decaying lotus leaves and dragonflies hovering over stagnant water; Autumn by Giuseppe Arcimboldo (1573); Herbst (Autumn) by Meinolf Wewel; Autumn (1896) by Art Nouveau artist Alphonse Mucha; the personification of Autumn from an 1871 Currier and Ives print; an autumn landscape in Rybiniszki, Latvia, a watercolor by Stanisław Masłowski (1902, National Museum in Warsaw, Poland); and a 1905 print by Maxfield Parrish illustrating Keats ' poem To Autumn.
who plays the squid man in pirates of the caribbean
Davy Jones (Pirates of the Caribbean) - wikipedia Davy Jones is a fictional character in the Pirates of the Caribbean film series, portrayed by Bill Nighy. He appears in the second film Dead Man 's Chest and returns in the third film At World 's End. He is the captain of the Flying Dutchman (based on the ghost ship of the same name). The computer - generated imagery used to complete Jones was named by Entertainment Weekly in 2007 as the tenth favorite computer - generated film character in film history, behind King Kong. The work on Davy Jones by Industrial Light and Magic earned them the 2006 Academy Award for Visual Effects for Dead Man 's Chest. The character is based on the superstition of Davy Jones ' Locker. Before officially casting Bill Nighy, producers also met with Jim Broadbent, Iain Glen and Richard E. Grant for the role. Like that of the entire crew of the Flying Dutchman (except "Bootstrap Bill ''), Davy Jones 's physical appearance is completely computer - generated. Nighy 's performance was recorded using motion capture during actual filming on the set, with Nighy wearing several markers on both a grey suit and his face, rather than in a studio during post-production. Nighy also wore make - up around his eyes, since the original plan was to use his real eyes, if necessary to get the proper lighting, in the digital character; he also wore make - up on his lips and around his mouth, to assist in the motion capture of his character 's Scottish accent. Briefly during the third film, Jones appears as a human for a single scene, played by Nighy in costume. Several reviewers have in fact mistakenly identified Nighy as wearing prosthetic makeup or a latex mask due to the computer - generated character 's photorealism. Davy Jones ' physique was designed by the films ' producers to be a mixture of various aquatic flora and fauna features. Jones ' most striking feature is his cephalopod - like head, with octopus - like appendages giving the illusion of a thick beard. The major features of Davy Jones ' physique bear a strong resemblance to the mythical Cthulhu created by H.P. Lovecraft. In Lovecraft 's short story "The Call of Cthulhu '' he describes the creature as "... a monster of vaguely anthropoid outline, but with an octopus - like head whose face was a mass of feelers, a scaly, rubbery - looking body, prodigious claws on hind and fore feet... '' It is revealed in the bonus features of the Special Edition DVD that the face 's color was partly inspired by a coffee - stained styrofoam cup which was then scanned into ILM 's computers to be used as the skin. The character of Davy Jones also has a crustacean - style claw for his left arm, a long tentacle in place of the index finger on his right hand, and the right leg of a crab (resembling a pegleg). He also speaks with a clearly distinguishable, albeit thick, Scottish accent that 's slightly altered to account for his lack of a nose, and presumably, a nasal cavity and / or sinuses. Originally, director Gore Verbinski wanted Jones to be Dutch, as he is the captain of the "Dutch - man ''. Nighy however responded, "I do n't do Dutch. So I decided on Scottish. '' Nighy later revealed that Scottish sitcom Still Game influenced his choice of accent, stating: "I had to find an accent no one else had. Although Alex Norton is Scottish, mine was slightly different. We wanted something that was distinctive and authoritative... I have seen Still Game and I am a fan. 
The sort of extremity of the accent was inspired in that area. '' Davy Jones, a human and a great sailor, fell in love with Calypso, a sea goddess. She entrusted him with the task of ferrying the souls of those who died at sea to the next world. Calypso gave him the Flying Dutchman to accomplish this task. She swore that after ten years, she would meet him and they would spend one day together before he returned to his duties. However, when Jones returned to shore after ten years, Calypso failed to appear. Believing Calypso had betrayed him, a heartbroken and enraged Davy Jones turned the Pirate Brethren against her, saying that if she were removed from the world, they would be able to claim the seas for themselves. They assembled in the First Brethren Court and Jones taught them how to imprison her into her human form. Despite betraying her, Jones still loved Calypso, and in despair and guilt for what he had done, he carved out his own heart from his chest and placed it in the "Dead Man 's Chest ''. The Chest was sealed and placed within a larger wooden chest, along with Jones ' numerous love letters to Calypso and all other items having to do with her, except his matching musical locket. The chest was then buried on Isla Cruces. Jones kept the chest 's key with him at all times. With Calypso gone, Jones abandoned his duties and returned to the Seven Seas. As a result of this, Jones gradually became monstrous, his physical appearance merging with various aquatic fauna. Sailors everywhere would fear him to the death, for Davy Jones had turned fierce and cruel, with an insatiable taste for all things brutal. Jones recruits dying sailors by promising them a reprieve from death in exchange for 100 years of service aboard the Dutchman. He comes to command the Kraken, a feared mythological sea monster. In the book series about Jack Sparrow 's earlier adventures, Davy Jones shows interest in the Sword of Cortes, also sought by Jack. He is a minor character, but appears in the seventh book as Jack and his crew encounter the Flying Dutchman. Jones also appears in the prequel book about Jack 's first years as a captain. He helps the Brethren Court to identify the traitor among them, who turns out to be Borya Palachnik, the Pirate Lord of the Caspian Sea. Before the events of the first film, Davy Jones approaches Sparrow with a deal: Jones will raise the Black Pearl back from Davy Jones ' Locker, allowing Sparrow to be captain for 13 years if Sparrow agrees to serve on the Dutchman for 100 years. This event, referenced in the films, also appears in the book series. Davy Jones first appears in the second film, Dead Man 's Chest, in which he attempts to collect on his bargain with Jack Sparrow: Davy Jones raised the Black Pearl from the sea for Sparrow. In exchange for being captain for 13 years, Sparrow promised Jones his soul. Sparrow argues that he was only captain for two years before Hector Barbossa committed mutiny. Jones rejects this explanation, explaining that despite the mutiny, Jack still gave himself the title "Captain ''. Sparrow then attempts to escape the deal by providing Will Turner as a substitute for himself. Jack strikes a new deal with Jones; Jack will be spared enslavement on the Dutchman if he brings Jones one hundred souls to replace his own within the next three days. Jones accepts, removes the black spot from Jack 's hand, and retains Will, keeping him as a "good faith payment. '' While on the Dutchman, Will challenges Jones at a game of liar 's dice. 
They wager Will 's soul for an eternity of service against the key to the Dead Man 's Chest. Bootstrap Bill joins the game and purposefully loses to save Will. During the game, Will learns where Jones keeps the key. The next morning, Jones realizes the key is gone and summons the Kraken to destroy the ship carrying Turner, who actually survives. The Dutchman then sails to Isla Cruces to stop Sparrow from getting the Chest. Arriving, Jones sends his crew to retrieve the Chest; they return to him with it. The Dutchman then chases after the Black Pearl, but is outrun. Jones summons the Kraken, which drags Jack Sparrow and the Pearl to Davy Jones 's Locker. He afterwards opens the Chest only to find his heart missing; it having been taken by James Norrington, who gives it to Cutler Beckett of the East India Trading Company. In the third film At World 's End, Jones is under the control of Cutler Beckett and the East India Trading Company. Beckett possesses the heart, and threatens to have soldiers shoot it if Jones disobeys. Beckett orders Jones to sink pirate ships, but is infuriated when Jones leaves no survivors; Beckett wants prisoners to interrogate about the Brethren Court. Beckett orders Jones to kill the Kraken. Later, he orders Jones attack the Pirate Lord Sao Feng; Jones subsequently kills Sao and captures Elizabeth Swann, who had been named captain by Sao Feng upon his death. When Admiral James Norrington dies on board the Dutchman helping Elizabeth escape, Jones claims Norrington 's sword (originally crafted by Will Turner). Jones then attempts mutiny against the EITC. However, Mercer successfully defends the Chest, forcing Jones to continue under Beckett 's service. Beckett later summons Jones to his ship, the Endeavour. Jones confronts Will Turner and divulges his past with Calypso, while learning of Jack Sparrow 's escape from the Locker. The three men then arrive at Shipwreck Cove. Jones confronts Calypso, locked in the brig of the Black Pearl. The two former lovers discuss Calypso 's betrayal and Jones 's curse. Calypso temporarily lifts his curse, allowing him to be seen briefly in his original human form. Jones tells her that his heart will always belong to her. Calypso, unaware that Jones betrayed her to the first Brethren Court, says that after her release, she will fully give her love to him. Jones participates in a parley in which the EITC trades Turner for Sparrow. After Calypso is freed, Will reveals that Jones betrayed her. She escapes, refusing to aid either the pirates or Jones. Her fury creates a monstrous maelstrom. The Dutchman and the Pearl enter it and battle. During the battle Jones kills Mercer and retrieves the key to the Chest. Sparrow and Jones fight for control of the chest in the rigging of the Dutchman. Jack acquires both the Chest and the key while Jones battles Will and Elizabeth. Jones quickly overpowers Elizabeth, and is subsequently impaled through the back by Will. Jones, unharmed, holds Will at sword - point. Jack threatens to stab the heart, and Jones cruelly stabs Will. Remembering Will as his son, Bootstrap Bill briefly fights and overpowers Jones, but is quickly defeated. Jones attempts to kill Bootstrap, but Jack helps Will stab the heart. Jones then calls out for Calypso, before tumbling to his death in the maelstrom. 
In the post-credits scene of the fifth film Dead Men Tell No Tales, Will (no longer bound to the Flying Dutchman after the destruction of the Trident of Poseidon) and Elizabeth are sleeping in their bed together, when their room is entered by the silhouette of an apparently resurrected Davy Jones. Just as Jones raises his clawed arm to strike at the couple, Will awakens and the room is empty. Assuming Jones 's appearance to be a nightmare, Will goes back to sleep, oblivious to the presence of barnacles on the floor amid a small puddle of seawater, revealing it was no dream and Davy Jones is really alive. In the films, Jones possesses a locket that plays a distinct melody, and he is known to play the same melody on his pipe organ. This melody is also his character 's theme, and can be heard throughout the film 's score. It comes in two variations: the soundtrack version and the film version. The melody of the soundtrack version is heard only in Dead Man 's Chest. The film version is played in both films multiple times, and is heard last during the climax of the film. Because Jones and Calypso own matching lockets, Tia Dalma 's theme is similar to that of Davy Jones, albeit in a different arrangement. The theme is also heard briefly after Jones ' appearance in Dead Men Tell No Tales. Davy Jones possesses a large number of supernatural abilities. Jones is capable of teleportation on board the Flying Dutchman and the Black Pearl, and he can pass through solid objects. Jones is immortal, capable of surviving injuries that would be fatal to mortals. However, he is not impervious to pain, as demonstrated when Jack was able to cut off some of his facial tentacles during their battle. Jones can also track any soul that is owed to him using the black spot, with which any member of his crew can mark a victim. Jones also has the power to control and call forth the Kraken, a sea monster which can destroy ships upon command. In a physical confrontation, Jones ' physiology also gives him various advantages; his facial tentacles allow him to manipulate objects while leaving his hands free, such that he is able to restrain Mercer 's arms with his hands while smothering him with his tentacles. His tentacle finger allows him to exert a much stronger grip and control his sword more quickly and precisely than a normal hand could, and his crab claw hand possesses enough strength to bend or sever sword blades. He also demonstrates more general superhuman strength when he throws Jack off the crossbeam using only one arm. Davy Jones was part of Series One of the Pirates of the Caribbean: Dead Man 's Chest action figure set produced by NECA. Although the initial run of figures had a sticker on the box that proclaimed that the figure came with the Dead Man 's Chest and Jones ' heart, both props (as well as the key) were released with the Bootstrap Bill figure in Series Two. Jones also made an appearance as a smaller figure with crew members Angler, Wheelback and Penrod. Jones was issued as a plush toy as part of Sega 's "Dead Man 's Chest '' plush assortment. Jones was also part of a 3 - figure pack as a 3.75 inch figure with Hector Barbossa and a limited edition gold Jack Sparrow for At World 's End. Davy Jones and his ship, the Flying Dutchman, were produced as a Mega Bloks set for the movies Dead Man 's Chest and Pirates of the Caribbean: At World 's End. 
His minifigure counterpart in the Dead Man 's Chest set has more bluish tentacles than his counterpart in the At World 's End set, which has more greenish tentacles. He was made as a Lego minifigure in November 2011, with 4184 Black Pearl. Children 's and adult Halloween costumes were released for Halloween 2007. Davy Jones was released as a PEZ dispenser, along with Jack Sparrow and Will Turner. Hot Toys also announced plans to make a 1: 6 scale version of Davy Jones, which became available in Q2 2008 and is widely regarded as more detailed than those produced by NECA.
what is the active ingredient in australian dream cream
Histamine dihydrochloride - wikipedia Histamine dihydrochloride (INN, trade name Ceplene) is a salt of histamine which is used as a drug for the prevention of relapse in patients diagnosed with acute myeloid leukemia (AML). It is an FDA - approved active ingredient for topical analgesic use and is available in such products as Australian Dream Cream, which is used for the temporary relief of minor aches and pains of muscles and joints associated with arthritis, simple backache, bruises, sprains, and strains. Histamine dihydrochloride is administered in conjunction with low doses of the immune - activating cytokine interleukin - 2 (IL - 2) in the post-remission phase of AML, i.e. when patients have completed the initial chemotherapy. This combination has been reported to significantly reduce the risk of relapse in AML. The effect is particularly pronounced in patients in their first remission who are below the age of 60. The combination of histamine dihydrochloride and interleukin - 2 was approved for use in AML patients within the European Union in October 2008 and will be marketed in the EU by the Swedish pharmaceutical company Meda. The drug is also available through a named patient program in several other countries (excluding the US). Histamine dihydrochloride acts by improving the immune - enhancing properties of IL - 2, and laboratory studies have shown that this combination can induce immune - mediated killing of leukemic cells. The treatment (in the form of subcutaneous injections) is given in 3 - week cycles by the patients at home for 18 months, thus coinciding with the period of highest relapse risk. The side effects include transient flushing and headache, whereas IL - 2 may induce low - grade fever and inflammation at the site of injection. Histamine dihydrochloride was developed by researchers at the University of Gothenburg, Sweden.
who was found guilty in the boston massacre trial
Boston Massacre - wikipedia The Boston Massacre, known as the Incident on King Street by the British, was an incident on March 5, 1770, in which British Army soldiers shot and killed five people while under attack by a mob. The incident was heavily publicized by leading Patriots, such as Paul Revere and Samuel Adams, to encourage rebellion against the British authorities. British troops had been stationed in Boston, capital of the Province of Massachusetts Bay, since 1768 in order to protect and support crown - appointed colonial officials attempting to enforce unpopular Parliamentary legislation. Amid ongoing tense relations between the population and the soldiers, a mob formed around a British sentry, who was subjected to verbal abuse and harassment. He was eventually supported by eight additional soldiers, who were subjected to verbal threats and repeatedly hit by clubs, stones and snowballs. They fired into the crowd, without orders, instantly killing three people and wounding others. Two more people died later of wounds sustained in the incident. The crowd eventually dispersed after Acting Governor Thomas Hutchinson promised an inquiry, but the crowd re-formed the next day, prompting the withdrawal of the troops to Castle Island. Eight soldiers, one officer, and four civilians were arrested and charged with murder. Defended by lawyer and future American president John Adams, six of the soldiers were acquitted, while the other two were convicted of manslaughter and given reduced sentences. The men found guilty of manslaughter were sentenced to branding on their hand. Depictions, reports, and propaganda about the event, notably the colored engraving produced by Paul Revere (shown at top - right), further heightened tensions throughout the Thirteen Colonies. Boston, the capital of the Province of Massachusetts Bay and an important shipping town, was a major center of resistance to unpopular acts of taxation by the British Parliament in the 1760s. In 1768, the Townshend Acts were placed upon the colonists, by which a variety of common items that were manufactured in Britain and exported to the colonies were subjected to import tariffs. Colonists objected that the Townshend Acts were a violation of the natural, charter, and constitutional rights of British subjects in the colonies. The Massachusetts House of Representatives began a campaign against the Townshend Acts by sending a petition to King George III asking for the repeal of the Townshend Revenue Act. The House also sent what became known as the Massachusetts Circular Letter to other colonial assemblies, asking them to join the resistance movement, and called for a boycott of merchants importing the affected goods. In Great Britain, Lord Hillsborough, who had recently been appointed to the newly created office of Colonial Secretary, was alarmed by the actions of the Massachusetts House. In April 1768 he sent a letter to the colonial governors in America, instructing them to dissolve the colonial assemblies if they responded to the Massachusetts Circular Letter. He also ordered Massachusetts Governor Francis Bernard to direct the Massachusetts House to rescind the letter. The house refused to comply. Boston 's chief customs officer, Charles Paxton, wrote to Hillsborough, asking for military support because "the Government is as much in the hands of the people as it was in the time of the Stamp Act. '' Commodore Samuel Hood responded by sending the fifty - gun warship HMS Romney, which arrived in Boston Harbor in May 1768. 
On June 10, 1768, customs officials seized Liberty, a sloop owned by leading Boston merchant John Hancock, on allegations that the ship had been involved in smuggling. Bostonians, already angry because the captain of Romney had been impressing local sailors, began to riot. Customs officials fled to Castle William for protection. Given the unstable state of affairs in Massachusetts, Hillsborough instructed General Thomas Gage, Commander - in - Chief, North America, to send "such Force as You shall think necessary to Boston ''. On October 1, 1768, the first of four British Army regiments began disembarking in Boston. Two regiments were removed from Boston in 1769, but the 14th and the 29th Regiments of Foot remained. The Journal of Occurrences, an anonymously written series of newspaper articles, chronicled clashes between civilians and soldiers while troops were stationed in Boston, feeding tensions with its sometimes exaggerated accounts of the events. Tensions rose markedly after Christopher Seider, "a young lad about eleven Years of Age '', was killed by a customs employee on February 22, 1770. Seider 's death was glorified in the Boston Gazette, and his funeral was described as one of the largest of the time in Boston. The killing and subsequent propaganda inflamed tensions, with gangs of colonists looking for soldiers to harass, and soldiers also on occasion looking for confrontation. On the evening of March 5, Private Hugh White, a British soldier, stood on guard duty outside the Custom house on King Street, today known as State Street. A young wigmaker 's apprentice named Edward Garrick called out to a British officer, Captain - Lieutenant John Goldfinch, that Goldfinch had not paid a bill due to Garrick 's master. Goldfinch had in fact settled his account and ignored the insult. Private White called out to Garrick that he should be more respectful of the officer. Garrick exchanged insults with Private White. Then, after Garrick started poking the officer in the chest with his finger, the officer left his post, challenged the boy, and struck him on the side of the head with his musket. As Garrick cried in pain, one of his companions, Bartholomew Broaders, began to argue with White. This attracted a larger crowd. Henry Knox, a 19 - year - old bookseller (who would later serve as a general in the revolution), came upon the scene and warned White, "if he fired he must die for it. '' As the evening progressed, the crowd around Private White grew larger and more boisterous. Church bells were rung, which usually signified a fire, bringing more people out. Over fifty Bostonians pressed around White, led by a mixed - race former slave named Crispus Attucks, throwing objects at the sentry and challenging him to fire his weapon. White, who had taken up a somewhat safer position on the steps of the Custom House, sought assistance. Runners alerted the nearby barracks and Captain Thomas Preston, the officer of the watch. According to his report, Preston dispatched a non-commissioned officer and six privates of the 29th Regiment of Foot, with fixed bayonets, to relieve White. The soldiers Preston sent were Corporal William Wemms, Hugh Montgomery, John Carroll, William McCauley, William Warren, and Matthew Kilroy. Accompanied by Preston, they pushed their way through the crowd. En route, Henry Knox, again trying to reduce tensions, warned Preston, "For God 's sake, take care of your men. 
If they fire, you must die. '' Captain Preston responded "I am aware of it. '' When they reached Private White on the custom house stairs, the soldiers loaded their muskets, and arrayed themselves in a semicircular formation. Preston shouted at the crowd, estimated to number between three and four hundred, to disperse. The crowd continued to press around the soldiers, taunting them by yelling, "Fire! '', by spitting at and throwing snowballs and other small objects at them. Richard Palmes, a local innkeeper who was carrying a cudgel, came up to Preston and asked if the soldiers ' weapons were loaded. Preston assured him they were, but that they would not fire unless he ordered it, and (according to his own deposition) that he was unlikely to do so, since he was standing in front of them. A thrown object then struck Private Montgomery, knocking him down and causing him to drop his musket. He recovered his weapon, and was thought to angrily shout "Damn you, fire! '', then discharged it into the crowd although no command was given. Palmes swung his cudgel first at Montgomery, hitting his arm, and then at Preston. He narrowly missed Preston 's head, striking him on the arm instead. There was a pause of uncertain length (eyewitness estimates ranged from several seconds to two minutes), after which the soldiers fired into the crowd. Rather than a disciplined volley (Preston gave no orders to fire), a ragged series of shots was fired, which hit eleven men. Three Americans -- ropemaker Samuel Gray, mariner James Caldwell, and Attucks -- died instantly. Samuel Maverick, an apprentice ivory turner of seventeen, was struck by a ricocheting musket ball at the back of the crowd, and died a few hours later, in the early morning of the next day. An Irish immigrant, Patrick Carr, died two weeks later. Christopher Monk, another apprentice, was one of those seriously wounded in the attack. Although he recovered to some extent, he was crippled and eventually died in 1780, purportedly due to the injuries he had sustained in the attack a decade earlier. The crowd moved away from the immediate area of the custom house, but continued to grow in nearby streets. Captain Preston immediately called out most of the 29th Regiment, which adopted defensive positions in front of the state house. Acting Governor Thomas Hutchinson was summoned to the scene, and was forced by the movement of the crowd into the council chamber of the state house. From its balcony he was able to minimally restore order, promising there would be a fair inquiry into the shootings if the crowd dispersed. Hutchinson immediately began investigating the affair, and by morning, Preston and the eight soldiers had been arrested. In a meeting of the governor 's council held late the morning after the shootings, Boston 's selectmen asked Hutchinson to order the removal of troops from the city to Castle William on Castle Island, while a town meeting at Faneuil Hall met to discuss the affair. The governor 's council was at first opposed to ordering the troop withdrawal, with Hutchinson correctly claiming he did not have the authority to order the troops to move. Lieutenant Colonel William Dalrymple, commander of the troops, did not offer to move them. The town meeting, however, became more restive when it learned of this. Under an imminent threat of further violence, the council changed its position, and unanimously ("under duress '', according to Hutchinson 's report) agreed to request the troops ' removal. 
Secretary of State Andrew Oliver reported that, had the troops not been removed, "they would probably be destroyed by the people -- should it be called rebellion, should it incur the loss of our charter, or be the consequence what it would. '' This decision left the governor without effective means to police the town. The 14th was transferred to Castle Island without incident about a week later, with the 29th following shortly after. The first four victims were buried amid great ceremony on March 8; Patrick Carr, the fifth and final victim, died on March 14 and was buried with them on March 17. On March 27 the eight soldiers, Captain Preston, and four civilians who were in the Customs House and were alleged to have fired shots, were all indicted for murder. Bostonians continued to be hostile to the troops and their dependents. General Gage, convinced the troops were doing more harm than good, ordered the 29th Regiment out of the province in May. Governor Hutchinson took advantage of the ongoing high tensions to orchestrate delays of the trials until later in the year. In the days and weeks following the incident, a propaganda battle was waged between Boston 's radicals and supporters of the government. Both sides published pamphlets that told strikingly different stories, which were principally published in London in a bid to influence opinion there. The Boston Gazette 's version of events, for example, characterized the massacre as part of an ongoing scheme to "quell a Spirit of Liberty '', and harped on the negative consequences of quartering troops in the city. A young Boston artist, Henry Pelham, half - brother of the celebrated portrait painter John Singleton Copley, depicted the event. Silversmith and engraver Paul Revere closely copied Pelham 's image, and is often credited as its originator. In order to further public outrage, the engraving contained several inflammatory details. Captain Preston is shown ordering his men to fire, and a musket is seen shooting out of the window of the customs office, which is labeled "Butcher 's Hall. '' Artist Christian Remick hand - colored some prints. Some copies of the print show a man with two chest wounds and a somewhat darker face, matching descriptions of Attucks; others show no victim as a person of color. The image was published in the Boston Gazette, circulating widely, and became an effective piece of anti-British propaganda. The image of bright red "lobster backs '' and wounded men with red blood was hung in farmhouses across New England. Anonymous pamphlets were published describing the event from significantly different perspectives. A Short Narrative of the Horrid Massacre, published under the auspices of the Boston town meeting, was principally written by James Bowdoin, a member of the governor 's council and a vocal opponent of British colonial policy, along with Samuel Pemberton and Joseph Warren. It described the shooting and other lesser incidents that took place in the days before as unprovoked attacks on peaceful, law - abiding inhabitants, and was, according to historian Neil Langley York, probably the most influential description of the event. The account it provided was drawn from more than 90 depositions taken after the event, and it included accusations that the soldiers sent by Captain Preston had been deployed with the intention of causing harm. 
In the interest of minimizing impact on the jury pool, city leaders held back local distribution of the pamphlet, but sent copies to other colonies and to London, where they knew depositions collected by Governor Hutchinson were en route. A second pamphlet, Additional Observations on the Short Narrative, furthered the attack on crown officials by complaining that customs officials (one of whom had left Boston to carry Hutchinson 's gathered depositions to London) were abandoning their posts under the pretense that it was too dangerous for them to do their duties. The depositions that Hutchinson collected and sent to London were eventually published in a pamphlet entitled A Fair Account of the Late Unhappy Disturbance in Boston. Drawn mainly from depositions by soldiers, its account of affairs sought to blame selfish Bostonians for denying the validity of Parliamentary laws. It also blamed the city 's hoodlums and gangs for the lawlessness preceding the event, and claimed that they set up an ambush of the soldiers. However, as it was not published until well after the first pamphlet had arrived in London, it ended up having a much smaller impact on the public debate there. John Adams later reflected on his part in the trials: "The Part I took in Defence of Cptn. Preston and the Soldiers, procured me Anxiety, and Obloquy enough. It was, however, one of the most gallant, generous, manly and disinterested Actions of my whole Life, and one of the best Pieces of Service I ever rendered my Country. Judgment of Death against those Soldiers would have been as foul a Stain upon this Country as the Executions of the Quakers or Witches, anciently. As the Evidence was, the Verdict of the Jury was exactly right. This however is no Reason why the Town should not call the Action of that Night a Massacre, nor is it any Argument in favour of the Governor or Minister, who caused them to be sent here. But it is the strongest Proofs of the Danger of Standing Armies. '' The government was determined to give the soldiers a fair trial so there could be no grounds for retaliation from the British and so moderates would not be alienated from the Patriot cause. After several lawyers with Loyalist leanings refused to defend him, Preston sent a request to John Adams, pleading for him to work on the case. Adams, who was already a leading Patriot and who was contemplating a run for public office, agreed to help, in the interest of ensuring a fair trial. Adams was joined by Josiah Quincy II after the latter was assured that the Sons of Liberty would not oppose his appointment, and by Robert Auchmuty, a Loyalist. They were assisted by Sampson Salter Blowers, whose chief duty was to investigate the jury pool, and Paul Revere, who drew a detailed map of the bodies to be used in the trial of the British soldiers held responsible. Massachusetts Solicitor General Samuel Quincy and private attorney Robert Treat Paine, hired by the town of Boston, handled the prosecution. Preston was tried separately in late October 1770. He was acquitted after the jury was convinced that he had not ordered the troops to fire. The trial of the eight soldiers opened on November 27, 1770. Adams told the jury to look beyond the fact the soldiers were British. He argued that if the soldiers were endangered by the mob, which he called "a motley rabble of saucy boys, negroes, and molattoes, Irish teagues and outlandish jack tarrs (i.e. sailors) '', they had the legal right to fight back, and so were innocent. If they were provoked but not endangered, he argued, they were at most guilty of manslaughter. 
The jury agreed with Adams and acquitted six of the soldiers after two and one - half hours ' deliberation. Two of the soldiers were found guilty of manslaughter because there was overwhelming evidence that they had fired directly into the crowd. The jury 's decisions suggest that they believed the soldiers had felt threatened by the crowd, but should have delayed firing. Patrick Carr, the fifth victim, corroborated this with deathbed testimony delivered to his doctor. The convicted soldiers were granted reduced sentences by pleading benefit of clergy, which reduced their punishment from a death sentence to branding of the thumb in open court. The four civilians were tried on December 13. The principal prosecution witness, a servant of one of the accused, made claims that were easily rebutted by defense witnesses. In the face of this weak testimony, as well as waning public interest, the prosecution allegedly failed to press its case very hard. The civilians were all acquitted, and the servant was eventually convicted of perjury, whipped, and banished from the province. The Boston Massacre is considered one of the most important events that turned colonial sentiment against King George III and British Parliamentary authority. Howard Zinn argues that Boston was full of class anger. He reports that in 1763, the Boston Gazette published that "a few persons in power '' were promoting political projects "for keeping the people poor in order to make them humble. '' John Adams wrote that the "foundation of American independence was laid '' on March 5, 1770, and Samuel Adams and other Patriots used annual commemorations (Massacre Day) of the event to fulminate against British rule. Christopher Monk, the boy who was wounded in the attack and died in 1780, was paraded before the crowds as a reminder of British hostility. Later events, such as the Boston Tea Party, further illustrated the crumbling relationship between Britain and its colonies. Although five years passed between the massacre and outright revolution, and direct connections between the massacre and the later war are (according to historian Neil Langley York) somewhat tenuous, it is widely perceived as a significant event leading to the violent rebellion that followed. The massacre was remembered in 1858 in a celebration organized by William Cooper Nell, an African American abolitionist who saw the death of Crispus Attucks as an opportunity to demonstrate the role of African Americans in the Revolutionary War. Partly because of this activism, artwork commemorating the massacre was produced that changed the color of a victim 's skin to black (which in Revere 's original engraving was white) to emphasize Attucks ' claimed martyrdom. In 1888, a monument was erected on the Boston Common to the men killed in the massacre, and the five victims, along with Christopher Seider, were reinterred in a prominent grave in the Granary Burying Ground. The massacre is reenacted annually on March 5 under the auspices of the Bostonian Society. The Old State House, the massacre site, and the Granary Burying Ground are all part of Boston 's Freedom Trail, connecting sites important in the city 's revolutionary - era history.
is eleven on stranger things a boy or girl
Eleven (Stranger Things) - wikipedia Jane Hopper, also known as Eleven, El or Jane Ives, is a fictional character from the Netflix series Stranger Things. Portrayed by Millie Bobby Brown, she is a girl with psychokinetic and telepathic abilities, and a limited vocabulary. Eleven is the daughter of Teresa "Terry '' Ives, a participant in the Project MKUltra experiments conducted by the United States Central Intelligence Agency (CIA). Eleven was taken from her mother at birth by Dr. Martin Brenner and was raised in the Hawkins National Laboratory in Hawkins, Indiana as a test subject to develop her psychokinetic skills. When placed in a sensory deprivation tank she can engage in astral projection and access other dimensions, primarily for the purposes of international espionage. Eleven encounters a creature living in the Upside Down dimension, and on the evening of November 6, 1983, she finally makes contact with it. In efforts to make contact, Eleven opens a gate between the Hawkins Laboratory and the Upside Down dimension and the creature gains the ability to travel between the human world and the Upside Down. She then escapes from Hawkins Laboratory and attempts to steal food from a local restaurant but the owner calls social services. The responding social worker was actually a Hawkins Laboratory CIA agent who kills the owner, causing Eleven to flee. She is then found by Mike Wheeler, Lucas Sinclair and Dustin Henderson, who are looking for their missing friend, Will Byers. Mike allows Eleven to live in his basement. Eleven, fearing capture, asks Mike not to tell any adults about her. Eleven helps to locate Will using her supernatural abilities, and determines that he is trapped in the Upside Down. However, the use of her abilities temporarily weakens her and gives her nosebleeds. The group sets out to find Will using their compasses, but Eleven interferes with their search when she realizes that they are being led towards the laboratory. Lucas, noticing her deception, becomes angry at her (having mistrusted her from the beginning). She flees, steals boxes of frozen waffles from a store, and consumes them in a forest. When Mike and Dustin are threatened by bullies, Eleven returns and saves them. Mike, Dustin, and Eleven reunite with Lucas and make amends, and they travel to their middle school with Dr. Brenner and his associates in close pursuit. During the chase, Eleven uses her powers to cause a laboratory van to flip through the air. The group, aided by Joyce Byers, Chief Jim Hopper, Nancy Wheeler, and Jonathan Byers, produce a makeshift isolation tank with a pool and bags of salt. Eleven accesses the Upside Down and confirms that Will is alive. As laboratory personnel close in on the school, Mike tells Eleven that she can be a part of his family and asks her to the school dance. He then kisses her after struggling to explain his feelings towards her. Eleven helps the group escape by using her powers to kill most of the agents, although doing so leaves her drained. The monster from the Upside Down makes its way into their dimension, and Eleven seemingly sacrifices herself to destroy the creature and save her friends. One month later, after a Christmas party, Hopper leaves the police station and drives to the woods. There, he leaves waffles in a concealed box. The whereabouts or condition of Eleven are left ambiguous. 
It is revealed that Eleven woke up in the Upside Down shortly after having destroyed the creature, and returned to the human world; however, with the government forces still searching for her, she was forced to flee into the forest, where she struggled to survive. She found the eggos Hopper left for her, and followed them to him. After this, the two move into a cabin in the woods where he forbids her to leave, fearing for her safety. Hopper hides Eleven for almost a year, telling no one of her whereabouts. During this time, Eleven has managed to gain better control over her powers; she is less weakened by the use of her telekinesis, and she is now able to project her mind into other dimensions without the use of a sensory deprivation tank. She uses the latter ability to listen to Mike 's attempts to contact her, though she is increasingly frustrated at her inability to reply. Eleven also significantly expands her vocabulary during this time, learning from Hopper and television. Eleven becomes restless and longs to reunite with Mike, and this causes tension between her and Hopper. She runs away from home one day and travels to Hawkins Middle School. When she finds Mike, he is with Max, a new student. Eleven mistakenly thinks that Max is Mike 's new girlfriend and, out of spite, uses her powers to knock her off of her skateboard before leaving. Upon returning to the cabin, she gets into a heated argument with Hopper over her leaving the cabin, ending with her using her powers to damage the cabin out of anger. The next day, cleaning the mess she made of the cabin, she discovers by looking through records in the cabin 's basement that her birth mother is alive, a contradiction to what Hopper had told her. She then goes to meet her birth mother, Terry, and her aunt, Becky, and discovers what happened to Terry at Hawkins Laboratory. While there, she realizes Terry is trying to talk with her, but due to her catatonic state, is unable to. Eleven, using her powers, views a flashback from her mother. Terry, after her daughter was taken from her, attempted to force her way into Hawkins Lab to save her. As a response, Brenner and his assistants captured her and subjected her to electroshock therapy, resulting in her current condition. Eleven then uses her abilities to find out she has a "sister '' -- another gifted girl taken by Dr. Brenner for experimentations -- and sets out to find her. Using her abilities once again, Eleven locates her sister and discovers that she is an older girl named Kali ("Eight ''), with the ability to cause people to have visual hallucinations. Eleven stays with Kali and her friends -- runaways who are determined to take revenge on people who have hurt them. Kali tells Eleven that while using her powers, she needs to think about what makes her most angry. When they go to get revenge on Brenner 's assistant who helped hurt Eleven 's mother, Eleven begins to choke him to death but stops upon realizing that he has two daughters. She then stops Kali from shooting him. While they are about to flee, Eleven realizes she needs to go back to her friends because they need her help. Eleven goes back to Hawkins and is reunited with Mike and her friends after saving them from a Demodog. The group realizes that she needs to close the gate to the Upside Down, and she and Hopper head to Hawkins Laboratory. There, Eleven uses Kali 's advice and focuses her anger into her powers, closing the gate and draining her powers. Afterward, it is discovered that Dr. 
Owens has forged a birth certificate allowing Hopper to become her legal adoptive father as a way to help keep Eleven in hiding. Eleven 's new legal name is Jane Hopper. Though she is still not allowed out for her safety, Hopper lets her attend the Snow Ball at Hawkins Middle School. She meets up with Mike, dances with him, and they share a second kiss. The character and Brown 's performance have received highly positive reviews. Alice Vincent of The Telegraph wrote that "Millie Bobby Brown continues to be the star of the show: she has inspired fan art and tattoos, a worldwide acknowledgment of Eggos, the waffles her character devours and given a whole new life to the phrase ' mouth breather ' ''. Ashley Hoffman of TIME magazine recommended Eleven as a mascot for National Waffle Day. However, Lenika Cruz of The Atlantic stated that "despite a rich backstory, Eleven is the show 's most thinly sketched protagonist ''. In 2017, Brown was nominated for the Primetime Emmy Award for Outstanding Supporting Actress in a Drama Series and the Screen Actors Guild Award for Outstanding Performance by a Female Actor in a Drama Series, and won the MTV Movie & TV Award for Best Actor in a Show and the Saturn Award for Best Performance by a Younger Actor in a Television Series for her performance.
when does a man start growing facial hair
Facial hair - wikipedia Facial hair is hair grown on the face, usually on the chin, cheeks, and upper lip region. It is typically a secondary sex characteristic of human males. Men typically start developing facial hair in the later years of puberty or adolescence, between seventeen and twenty years of age, and most do not finish developing a full adult beard until their early twenties or later. This varies, as boys may first develop facial hair between fourteen and sixteen years of age, and boys as young as eleven have been known to develop facial hair. Women are also capable of developing facial hair, especially after menopause, though typically significantly less than men. Men may style their facial hair into beards, moustaches, goatees or sideburns; others completely shave their facial hair. The term whiskers, when used to refer to human facial hair, indicates the hair on the chin and cheeks. The moustache forms its own stage in the development of facial hair in adolescent males. Facial hair in males does not always appear in a specific order during puberty and varies among individuals. Depending on the period and country, facial hair has either been prohibited in the army or, on the contrary, been an integral part of the uniform. Many religious male figures are recorded to have had facial hair; for example, all the prophets mentioned in the Abrahamic religions (Judaism, Christianity and Islam) were known to grow their beards. Other religions, such as Sikhism, encourage growing beards. Amish men grow beards after marriage, but continue to shave their moustaches in order to avoid historical associations with military facial hair due to their pacifistic beliefs. There are various hadiths that describe the beard as needing to be left in its entirety, such as Sunan Abu Dawood 33; 4183, which says, "The Prophet saw a boy with part of his head shaved and part left unshaven. He forbade them to do that, saying: Shave it all or leave it all. '' Therefore, most non-taqlid Muslims such as the ghair muqallids, Salafis and Ahle Hadith view its growing as wajib and fardh. The reasoning for the command was reportedly to distinguish Muslims from non-Muslims, deriving from Sahih Bukhari: "Do otherwise than those who ascribe partners to Allah (the mushriks). '' Women typically have little hair on the face, apart from eyebrows and the vellus hair that covers most of the body. However, in some cases, women have noticeable facial hair growth, most commonly after menopause. Excessive hairiness (especially facially) is known as hirsutism and is usually an indication of atypical hormonal variation. In contemporary Western culture, many women depilate facial hair that appears, as considerable social stigma is associated with facial hair on women, and freak shows and circuses have historically displayed bearded women. Many women globally choose to totally remove their facial hair by professional laser treatment. Great apes such as orangutans seem to have facial hair as well. While in orangutans it is obvious, in chimpanzees it is not as conspicuous unless it has gone grey.
who played roger coleridge on ryan's hope
List of Ryan 's Hope characters - wikipedia This is a list of characters that appear on the ABC soap opera Ryan 's Hope from 1975 to 1989.
where is rule of law located in the constitution
Supremacy Clause - wikipedia The Supremacy Clause of the United States Constitution (Article VI, Clause 2) establishes that the Constitution, federal laws made pursuant to it, and treaties made under its authority, constitute the supreme law of the land. It provides that state courts are bound by the supreme law; in case of conflict between federal and state law, the federal law must be applied. Even state constitutions are subordinate to federal law. In essence, it is a conflict - of - laws rule specifying that certain federal acts take priority over any state acts that conflict with federal law. In this respect, the Supremacy Clause follows the lead of Article XIII of the Articles of Confederation, which provided that "Every State shall abide by the determination of the United States in Congress Assembled, on all questions which by this confederation are submitted to them. '' A constitutional provision announcing the supremacy of federal law, the Supremacy Clause assumes the underlying priority of federal authority, at least when that authority is expressed in the Constitution itself. No matter what the federal government or the states might wish to do, they have to stay within the boundaries of the Constitution. This makes the Supremacy Clause the cornerstone of the whole American political structure. This Constitution, and the Laws of the United States which shall be made in Pursuance thereof; and all Treaties made, or which shall be made, under the Authority of the United States, shall be the supreme Law of the Land; and the Judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding. In Federalist No. 44, James Madison defends the Supremacy Clause as vital to the functioning of the nation. He noted that state legislatures were invested with all powers not specifically defined in the Constitution, but also said that having the federal government subservient to various state constitutions would be an inversion of the principles of government, concluding that if supremacy were not established "it would have seen the authority of the whole society everywhere subordinate to the authority of the parts; it would have seen a monster, in which the head was under the direction of the members ''. The constitutional principle derived from the Supremacy Clause is federal preemption. Preemption applies regardless of whether the conflicting laws come from legislatures, courts, administrative agencies, or constitutions. For example, the Voting Rights Act of 1965, an act of Congress, preempts state constitutions, and Food and Drug Administration regulations may preempt state court judgments in cases involving prescription drugs. Congress has preempted state regulation in many areas. In some cases, such as the 1976 Medical Device Regulation Act, Congress preempted all state regulation. In others, such as labels on prescription drugs, Congress allowed federal regulatory agencies to set federal minimum standards, but did not preempt state regulations imposing more stringent standards than those imposed by federal regulators. Where rules or regulations do not clearly state whether or not preemption should apply, the Supreme Court tries to follow lawmakers ' intent, and prefers interpretations that avoid preempting state laws. In Ware v. Hylton, 3 U.S. (3 Dall.) 199 (1796), the United States Supreme Court for the first time applied the Supremacy Clause to strike down a state statute. 
Virginia had passed a statute during the Revolutionary War allowing the state to confiscate debt payments by Virginia citizens to British creditors. The Supreme Court found that this Virginia statute was inconsistent with the Treaty of Paris with Britain, which protected the rights of British creditors. Relying on the Supremacy Clause, the Supreme Court held that the treaty superseded Virginia 's statute, and that it was the duty of the courts to declare Virginia 's statute "null and void ''. In Marbury v. Madison, 5 U.S. 137 (1803), the Supreme Court held that Congress can not pass laws that are contrary to the Constitution, and it is the role of the Judicial system to interpret what the Constitution permits. Citing the Supremacy Clause, the Court found Section 13 of the Judiciary Act of 1789 to be unconstitutional to the extent it purported to enlarge the original jurisdiction of the Supreme Court beyond that permitted by the Constitution. In Martin v. Hunter 's Lessee, 14 U.S. 304 (1816), and Cohens v. Virginia, 19 U.S. 264 (1821), the Supreme Court held that the Supremacy Clause and the judicial power granted in Article III give the Supreme Court the ultimate power to review state court decisions involving issues arising under the Constitution and laws of the United States. Therefore, the Supreme Court has the final say in matters involving federal law, including constitutional interpretation, and can overrule decisions by state courts. In McCulloch v. Maryland, 17 U.S. (4 Wheat.) 316 (1819), the Supreme Court reviewed a tax levied by Maryland on the federally incorporated Bank of the United States. The Court found that if a state had the power to tax a federally incorporated institution, then the state effectively had the power to destroy the federal institution, thereby thwarting the intent and purpose of Congress. This would make the states superior to the federal government. The Court found that this would be inconsistent with the Supremacy Clause, which makes federal law superior to state law. The Court therefore held that Maryland 's tax on the bank was unconstitutional because the tax violated the Supremacy Clause. In Ableman v. Booth, 62 U.S. 506 (1859), the Supreme Court held that state courts can not issue rulings that contradict the decisions of federal courts, citing the Supremacy Clause, and overturning a decision by the Supreme Court of Wisconsin. Specifically, the court found it was illegal for state officials to interfere with the work of U.S. Marshals enforcing the Fugitive Slave Act or to order the release of federal prisoners held for violation of that Act. The Supreme Court reasoned that because the Supremacy Clause established federal law as the law of the land, the Wisconsin courts could not nullify the judgments of a federal court. The Supreme Court held that under Article III of the Constitution, the federal courts have the final jurisdiction in all cases involving the Constitution and laws of the United States, and that the states therefore can not interfere with federal court judgments. In Pennsylvania v. Nelson, 350 U.S. 497 (1956) the Supreme Court struck down the Pennsylvania Sedition Act, which made advocating the forceful overthrow of the federal government a crime under Pennsylvania state law. 
The Supreme Court held that when the federal interest in an area of law is sufficiently dominant, federal law must be assumed to preclude enforcement of state laws on the same subject; and a state law may not be sustained as a mere aid to federal law when it goes farther than Congress has seen fit to go. In Reid v. Covert, 354 U.S. 1 (1957), the Supreme Court held that international treaties and laws made pursuant to them must comply with the Constitution. In Cooper v. Aaron, 358 U.S. 1 (1958), the Supreme Court rejected attempts by Arkansas to nullify the Court 's school desegregation decision, Brown v. Board of Education. The state of Arkansas, acting on a theory of states ' rights, had adopted several statutes designed to nullify the desegregation ruling. The Supreme Court relied on the Supremacy Clause to hold that the federal law controlled and could not be nullified by state statutes or officials. In Edgar v. MITE Corp., 457 U.S. 624 (1982), the Supreme Court ruled: "A state statute is void to the extent that it actually conflicts with a valid Federal statute ''. In effect, this means that a state law will be found to violate the Supremacy Clause when either of the following two conditions (or both) exist: (1) compliance with both the federal and state laws is impossible, or (2) the state law stands as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress. In 1920, the Supreme Court applied the Supremacy Clause to international treaties, holding in the case of Missouri v. Holland, 252 U.S. 416, that the Federal government 's ability to make treaties is supreme over any state concerns that such treaties might abrogate states ' rights arising under the Tenth Amendment. The Supreme Court has also held that only specific, "unmistakable '' acts of Congress may be held to trigger the Supremacy Clause. Montana had imposed a 30 percent tax on most sub-bituminous coal mined there. The Commonwealth Edison Company and other utility companies argued, in part, that the Montana tax "frustrated '' the broad goals of the federal energy policy. However, in the case of Commonwealth Edison Co. v. Montana, 453 U.S. 609 (1981), the Supreme Court disagreed. Any appeal to claims about "national policy '', the Court said, was insufficient to overturn a state law under the Supremacy Clause unless "the nature of the regulated subject matter permits no other conclusion, or that the Congress has unmistakably so ordained ''. However, in the case of California v. ARC America Corp., 490 U.S. 93 (1989), the Supreme Court held that if Congress expressly intended to act in an area, this would trigger the enforcement of the Supremacy Clause, and hence nullify the state action. The Supreme Court further found in Crosby v. National Foreign Trade Council, 530 U.S. 363 (2000), that even when a state law is not in direct conflict with a federal law, the state law could still be found unconstitutional under the Supremacy Clause if the "state law is an obstacle to the accomplishment and execution of Congress 's full purposes and objectives ''. Congress need not expressly assert any preemption over state laws, because such preemption may also be implied under the Constitution.
the political machine that ruled california in the 1880s was dominated by
Political machine - wikipedia A political machine is a political group in which an authoritative boss or small group commands the support of a corps of supporters and businesses (usually campaign workers), who receive rewards for their efforts. The machine 's power is based on the ability of the workers to get out the vote for their candidates on election day. Although these elements are common to most political parties and organizations, they are essential to political machines, which rely on hierarchy and rewards for political power, often enforced by a strong party whip structure. Machines sometimes have a political boss, often rely on patronage, the spoils system, "behind - the - scenes '' control, and longstanding political ties within the structure of a representative democracy. Machines typically are organized on a permanent basis instead of a single election or event. The term may have a pejorative sense referring to corrupt political machines. The term "political machine '' dates back to the 20th century in the United States, where such organizations have existed in some municipalities and states since the 18th century. Similar machines have been described in Latin America, where the system has been called clientelism or political clientelism (after the similar Clientela relationship in the Roman Republic), especially in rural areas, and also in some African states and other emerging democracies, like postcommunist Eastern European countries. Japan 's Liberal Democratic Party is often cited as another political machine, maintaining power in suburban and rural areas through its control of farm bureaus and road construction agencies. In Japan, the word jiban (literally "base '' or "foundation '') is the word used for political machines. The Encyclopædia Britannica defines "political machine '' as, "in U.S. politics, a party organization, headed by a single boss or small autocratic group, that commands enough votes to maintain political and administrative control of a city, county, or state ''. William Safire, in his Safire 's Political Dictionary, defines "machine politics '' as "the election of officials and the passage of legislation through the power of an organization created for political action ''. He notes that the term is generally considered pejorative, often implying corruption. Hierarchy and discipline are hallmarks of political machines. "It generally means strict organization '', according to Safire. Quoting Edward Flynn, a Bronx County Democratic leader who ran the borough from 1922 until his death in 1953, he wrote "(...) the so - called ' independent ' voter is foolish to assume that a political machine is run solely on good will, or patronage. For it is not only a machine; it is an army. And in any organization as in any army, there must be discipline. '' Political patronage, while often associated with political machines, is not essential to the definition for either Safire or Britannica. A political machine is a party organization that recruits its members by the use of tangible incentives -- money, political jobs -- and that is characterized by a high degree of leadership control over member activity. Political machines started as grass roots organizations to gain the patronage needed to win the modern election. Having strong patronage, these "clubs '' were the main driving force in gaining and getting out the "straight party vote '' in the election districts. 
In the late 19th century, large cities in the United States -- Boston, Chicago, Cleveland, Kansas City, New York City, Philadelphia, St. Louis -- were accused of using political machines. During this time "cities experienced rapid growth under inefficient government ''. Each city 's machine lived under a hierarchical system with a "boss '' who held the allegiance of local business leaders, elected officials and their appointees, and who knew the proverbial buttons to push to get things done. Benefits and problems both resulted from the rule of political machines. This system of political control -- known as "bossism '' -- emerged particularly in the Gilded Age. A single powerful figure (the boss) was at the center and was bound together to a complex organization of lesser figures (the political machine) by reciprocity in promoting financial and social self - interest. One of the most infamous of these political machines was Tammany Hall, the Democratic Party machine that played a major role in controlling New York City and New York politics and helping immigrants, most notably the Irish, rise up in American politics from the 1790s to the 1960s. From 1872, Tammany had an Irish "boss ''. However, Tammany Hall also served as an engine for graft and political corruption, perhaps most notoriously under William M. "Boss '' Tweed in the mid-19th century. Lord Bryce describes these political bosses saying: An army led by a council seldom conquers: It must have a commander - in - chief, who settles disputes, decides in emergencies, inspires fear or attachment. The head of the Ring is such a commander. He dispenses places, rewards the loyal, punishes the mutinous, concocts schemes, negotiates treaties. He generally avoids publicity, preferring the substance to the pomp of power, and is all the more dangerous because he sits, like a spider, hidden in the midst of his web. He is a Boss. When asked if he was a boss, James Pendergast said simply, I 've been called a boss. All there is to it is having friends, doing things for people, and then later on they 'll do things for you... You ca n't coerce people into doing things for you -- you ca n't make them vote for you. I never coerced anybody in my life. Wherever you see a man bulldozing anybody he do n't last long. Theodore Roosevelt, before he became president in 1901, was deeply involved in New York City politics. He explains how the machine worked: The organization of a party in our city is really much like that of an army. There is one great central boss, assisted by some trusted and able lieutenants; these communicate with the different district bosses, whom they alternately bully and assist. The district boss in turn has a number of half - subordinates, half - allies, under him; these latter choose the captains of the election districts, etc., and come into contact with the common heelers. Many machines formed in cities to serve immigrants to the U.S. in the late 19th century who viewed machines as a vehicle for political enfranchisement. Machine workers helped win elections by turning out large numbers of voters on election day. It was in the machine 's interests to only maintain a minimally winning amount of support. Once they were in the majority and could count on a win, there was less need to recruit new members, as this only meant a thinner spread of the patronage rewards to be spread among the party members. 
As such, later - arriving immigrants, such as Jews, Italians, and other immigrants from Southern and Eastern Europe between the 1880s and 1910s, saw fewer rewards from the machine system than the well - established Irish. At the same time, the machines ' staunchest opponents were members of the middle class, who were shocked at the malfeasance and did not need the financial help. The corruption of urban politics in the United States was denounced by private citizens. They achieved national and state civil - service reform and worked to replace local patronage systems with civil service. By Theodore Roosevelt 's time, the Progressive Era mobilized millions of private citizens to vote against the machines. In the 1930s, James A. Farley was the chief dispenser of the Democratic Party 's patronage system through the Post Office and the Works Progress Administration which eventually nationalized many of the job benefits machines provided. The New Deal allowed machines to recruit for the WPA and Civilian Conservation Corps, making Farley 's machine the most powerful. All patronage was screened through Farley, including presidential appointments. The New Deal machine fell apart after he left the administration over the third term in 1940. Those agencies were abolished in 1943 and the machines suddenly lost much of their patronage. The formerly poor immigrants who had benefited under Farley 's national machine had become assimilated and prosperous and no longer needed the informal or extralegal aides provided by machines. In the 1940s most of the big city machines collapsed, with the exception of Chicago. A local political machine in Tennessee was forcibly removed in what was known as the 1946 Battle of Athens. Smaller communities such as Parma, Ohio, in the post -- Cold War Era under Prosecutor Bill Mason 's "Good Old Boys '' and especially communities in the Deep South, where small - town machine politics are relatively common, also feature what might be classified as political machines, although these organizations do not have the power and influence of the larger boss networks listed in this article. For example, the "Cracker Party '' was a Democratic Party political machine that dominated city politics in Augusta, Georgia, for over half of the 20th century. Political machines also thrive on Native American reservations, where the veil of sovereignty is used as a shield against federal and state laws against the practice. The phrase is considered derogatory "because it suggests that the interest of the organization are placed before those of the general public '', according to Safire. Machines are criticized as undemocratic and inevitably encouraging corruption. Since the 1960s, some historians have reevaluated political machines, considering them corrupt but efficient. Machines were undemocratic but responsive. They were also able to contain the spending demands of special interests. In Mayors and Money, a comparison of municipal government in Chicago and New York, Ester R. Fuchs credited the Cook County Democratic Organization with giving Mayor Richard J. Daley the political power to deny labor union contracts that the city could not afford and to make the state government assume burdensome costs like welfare and courts. Describing New York, Fuchs wrote, "New York got reform, but it never got good government. '' At the same time, as Dennis R. Judd and Todd Swanstrom suggest in City Politics that this view accompanied the common belief that there were no viable alternatives. 
They go on to point out that this is a falsehood, since there are certainly examples of reform oriented, anti-machine leaders during this time. In his mid-2016 article "How American Politics Went Insane '' in The Atlantic, Jonathan Rauch argued that the political machines of the past had flaws but provided better governance than the alternatives. He wrote that political machines created positive incentives for politicians to work together and compromise -- as opposed to pursuing "naked self - interest '' the whole time.
list of submissions to the 87th academy awards for best foreign language film
List of submissions to the 87th Academy Awards for Best Foreign Language film - wikipedia The Academy of Motion Picture Arts and Sciences (AMPAS) has invited the film industries of a number of countries to submit their best film for the Academy Award for Best Foreign Language Film every year since the award was created in 1956. It is presented annually by the Academy to a feature - length film produced outside the United States with primarily non-English dialogue. The Foreign Language Film Award Committee oversees the process, reviewing all films submitted. For the 87th Academy Awards, held on 22 February 2015, a submitted motion picture must be released theatrically in its respective country between 1 October 2013 and 30 September 2014. Submission of a film does not automatically qualify it for the competition; AMPAS has the final word on eligibility, and has disqualified submissions in the past. One film was accepted from each country, with a deadline of 1 October 2014; the Academy published a list of eligible films eight days later. Eighty - three countries submitted films, with four countries entering for the first time. Mauritania submitted Timbuktu, directed by Abderrahmane Sissako; Panama entered the documentary Invasion, directed by Abner Benaim; Kosovo submitted Three Windows and a Hanging, directed by Isa Qosja; and Malta entered Simshar, directed by Rebecca Cremona. In May 2014, Nigeria announced that AMPAS had approved the first - ever Nigerian Oscar selection committee and they would make their first Oscar submission; however, they did not submit a film by the deadline. The Phase I committee, consisting of several hundred Los Angeles - based Academy members, viewed the original submissions between mid-October and 15 December 2014. The group 's top six choices, augmented by three selections by the Academy 's Foreign Language Film Award Executive Committee, constitute the shortlist. Seventy - six films were originally considered, and the nine finalists were shortlisted in mid-December. The list was narrowed down to five nominees by invited committees in New York, Los Angeles and (for the first time) London, who viewed three films a day from 9 to 11 January 2015 before casting their ballots. The list of nominees was announced on 15 January 2015 at the Samuel Goldwyn Theater in Los Angeles. They were Argentina 's Wild Tales, directed by Damián Szifron; Estonia 's Tangerines, directed by Zaza Urushadze; Mauritania 's Timbuktu, directed by Abderrahmane Sissako; Poland 's Ida, directed by Paweł Pawlikowski, and Russia 's Leviathan, directed by Andrey Zvyagintsev. For the first time, the director 's name would be engraved on the Oscar statuette in addition to the country name. The winner was Poland 's Ida, directed by Pawlikowski.
where is water mostly absorbed in the body
Absorption of water - wikipedia In plants, water is absorbed chiefly through the roots, by two mechanisms: active absorption and passive absorption. Active absorption refers to the absorption of water by roots with the help of ATP generated by root respiration: as the root cells actively take part in the process, it is called active absorption. According to Renner, active absorption takes place in slowly transpiring, well - watered plants, and about 4 % of total water absorption is carried out in this process. Active absorption is explained by two theories, active osmotic water absorption and active non-osmotic water absorption; in either case, metabolic energy is involved. The first theory was given by Pari (1910) and Priestley (1921). According to this theory, the root cells behave as an ideal osmotic system through which water moves up from the soil solution to the root xylem along an increasing gradient of D.P.D. (diffusion pressure deficit, or suction pressure, which is the real force for water absorption). If the solute concentration is high and the water potential is low in the root cells, water can enter the root cells from the soil through endosmosis. Mineral nutrients are absorbed actively by the root cells using adenosine triphosphate (ATP). As a result, the concentration of ions (osmotica) in the xylem vessels is higher than in the soil water, and a concentration gradient is established between the root and the soil water. Hence, the solute concentration of the xylem sap is higher than that of the soil water, and its water potential is correspondingly lower; in other words, the water potential of the soil water is comparatively higher. This gradient of water potential causes endosmosis, which continues until the water potentials of the root and the soil become equal. It is the absorption of minerals that uses metabolic energy, not the absorption of water itself; hence the absorption of water is only indirectly an active process in the plant 's life. Active transport runs in the opposite direction to diffusion. The second theory was given by Thimann (1951) and Kramer (1959). According to this theory, water is sometimes absorbed against a concentration gradient, which requires the expenditure of metabolic energy released by the respiration of root cells. There is no direct evidence for this, though some scientists suggest the involvement of respiratory energy. In conclusion, the evidence supporting active absorption of water is itself weak. Passive absorption, by contrast, is carried out without the use of metabolic energy. Here the roots act only as an organ of absorption or passage; hence it is sometimes called water absorption ' through ' roots rather than ' by ' roots. It occurs in rapidly transpiring plants during the daytime, because of the opening of the stomata and the atmospheric conditions. The force for the absorption of water is created at the leaf end, i.e. the transpiration pull. Because of this transpiration pull, water is lifted up the plant axis much as a bucket of water is lifted by a person from a well. The transpiration pull drags water at the leaf end, and the pull is transmitted down to the root through the column of water in the xylem elements. The continuity of the water column remains intact because of cohesion between the water molecules, and the column acts like a rope. The roots simply act as a passive organ of absorption. As transpiration proceeds, water absorption occurs simultaneously to compensate for the water lost at the leaf end. Most of the water entering plants is taken up by passive absorption.
Passive transport is essentially diffusion: it requires no input of energy, as molecules move freely from a region of higher concentration to one of lower concentration. Water enters the plant through the root cells, where most passive absorption takes place, and minerals and nutrients are absorbed along with it.
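The direction of water movement described above follows from the standard water potential relation used in plant physiology. The block below is a minimal worked example; the numerical values are illustrative assumptions, not figures taken from the article.

```latex
% Water potential is the sum of the solute (osmotic) and pressure potentials:
\[ \Psi = \Psi_s + \Psi_p \]
% Water moves spontaneously from higher (less negative) to lower (more negative) potential.
% Illustrative values only: moist soil versus solute-rich root xylem.
\[ \Psi_{\text{soil}} = -0.1\ \text{MPa}, \qquad \Psi_{\text{xylem}} = -0.6\ \text{MPa} \]
\[ \Delta\Psi = \Psi_{\text{soil}} - \Psi_{\text{xylem}} = (-0.1) - (-0.6) = 0.5\ \text{MPa} > 0 \]
% Because the soil potential exceeds the xylem potential, water enters the root by
% endosmosis, and the inflow stops once the two potentials are equal.
```

The same gradient argument applies to passive absorption, where transpiration lowers the water potential at the leaf end and the resulting pull is transmitted down the xylem column to the roots.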
when was the last year pontiac was made
Pontiac - wikipedia Pontiac is a now - defunct car brand that was owned, made, and sold by General Motors. Introduced as a companion make for GM 's more expensive line of Oakland automobiles, Pontiac overtook Oakland in popularity and supplanted its parent brand entirely by 1933. Sold in the United States, Canada, and Mexico by GM, Pontiac was advertised as the performance division of General Motors from the 1960s onward. Amid late 2000s financial problems and restructuring efforts, GM announced in 2008 it would follow the same path with Pontiac as it had with Oldsmobile in 2004 and discontinued manufacturing and marketing vehicles under that brand by the end of 2010. The last Pontiac badged cars were built in December 2009, with one final vehicle in January, 2010. Franchise agreements for Pontiac dealers expired October 31, 2010, leaving GM to focus on its four remaining North American brands: Chevrolet, Buick, Cadillac, and GMC. The Pontiac brand was introduced by General Motors in 1926 as the companion marque to GM 's Oakland division, and shared the GM A platform. Purchased by General Motors in 1909, Oakland continued to produce modestly priced automobiles until 1931 when it was renamed Pontiac. It was named after the famous Ottawa chief who had also given his name to the city of Pontiac, Michigan where the car was produced. Within months of its introduction, Pontiac was outselling Oakland, which was essentially a 1920s Chevrolet with a six - cylinder engine installed. Body styles offered included a sedan with both two and four doors, Landau Coupe, with the Sport Phaeton, Sport Landau Sedan, Sport Cabriolet and Sport Roadster. As a result of Pontiac 's rising sales, versus Oakland 's declining sales, Pontiac became the only companion marque to survive its parent, with Oakland ceasing production in 1932. Pontiacs were also manufactured from knock - down kits at GM 's short - lived Japanese factory at Osaka Assembly in Osaka, Japan from 1927 - 1941. Pontiac produced cars offering 40 hp (30 kW; 41 PS) 186.7 cu in (3.1 L) (3.25 x3. 75 in, 82.5 x95mm) L - head straight 6 - cylinder engines in the Pontiac Chief of 1927; its stroke was the shortest of any American car in the industry at the time. The Chief sold 39,000 units within six months of its appearance at the 1926 New York Auto Salon, hitting 76,742 at twelve months. The next year, it became the top - selling six in the U.S., ranking seventh in overall sales. By 1933, it had moved up to producing the least expensive cars available with straight eight engines. This was done by using many components from the 6 - cylinder Chevrolet Master, such as the body, but installing a large chrome strip on the top and center of the front hood Pontiac called the "Silver Streak ''. Only eight cylinder engines were offered in 1933 and 1934, displacing 223.4 cubic inches for 77 HP. In the late 1930s, Pontiac used a Buick "torpedo '' body for one of its models, just prior to its being used by Chevrolet, earning some media attention for the marque. An unusual feature of the "torpedo '' - bodied exhibition car was that, with push of a button, the front half of the body would open showing the engine and the car 's front seat interior. 1937 was a year of major change for Pontiac, all models except the new station wagon now using the all steel B - body shared with Oldsmobile, LaSalle and small Buicks. New stronger X frame had Hotchkiss drive using a two part drive shaft. 
The eight - cylinder had a 122 - inch (3,099 mm) wheelbase, while the six - cylinder had a 117 - inch (2,972 mm) wheelbase. Both engines had increased displacements, the six going to 222.7 cubic inches for 85 HP, the eight to 248.9 for 100 HP. In 1940 & 42, Pontiac was built on three different bodies. The "A '' body with Chevrolet, the "B '' body shared with Oldsmobile and Buick and the "C '' body shared with the large Oldsmobile, Buick and the small Cadillac. The "C '' body for 1940 was called the Torpedo. In 1941 all Pontiac 's were called Torpedoes. On 2 February 1942, a Pontiac was the last civilian automobile manufactured in the United States during World War II, as all automobile factories converted to military production. For an extended period of time -- prewar through the early 1950s -- the Pontiac was a quiet, solid car, but not especially powerful. It came with a flathead straight eight. Straight 8s were slightly less expensive to produce than the increasingly popular V8s, but they were also heavier and longer. Additionally, the long crankshaft suffered from excessive flex, restricting straight 8s to a relatively low compression ratio with a modest redline. However, in this application, inexpensive (yet very quiet) flatheads were not a liability. From 1946 to 1948, all Pontiac models were essentially 1942 models with minor changes. The Hydra - matic automatic transmission was introduced in 1948 and helped Pontiac sales grow even though their cars, Torpedoes and Streamliners, were quickly becoming out of date. The first all - new Pontiac models appeared in 1949. They incorporated styling cues such as lower body lines and rear fenders that were integrated in the rear - end styling of the car. Along with new styling came a new model. Continuing the Native American theme of Pontiac, the Chieftain line was introduced to replace the Torpedo. These were built on the GM B - Body platform and featured different styling than the more conservative Streamliner. In 1950, the Catalina pillarless hardtop coupe was introduced as a "halo '' model, much like the Chevrolet Bel Air of the same year. In 1952, Pontiac discontinued the Streamliner and replaced it with additional models in the Chieftain line built on the GM A-body platform. This single model line continued until 1954 when the Star Chief was added. The Star Chief was created by adding an 11 - inch (280 mm) extension to the A-body platform creating a 124 - inch (3,100 mm) wheelbase. The 1953 models were the first to have one - piece windshields instead of the normal two - piece units. While the 1953 and 1954 models were heavily re-worked versions of the 1949 - 52 Chieftain models, they were engineered for the V8 engine that was supposed to be introduced on the 1953 models, however Buick division complained to corporate that the introduction might take sales away because Buick was introducing its new nailhead V - 8 in 1953. The corporation held Pontiac back until 1955. Completely new bodies and chassis were introduced for 1955. A new 173 hp (129 kW; 175 PS) overhead valve V8 engine was introduced. (see Engines section below). Sales increased. With the introduction of this V8, the six - cylinder engines were discontinued; a six - cylinder would not return to the full - size Pontiac line until the GM corporate downsizing of 1977. A four - cylinder engine was introduced in the Tempest model line in 1961, followed by an overhead - cam six - cylinder starting in 1966, as well as on the Firebird. 
It was the first popular - priced, mass - produced engine in America utilizing an overhead - camshaft configuration. In 1956, when 42 - year - old Semon "Bunkie '' Knudsen became general manager of Pontiac, alongside new heads of engineering, E.M. Estes and John DeLorean, Knudsen immediately began reworking the brand 's image. One of the first steps involved the removal of the famous trademark "silver streaks '' from the hood and deck lid of the 1957 models just weeks before the 1957s were introduced. Another step was introducing the first Bonneville -- a limited - edition Star Chief convertible that showcased Pontiac 's first fuel - injected engine. Approximately 630 Bonnevilles were built for 1957, each with a retail price of nearly $5,800. While new car buyers could buy a Cadillac for that price, the Bonneville raised new interest in what Pontiac now called "America 's No. 1 Road Car. '' The following year the Bonneville became its own line, built on the 122 - inch (3,100 mm) wheelbase of the A-body platform. A 1958 Tri power Bonneville was the pace car for that year 's Indianapolis 500. Also, 1958 was the last year Pontiac Motor Division would bear the "Indian '' motif throughout the vehicle. The exception would be the Indian head high beam indicator light in the instrument cluster. All 1958 models now featured Ball joint front suspension replacing the previous king pin design. With the 1959 model year, Pontiac came out with its "Arrowhead '' emblem, with the star design in the middle. The "Arrowhead '' design ran all the way up the hood from between the split grille, and on Star Chief Models, had eight chrome stars from the emblem design bolted to both sides of the vehicle as chrome trim. Knudsen saw to it that the car received a completely reworked chassis, body, and interior styling. Quad headlamps, as well as a longer, lower body were some of the styling changes. The Chieftain line was renamed Catalina; Star Chief was downgraded to replace the discontinued Super Chief series and for the first time did not have a two - door hardtop, only a two - door sedan along with a four - door hardtop and four - door sedan, in addition there was no Star Chief wagon. The Bonneville was now the top of the line, coming in three body styles of two - door hardtop, four - door vista and four - door wagon. The Star Chief 's four - door "Vista '' hardtop was also shared by the Bonneville. Catalina models included a two - door hardtop, two - door sedan, four - door sedan, four - door hardtop vista and two wagons, one a six - passenger and one a nine - passenger wagon. Bonneville and Star Chief were built on a 124 - inch (3,100 mm) wheelbase with the exception of the Bonneville wagon and all Catalina models and Bonneville wagon that rode on a 122 - inch wheelbase. Catalina was also seven inches shorter than Bonneville and Star Chief and weighed one hundred to two hundred pounds less than its long - wheelbase counterparts. All 1959 Pontiac engines were equipped with a 389 - cubic - inch engine with horsepower ratings from 215Hp economy engine to a conservative rated 345 hp Tri power carbureted engine. All automatics were four - speed Super-Hydra - Matics or, as Hydramatic Division who designed and built them called them, "Controlled coupling HydraMatic ''. A special note here is that Oldsmobile used this same transmission and called it Jetaway HydraMatic and Cadillac also used this transmission and Cadillac called it 315 or P 315 Hydramatic. 
A three - speed column - mounted stick shift was standard on all Pontiacs. This coincided with major body styling changes across all models that introduced increased glass area, twin V - shaped fins and lower hood profiles. Because of these changes, Motor Trend magazine picked the entire Pontiac line as 1959 Car of the Year. The ' 59s had a five - inch (127 mm) wider track (63 7 / 8 inches at the front and 64.0 inches at the rear), because Knudsen noticed the new, wider bodies looked awkward on the carried - over 1958 frames. The new "Wide - Track '' Pontiacs not only had improved styling, but also handled better -- a bonus that tied into Pontiac 's resurgence in the marketplace. The 1960 models saw a complete re-skinning, with the exception of the body 's canopies, which remained identical to those of the ' 59s, but removed the tail fins and the distinctive split grille (which Ford copied on the final Edsel models for 1960). The standard engines of the 1960 models all gained 3 horsepower, owing to a compression ratio raised by 0.25 over that of the ' 59 engine. The front track was now widened to match the 64 - inch rear track. The Ventura was introduced, a more luxurious hardtop coupe and Vista 4 - door hardtop built on the shorter 122 - inch (3,100 mm) wheelbase platform, slotting between the Catalina and Star Chief models. The Ventura offered the luxury features of the Bonneville in the shorter, lighter Catalina body. Most of Pontiac 's models built during the 1960s and 1970s were either styled like, or were siblings to, other GM makes (except Cadillac). However, Pontiac retained its own front and rear end styling, interiors, and engines. The 1961 models were similarly reworked. The split grille returned, as well as all - new bodies and a new perimeter - frame chassis design for all its full - size models (something which would be adopted for all of GM 's intermediate - sized cars in 1964, and all its full - sized cars in 1965). These new chassis allowed for reduced weight and smaller body sizes. The similarly styled Chevrolet still used the radically different "X '' frame in the early 1960s. But a complete departure in 1961 was the new Tempest, one of the three BOP (Buick - Olds - Pontiac) "compacts '' introduced that year, the others being the Buick Special and Skylark and the Oldsmobile F - 85 and Cutlass. Toward the end of the 1961 model year an upscale version of the Tempest called the "LeMans '' was introduced, named after the famous 24 Hours of Le Mans auto race in France. All three were uni-body cars, dubbed the "Y - body '' platform, combining the frame and body into a single construction, making them comparatively lighter and smaller. All three put into production new technology pushed by John DeLorean which GM had been working on for several years prior, but the Tempest was by far the most radical. A flexible steel shaft rotating at the speed of the engine delivered power from the front - mounted engine through a "torque tube '' to a rear - mounted trans - axle. This innovation not only delivered close to a 50 / 50 front - rear weight distribution that drastically improved handling, it also enabled four - wheel independent suspension, which enhanced it even more. It also all but eliminated the large floor "hump '' common to front - engined rear - drive cars. Though the Tempest 's transaxle was similar to the one in the Corvair, introduced the year before, it shared virtually no common parts.
GM had planned to launch a Pontiac version of the Corvair (dubbed "Polaris ''), but "Bunkie '' Knudsen -- whose niece had been seriously injured in a Corvair crash -- successfully argued against the idea. The Polaris design apparently made it to full - scale clay before it was cancelled. Instead, DeLorean 's "rope - shaft '' design was green - lighted. Contemporary rumors of the rope - shaft 's demise due to reliability problems are unfounded; the rope - shaft 's durability and performance had been proven in tests in full - size Pontiacs and Cadillacs in 1959, and only adapted to a smaller car in 1960. The Tempest won the Motor Trend "Car of the Year '' award in 1961 -- for Pontiac, the second time in three years. DeLorean 's vision has been further vindicated by the adoption of similar designs in a slew of modern high - performance cars, including the Porsche 928, 924, and 944, the Corvette C5, and the Aston Martin DB9. Unless customers checked an option, the Tempest 's power - plant was a 194.5 Ci inline - slant - four - cylinder motor, derived from the right bank of the venerable Pontiac 389 V8, enabling it to be run down the same production line as the 389, saving costs for both the car 's customers and Pontiac. Pontiac engineers ran early tests of this motor by literally cutting off the left bank of pistons and adding counterweights to the crankshaft, and were surprised to find it easily maintained the heaviest Pontiacs at over 90 miles per hour (140 km / h). In production, the engine received a crankshaft designed for just four cylinders, but this did n't completely solve its balance issues. The engine gained the nickname "Hay Baler '' because of it tendency to kick violently, like the farm machine, when its timing was off. The aforementioned Buick 215 V8, was ordered by less than two percent of its customers in the two years it was available, 1961 and 1962. Today, the 215 cars are among the most sought - after of all Tempests. In 1963, Pontiac replaced the 215 with a "new '' 326 which was really a 336 with a bore of 3.78 and stroke of 3.75 (same stroke as the 389), an iron block mill that had the same external dimensions and shared parts with the 389, but an altered, reduced bore. The car 's body and suspension was also changed to be lower, longer and wider. The response was that more than half of the 1963 Tempests and LeMans (separate lines for that one year only) were ordered with the V8, a trend that did not go unnoticed by management. The next year, the 326 would become a true 326 with a new bore size of 3.72. The Tempest 's popularity helped move Pontiac into third place among American car brands in 1962, a position Pontiac would hold through 1970. The Buick 215 V8 would remain in production for more than thirty five years, being used by Britain 's Rover Group after it had bought the rights to it. GM attempted to buy the rights back, however, Rover wished, instead, to sell the engines directly. In November 1961, Knudsen had moved to Chevrolet. Pete Estes now became general manager of Pontiac and Delorean was promoted to Pontiac Chief engineer. Both continued Knudsen 's work of making Pontiac a performance - car brand. Pontiac capitalized on the emerging trend toward sportier bucket - seat coupes in 1962 by introducing the Grand Prix, taking the place of the Ventura which now became a trim option on the Catalina. 
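The bore - and - stroke figures quoted above for the "326 '' V8 can be checked against the standard displacement formula (pi / 4 x bore squared x stroke x number of cylinders). The sketch below is only an illustration of that arithmetic, using the dimensions cited in the text; the function name is ours, not Pontiac's terminology.

```python
import math


def displacement_cu_in(bore: float, stroke: float, cylinders: int) -> float:
    """Engine displacement in cubic inches from bore and stroke (both in inches)."""
    return math.pi / 4 * bore ** 2 * stroke * cylinders


# 1963 "326" V8 as described above: 3.78 in bore x 3.75 in stroke over 8 cylinders.
print(round(displacement_cu_in(3.78, 3.75, 8), 1))  # -> 336.7, i.e. effectively a "336"

# 1964 revision: bore reduced to 3.72 in, giving a true 326.
print(round(displacement_cu_in(3.72, 3.75, 8), 1))  # -> 326.1
```

The same formula reproduces, to within rounding, the other displacements quoted in the article, such as the 186.7 cu in six of 1927 from its 3.25 in bore and 3.75 in stroke.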
Although GM officially ended factory support for all racing activities across all of its brands in January 1963, Pontiac continued to cater to performance car enthusiasts by making larger engines with more power available across all model lines. For 1963, the Grand Prix received the same styling changes as other full - sized Pontiacs such as vertical headlights and crisper body lines, but also received its own squared - off roof - line with a concave rear window, along with less chrome. This concave rear window would be duplicated on all Tempest / LeMans four - door intermediates in 1964 - 1965. For 1964, the Tempest and LeMans ' trans - axle design was dropped and the cars were redesigned under GM 's new A body platform; frame cars with a conventional front - engine, rear - wheel - drive layout. The most important of these is the GTO, short for "Gran Turismo Omologato, '' the Italian for "Grand Touring, Homologated '' used by Ferrari as a badge to announce a car 's official qualification for racing. In spite of a GM unwritten edict against engines larger than 330 Ci in intermediate cars, DeLorean (with support from Jim Wangers from Pontiac 's ad agency), came up with the idea to offer the GTO as an option package that included a 389 Ci engine rated at 325 or 348 horsepower (260 kW). The entire Pontiac lineup was honored as Motor Trend 's Car of the Year for 1965, the third time for Pontiac to receive such honors. The February, 1965 issue of Motor Trend was almost entirely devoted to Pontiac 's Car of the Year award and included feature stories on the division 's marketing, styling, engineering and performance efforts along with road tests of several models. Due to the popularity of the GTO option, it was split from being an option on the Tempest LeMans series to become the separate GTO series. On the technology front, 1966 saw the introduction of a completely new overhead camshaft 6 - cylinder engine in the Tempest, and in an industry first, plastic grilles were used on several models. The 1967 model year saw the introduction for the Pontiac Firebird pony car, a variant of the Chevrolet Camaro that was the brand 's answer to the hot - selling Ford Mustang. Intermediate sized cars (Tempest, LeMans, GTO) were mildly face - lifted but all full size cars and GTO lost their Tri-Power engine option though it did get a larger 400 cubic - inch V8 that replaced the previous 389. Full - sized cars got a major facelift with rounder wasp - waisted body lines, a name change for the mid-line series from Star Chief to Executive and a one - year - only Grand Prix convertible. 1968 introduced the Endura ' rubber ' front bumper on the GTO, the precursor to modern cars ' integrated bumpers, and the first of a series "Ram Air '' engines, which featured the induction of cold air to the carburetor (s) for more power, and took away some of the sting from deleting the famous Tri-Power multiple carburetion option from the engine line up. This Tri carburetor deletion came from the 14th floor of GM banning multiple carburetion and headed by GM president Ed Cole. The Ram Air V garnered much auto press publicity, but only a relative few were made available for sale. Full - sized cars and intermediates reverted from vertical to horizontal headlights while the sporty / performance 2 + 2 was dropped from the lineup. 
For 1969, Pontiac moved the Grand Prix from the full - sized lineup into a G - body model of its own based on the A-body intermediate four - door modified from 116 inches to 118 inches wheelbase chassis, but with distinctive styling and long hood / short deck proportions to create yet another niche product -- the intermediate - sized personal - luxury car that offered the luxury and styling of the higher priced personal cars such as the Buick Riviera and Ford Thunderbird and the old Grand Prix and Olds Starfire but for a much lower price tag. Pete Estes, who like Knudsen had moved to be general manager of Chevrolet in 1966 and Delorean, general manager of the Pontiac division, needed a car to take the place of the sagging sales of the full size Grand Prix, but the development cost of the car was too much of burden for Pontiac division alone, so Delorean went to his old boss at Chevrolet to gather support for the development cost of the new "G '' body Grand Prix. Estes agreed to share in the cost and allow Pontiac to have a one - year exclusivity on this new car, the next year Chevy would follow with its version which was called Chevrolet Monte Carlo. The new Grand Prix was such a sales success in 1969 as dealers moved 112,000 units - more than four times the number of Grand Prixs sold in 1968. Full - sized Pontiacs were also substantially restyled but retained the same basic under - body structure and chassis that debuted with the 1965 model - in fact the roof - lines for the four - door pillared sedans and Safari wagons were the same as the 1965 models, while the two - door semi-fastback design gave way to a squared - off notch - back style and four - door hardtop sedans were also more squared off than 1967 - 68 models. The GTOs and Firebirds received the Ram Air options, the GTO saw the addition of the "Judge '' performance / appearance package, and the Firebird also got the "Trans Am '' package. Although originally conceived as a 303 cubic inch model to compete directly in the Trans Am racing series, in a cost - saving move the Pontiac Trans Am debuted with the standard 400 - cubic - inch performance engines. This year also saw De Lorean leaving the post of general manager to accept a similar position at GM 's Chevrolet division. His replacement was F. James McDonald. Pontiacs built in the late 1960s conformed to new U.S. Federal Motor Vehicle Safety Standards. These included energy - absorbing interior parts such as steering columns, steering wheels, knobs and handles, dual - circuit hydraulic brake systems, shoulder belts, side marker lights, and headrests. The 1969 Firebirds received a heavy facelift but otherwise continued much the same as the original 1967 model. It was the final year for the overhead cam six - cylinder engine in Firebirds and intermediates, and the Firebird convertible (until 1991). Production of the 1969 Firebirds was extended into the first three months of the 1970 model year (all other 1970 Pontiacs debuted Sept. 18, 1969) due to a decision to delay the introduction of an all - new 1970 Firebird (and Chevrolet Camaro) until Feb. 26, 1970. In addition in the late - 1960s, GM directed their GM and Pontiac divisions to develop concept mini-cars called commuter car for urban drivers. GM developed a gasoline - electric drive hybrid the XP - 833 and the Pontiac X-4 a rear - wheel drive mid-engine car that was powered by a radical X-shaped aircraft type air - cooled two - stroke radial engine where the standard crankshaft was replaced by a unit called a Scotch yoke. 
While the GM car was fully tested the Pontiac concept was not. Neither was placed in production. Increasing insurance and fuel costs for owners coupled with looming Federal emissions and safety regulations would eventually put an end to the unrestricted, powerful engines of the 1960s. Safety, luxury and economy would become the new watch - words of this decade. Engine performance began declining in 1971 when GM issued a corporate edict mandating that all engines be capable of using lower - octane unleaded gasoline, which led to dramatic drops in compression ratios, along with performance and fuel economy. This, coupled with trying to build cars as plush as GM 's more luxurious Buicks and Oldsmobiles, contributed to the start of a slow decline of Pontiac in 1971. In mid-1971 Pontiac introduced the compact, budget - priced Ventura II (based on the third generation Chevrolet Nova). This same year, Pontiac completely restyled its full - sized cars, moved the Bonneville, and replaced it with a higher luxury model named the Grand Ville, while Safari wagons got a new clamshell tailgate that lowered into the body while the rear window raised into the roof. 1971 -- 1976 model full - size station wagons featured a ' Clamshell ' design where the rear power - operated glass slid up into the roof as the tailgate (manually or with power assist), dropped below the load floor. The power tailgate, the first in station wagon history, ultimately supplanted the manual tailgate, which required marked effort to lift from storage. The 1972 models saw the first wave of emissions reduction and safety equipment and updates. GTO was a now sub-series of the LeMans series. The Tempest, was dropped, after being renamed ' T - 37 ' and ' GT - 37 ' for 1971. The base 1972 mid-sized Pontiac was now called LeMans. James MacDonald left the post of general manager to be replaced by Martin J. Caserio in late 1972. Caserio was the first manager in over a decade to be more focused on marketing and sales than on performance. For 1973, Pontiac restyled its personal - luxury Grand Prix, mid-sized LeMans and compact Ventura models and introduced the all - new Grand Am as part of the LeMans line. All other models including the big cars and Firebirds received only minor updates. Again, power dropped across all engines as more emissions requirements came into effect. The 1973 Firebird Trans Am 's factory applied hood decal, a John Schinella restylized interpretation of the Native American fire bird, took up most of the available space on the hood. Also in 1973, the new Super Duty 455 engine ("Super Duty '' harkening back to Pontiac 's Racing Engines) was introduced. Although it was originally supposed to be available in GTOs and Firebirds, only a few SD 455 engines made it into Firebird Trans Ams that year. One so equipped was tested by ' Car and Driver ' magazine, who proclaimed it the last of the fast cars. But the pendulum had swung, and the SD 455 only hung on one more year in the Trans Am. All Federal emissions and safety regulations were required to be in full effect for 1974 causing the demise of two of the three iterations of the big 455 cubic inch engines after this year. The last version of the 455 would hang on for two more years before being discontinued. For 1975, Pontiac introduced the new sub-compact Astre, a version of the Chevrolet Vega. This was the brand 's entry into the fuel economy segment of the market. Astre had been sold exclusively in Canada from 1973. It was offered through the 1977 model year. 
1975 would also see the end of Pontiac convertibles for the next decade. The 1976 models were the last of the traditional American large cars powered by mostly big block V8 engines. After this year, all GM models would go through "downsizing '' and shrink in length, width, weight and available engine size. The 1976 Sunbird, based on the Chevrolet Vega and Monza 's equivalent, joined the line. It was first offered as a Notchback, with a Hatchback body style added in 1977. The Vega Wagon body style was added in 1978, Sunbird Safari Wagon, replacing the Astre Safari Wagon. The Sunbird was offered in its rear - wheel - drive configuration through the 1980 model year. (Sunbird Safari wagon through 1979.) At mid-year 1977, Pontiac introduced the Phoenix, an upscale version of the Ventura which replaced the Ventura entirely after the end of the 1977 model year. Pontiac also introduced its 151 cubic inch "Iron Duke '' 4 - cylinder overhead valve engine. It was first used in the 1977 Astre, replacing Astre 's aluminum - block 140 cubic inch Vega engine. The ' Iron Duke ' engine would later go into many GM and non-GM automobiles into the early 1990s. The 151 cubic inch L4 and the 301 cubic inch V8 were the last two engines designed solely by Pontiac. Subsequent engine design would be accomplished by one central office with all designs being shared by each brand. For model year 1977, the full sized Pontiacs received the same "downsizing '' as GM 's other "B '' body cars. The new Bonnevilles and Catalinas continued to be best - sellers, although their styling similarity to the Chevrolet Caprice was seen by some buyers as a "cheapening '' of Pontiac 's image. In 1981, the full - size Bonneville was discontinued, the name reassigned to the "A '' body intermediate platform. That left the Catalina as the only big Pontiac, further reducing sales as buyers went for more plushness. The remainder of the 1970s and the early 1980s saw the continued rise of luxury, safety and economy as the key selling points in Pontiac products. Wire - spoked wheel covers returned for the first time since the 1930s. More station wagons than ever were being offered. Padded vinyl roofs were options on almost every model. Rear - wheel drive began its slow demise with the introduction of the first front - wheel drive Pontiac, the 1980 Phoenix (a version of the Chevrolet Citation). The Firebird continued to fly high on the success of the ' Smokey and the Bandit ' film, still offering Formula and Trans Am packages, plus a Pontiac first - a turbocharged V8, for the 1980 and 1981 model years. In addition to this, The Rockford Files, which lasted for 6 years used an Esprit Firebird. James Garner was given a new model each year, which was resprayed and painted. But he disapproved of the front facelift. The turn which he performed throughout the show were all his own stunts and came to be known as the Rockford turn or J turn. Introduced in 1982, the wedge shaped Firebird was the first major redesign of the venerable pony car since 1970. Partly due to the hugely successful NBC television series Knight Rider, it was an instant success and provided Pontiac with a foundation on which to build successively more performance oriented models over the next decade. The Trans Am also set a production aerodynamic mark of. 32 cd. The next step in Pontiac 's resurgence came in the form of its first convertible in nine years. GM adapted the J - body cars and the all - new for 1982 J2000 (later renamed Sunbird) had a convertible as part of its line. 
Next, the 1984 Fiero came. This was a major departure from anything Pontiac had produced in the past. A two - seat, mid-engined coupe, the Fiero was targeted straight at the same market that Semon Knudsen had been aiming for in the late 1950s: the young, affluent buyer who wanted sporting performance at a reasonable price. The Fiero was also an instant success and was partially responsible for Pontiac seeing its first increase in sales in four years. Pontiac also began to focus on technology. In 1984, a Special Touring Edition (STE) was added to the 6000 line as a competitor to European road cars such as the Mercedes 190. The STE sported digital instruments and other electronics as well as a more powerful V - 6 and retuned suspension. Later iterations would see some of the first introductions on Pontiacs of anti-lock brakes, steering wheel mounted radio controls and other advanced features. Full - size buyers, disappointed by the lack of an available big Bonneville, complained, resulting in Pontiac 's importing the Canadian - market Pontiac Parisienne, which featured the Bonneville 's deluxe trim. This car, although a Pontiac in name, was no more than a slightly re-trimmed Caprice. Despite this fact, the Parisienne sold in profitable numbers and this car continued in production until 1986 for the sedan, and 1989 for the Safari station wagon. With the exception of the Parisienne Safari, the Firebird and Fiero, beginning in 1988 all Pontiacs, with the exception of the Firebird, switched to front - wheel drive platforms. For the first time since 1970, Pontiac was the number three domestic car maker in America. Pontiac 's drive to bring in more youthful buyers was working as the median age of Pontiac owners dropped from 46 in 1981 to 38 in 1988. During this time, becoming standard on Pontiacs were: anti-lock brakes, GM 's Quad - 4 engine, airbags, and composite materials. In 1989, there was the end of Safari wagon production, thus it was the last V8 powered full sized rear wheel drive Pontiac until the 2009 G8. The 1990 model year saw the launch of Pontiac 's first minivan and light truck, the Trans Sport. In addition, the Grand Prix line added its first ever 4 door model, offered in LE and STE trims. At the end of the 1991 model year, the 6000 was discontinued in favor of the newly expanded Grand Prix line up and the new Trans Sport minivan, which replaced the 6000 station wagon. In 1992, a brand - new Bonneville was introduced. This full - size model featured aerodynamic styling, large expanses of curved glass, front - wheel drive, and the 3800 Series I V6 as standard equipment. A new sub model, the SSEi, was introduced in 1992 carrying all standard equipment from the SSE model, plus the 205 hp supercharged 3800 V6. For 1993 the Bonneville added a new option package (H4U) called the Sport Luxury Edition (SLE), which was available on the SE model. This package included leather bucket seats, specific grille, side trim, exhaust, dash trim, lace alloy wheels, as well as a spoiler, sport handling and suspension systems and anti lock brakes. An all new Firebird was introduced in 1993. It was powered by either a 3.4 L V6 with 160 hp (120 kW), or in Trans Am guise a 275 hp (205 kW) LT - 1, a 5.7 L (350c. i.) V8, and could be backed by a T - 56 six - speed manual. The Sunbird was replaced with the (still J - body) Sunfire in 1995. While a V6 engine was no longer available in the J - car, sedan, coupe, and convertible body styles did survive. 
For 1996 the Bonneville received updated front and rear fascias along with several other enhancements. The 3800 Series II V6 had become standard in 1995, featuring 200 hp, and the updated supercharged 3800 Series II now featured 240 hp. 1996 was the last year for the 5th-generation Grand Prix. The redesigned Grand Prix debuted in 1997 with the "Wider is Better" advertising campaign, and the GTP trim level was added to the line. It featured a supercharged 3.8 L V6 rated at 240 hp (180 kW) and 280 lb⋅ft (380 N⋅m) of torque. One design highlight of this generation of Grand Prix was that the coupe and sedan models shared the roof's sheet metal. The 1999 model year saw the replacement of the Trans Sport with the larger Montana minivan. In 2000, the Bonneville got its first redesign since 1992; it was based on the G-body, shared with the Oldsmobile Aurora and Buick LeSabre. In 1998 the Firebird was updated: the Trans Am received the LS1 engine, which produced 305 hp (227 kW), and the WS6 option saw this number increase to 320 hp (240 kW) with the addition of Ram Air and stiffer springs. In 2002, both the Firebird / Trans Am and the Camaro were discontinued as a result of declining sales and a saturated sport market. The coupe version of the Grand Prix was also discontinued. The 2003 Vibe, a Toyota-based compact wagon built at the NUMMI joint-venture plant, arrived in spring 2002. Also in 2003, it was announced that the Grand Prix would be in the last year of its generation, with an improved 7th generation on the way for 2004. It would also be Pontiac's final year in NASCAR; Pontiac's final victory in NASCAR was achieved by Ricky Craven in one of the closest finishes in NASCAR history, and a few surplus Pontiacs continued running in the Busch Series through 2005. 2004 saw the re-introduction of the Pontiac GTO, based on the Holden Monaro from Australia. The GTO was initially powered by the 350 hp LS1 V8 and had independent front and rear suspension and an upscale full leather interior. Sales did not reach the 18,000 units that GM predicted. The LS1 engine was retired in 2004, and Pontiac added the drive-by-wire 400 hp LS2 V8 for the 2005-2006 model years at no additional cost. Additional upgrades consisted of stainless steel dual exhaust outlets, larger Corvette-sourced PBR brakes with EBD, larger front vented rotors with vented rear rotors, and functional heat-extractor hood scoops. The Bonneville introduced the GXP trim level to replace the SSEi; the Bonneville GXP featured a 4.6 L Northstar V8, borrowed from Cadillac, which replaced the supercharged 3800 Series II. The redesigned Grand Prix made its appearance, featuring GT and GTP trim levels. The GTP's new 3.8 L supercharged V6 now made 260 horsepower (190 kW), up 20 from the previous generation. TAPshift was also introduced, as well as a Competition Group package (Comp G). Pontiac went through a complete product revamping through this period. The Grand Am was replaced with the mid-size G6 in 2004; the Grand Am coupe was produced for the 2005 model year to fill the gap until the new G6 coupe and convertible became available for the 2006 model year. The Bonneville ended production in 2005 after a run of nearly 50 years. Although it was not directly replaced, the RWD G8 introduced for the 2009 model year did fill some of the market void.
The Solstice concept shown in 2002 was approved for production as a roadster (2006 - 2009) and, for a few months, a hard - top coupe (2009), which is considered to be quite rare, as a total of only 1,266 coupes made it off the assembly line in Wilmington, DE before it was shut down due to the demise of Pontiac. This is in stark contrast to the over 64,000 Solstice Convertibles that were manufactured on that same line. The controversial and slow - selling Aztek was finally phased out and replaced by the Torrent, which was identical to the Chevrolet Equinox. In 2005 the Sunfire was discontinued and replaced by the new Pontiac Pursuit (later named G5 for the American market). Initially, Pontiac did not plan on offering the G5 in the United States, however dealer pressure to fill the gap left by the discontinuation of the Sunfire caused Pontiac to introduce only the coupe variation into the U.S. The 4 door sedan was available in Canada as the Pursuit throughout the model run. The high - performance GXP trim was introduced in the Grand Prix line in 2005, adding GM 's LS4 V8 engine that produced 303 horsepower and 323 lb. ft. of torque. This engine was built to give buyers a V8 sedan option until the all - new G8 arrived in 2008. In 2008, the Grand Prix ended production and the launch of the Australian - built RWD G8 commenced. The G8 gained positive reception for its high performance and low costs. Many noted the G8 as the poor man 's BMW M5, due to similar performance at a much cheaper price. The G8 GXP was the most powerful production car Pontiac had ever built, and is widely regarded as the best driver 's car ever to wear the Pontiac badge. The Holden Ute was scheduled to be launched as the G8 ST before it was cancelled in January 2009 due to GM 's financial situation. It was later announced that the G8 may not see a second generation. Towards the end of the decade many rumors began spreading that Pontiac would become completely reliant on RWD. Reports ranged from a compact sedan based on the Alpha platform to a new RWD G6 for the 2013 model year. Many reports suggested that the Trans Am / Firebird would return after GM confirmed the rebirth of the Camaro, however, no reports confirmed this. On December 2, 2008, General Motors announced that it was considering eliminating numerous brands, including Pontiac, in order to appease Congress in hope of receiving a $25 billion loan. On February 17, 2009, GM proposed the elimination of its Saturn division, the sale of Saab, and either the sale or elimination of Hummer, depending on whether a buyer could be found quickly. GM clarified that Pontiac would have begun to focus on "niche '' models aimed at the "youthful and sporty '' segment, but did not provide specifics. Pontiac was to trim its number of models to four, although there was talk of retaining only one model. By April 2009 several automotive websites and business publications were reporting that GM was doing a study suggesting it might eliminate the brand altogether, along with sister truck brand GMC. On April 23 a report was published stating the company would be dropping the Pontiac brand while preserving the GMC truck line, and the Chevrolet, Cadillac, and Buick brands. The decision to eliminate Pontiac was made primarily due to the increasing threat of a bankruptcy filing if the June 1 deadline could not be met. On April 27, 2009, GM announced that Pontiac would be dropped and that all of its remaining models would be phased out by the end of 2010. 
Though both production and franchise agreements ended in 2010, Pontiac remains a registered and active trademark of GM. General Motors would eliminate an additional 7,000 to 8,000 factory jobs in the United States and shed 2,600 dealers by 2010 under a revised business plan. GM Chief Executive Officer Fritz Henderson said the Pontiac brand would be closed by 2010, calling it an "extremely personal decision ''. In addition to speeding up decisions on Saturn, Saab and Hummer, GM would be left with four brands -- Chevrolet, Buick, GMC and Cadillac. In early May 2009, Jim Waldron, a Davison, Michigan, Pontiac dealer, announced that he was interested in purchasing the Pontiac brand and logos, and had found financing to purchase them and some soon - to - be shuttered GM plants in order to build cars. However, GM had already decided to retire the brand as it has begun to sell off its remaining inventory and said that, unlike Saturn, Hummer, and Saab, Pontiac was not for sale. The Pontiac brand was pulled after the 2009 model year in Mexico and the brand was renamed Matiz, selling only one vehicle, the Matiz G2 (Matiz 's logo is similar to Pontiac 's). The last Pontiac, a white 2010 model year G6 4 door sedan, was built at the Orion Township Assembly Line in January, 2010. Pontiac became the second brand General Motors had eliminated in six years. Oldsmobile met the same fate in 2004 after being more slowly phased out over four years. Pontiac also became the ninth North American automobile brand since 1987 to be phased out, after Merkur, Mercury, Passport, Asüna, Geo, Plymouth, American Motors (AMC) (renamed Eagle in 1988, and phased out in 1999), and Oldsmobile. A Native American headdress was used as a logo until 1956. This was updated to the Native American red arrowhead design for 1957 in all usage except the high - beam indicator lamp, which retained the original logo through 1970. The arrowhead logo is also known as the Dart. Besides the logo, another identifying feature of Pontiacs were their "Silver Streaks '' -- one or more narrow strips of stainless steel which extended from the grille down the center of the hood. Eventually they extended from the rear window to the rear bumper as well, and finally; along the tops of the fins. Although initially a single band, this stylistic trademark doubled to two for 1955 - 1956. The Streaks were discontinued the same year as the Indian Head emblems (1957). One long - familiar styling element was the split - grille design which was introduced in 1959 to complement the make 's new "wide track '' stance. The 1960 models, however, reverted to the full - width grille styling. The split - grille then returned for the 1961 model year and would remain as the marque 's trademark. Other styling cues were the pointed ' arrowhead ' nose (in the 1960s and 70s), and "grilled - over '' (in the 1960s), or multiple horizontal - striped taillights. This later feature originated with the 1963 Grand Prix, and although the ' 62 Grand Prix also had rear grillework, the taillight lenses were not behind it. Less longstanding but equally memorable is the ' cladding ' common on the doors and fenders of Pontiacs produced in the 1990s and 2000s. Rather than minimizing the side bumper, Pontiac designers put two troughs going along the length. Bumpers with this appearance were found on nearly all Pontiacs until the arrival of the G6. From 2004 onwards, new Pontiacs had cleaner, more premium styling, but retained the traditional split grille. 
In Canada, the post-World War II Pontiac brand sold well. General Motors of Canada offered a line of full - size Pontiac cars that were styled like U.S market models, but were actually Chevrolets under their skins. Model lineup during this period included the base Strato - Chief, mid-range Laurentian, and top - of - the - line Parisienne series. Under their exteriors, however, these cars featured Chevrolet frames, engines, and even dimensions. Interiors (except for instrument panels which were Pontiac - based) were a combination of Chevy and Pontiac styling. During the early 1960s, Pontiacs featured the controversial "X '' frame used on the big Chevys, as well as the complete Chevy lineup of OHV straight Sixes, small - block 283 and 327 cubic inch V8s, and the big - block 348 and 409 V8s. This scheme was used well into the 1980s, and the Caprice - based 1984 and later Parisienne made it into U.S. Pontiac showrooms to replace the recently discontinued Bonneville. This strategy helped keep the price of the cars to a minimum, as was needed in the then less - affluent Canadian marketplace. GM of Canada was already building Chevrolets in Ontario; they only needed to stamp Pontiac - styled body skins (these were styled like, but not interchangeable with, US Pontiac body parts) and import Pontiac - specific trim from the United States, to convert these Chevys to Pontiacs. It also reduced the cost of tariffs GM would have needed to pay, had they imported U.S. - market Pontiacs Up North. GM of Canada also executed right - hand drive versions of Pontiac for export. These cars were popular in Australia, where GM faced competition from the big Ford Galaxie and Dodge Phoenix. Pontiac dealers in Canada also sold smaller Chevrolet - based cars under the Acadian and Beaumont badges. These models are often referred to as Pontiacs, but in fact were never marketed as such, nor did they ever wear Pontiac badges (although the Acadian and Beaumont emblem was in fact, similar to the Pontiac Arrowhead). However, some Chevys were badged as Pontiacs later on in Canada. Such cars include the T1000 (based on the Chevette), the Astre (based on the Vega), and the Firefly (based on the Sprint). Pontiac engineer Clayton Leach designed the stamped steel valvetrain rocker arm, a simplified and reliable alternative to a bearing - equipped rocker. This design was subsequently picked up by nearly every OHV engine manufacturer at one point or another. Pontiac began work on a V8 configuration in 1946. This was initially intended to be an L - head engine, and 8 experimental units were built and extensively tested by the end of the 1940s. But testing comparisons to the OHV Oldsmobile V8 revealed the L - head could not compete performance-wise. So, in addition to building a new Pontiac Engineering building in 1949 -- 1951, the decision to re-direct the V8 to an OHV design delayed its introduction until the 1953 model year, however the Buick division was introducing its new engine (Nailhead V - 8) in 1953 and asked the corporation to hold back or delay Pontiac 's V - 8 introduction until the 1955 model year which it did. In mid-1956, Pontiac introduced a higher - powered version of its V8. Among other things, this version of the engine was equipped with a high - performance racing camshaft and dual 4 - barrel carburetors. 
This was the first in a series of NASCAR - ready pre - Super-Duty V8 engines and introduced the long line of multi-carburetor equipped engines that saw Pontiac become a major player during the muscle car and pony car era of the 1960s. The enlarged 1956 Pontiac V8 found its way into light - duty GMC pickup trucks. Pontiac 's second generation V8 engines shared numerous similarities, allowing many parts to interchange from its advent in 1959 to its demise in 1979. Sizes ranged from 287 cubic inch to 455 cubic inch. This similarity (except the 301 & 265) makes rebuilding these engines relatively easier. This feature also made it possible for Pontiac to invent the modern muscle car, by the relatively simple process of placing its second largest - displacement engine, the 389 cid, into its mid-size car, the Le Mans, creating the Pontiac LeMans GTO. From their inception in the 1950s until the early 1970s, Pontiac engines were known for their performance. The largest engine was a massive 455 cubic inch V8 that was available in most of their mid-size, full - size and sports car models. At the height of the horsepower era, Pontiac engines reached a powerful 390 rated horsepower (SAE gross), though other engines achieved considerably higher outputs in actuality. Federal emissions laws eventually brought the horsepower era to a close and resulted in a steady decline for Pontiac 's engines. One holdout to this industry - wide slide was the Super Duty 455 engine of 1973 -- 1974. Available only in the Firebird Formula and Trans Am models, this was rated at 310 hp (230 kW) net initially but after having issues passing EPA emissions tests, the camshaft was changed to the old RA III cam and with the change, came a 290 hp (220 kW) net rating. The engine was the pinnacle of Pontiac engine development and was a very strong performer that included a few race - specific features, such as provisions for dry - sump oiling. This engine and its legacy drive the SD Trans Ams and Formulas as one of the more, if not the most, desirable Pontiacs ever produced. The only non-traditional Pontiac V8 engines were the 301 cubic inch and the smaller displacement 265 cubic inch V8s. Produced from 1977 through 1981, these engines had the distinction of being the last V8s produced by Pontiac; GM merged its various brands ' engines into one collectively shared group in 1980, entitled General Motors Powertrain. The 301 had a 4 - inch (100 mm) bore and 3 - inch (76 mm) stroke, identical to the vaunted Chevrolet small - block engine and Ford Boss 302 engine. Pontiac engines were not available in Canada, however, but were replaced with Chevrolet engines of similar size and power, resulting in such models as the Beaumont SD - 396 with a Chevrolet big - block 396 cubic inch V8. PMD used Carter 1 - barrel carburetors for many years, but by the time of the second generation V8 engines had switched mostly to 2 - barrel offerings. These also were the basis for the Tri-Power setups on the engines. The Tri-Power setup included one center carburetor with idle control and two end carburetors that did not contribute until the throttle was opened more than half way. This was accomplished two ways, mechanically for the manual transmission models, and via a vacuum - switch on the automatics. This went through various permutations as it was only a factory installed option in from 1957 - 1966. PMD also had a square - bore 4 - barrel at the time, but this was rated at a lower power than the Tri-Power. 
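Engine displacement follows directly from the bore, stroke and cylinder count quoted above (displacement = π/4 × bore² × stroke × number of cylinders). The short Python sketch below applies that formula to the 301's published 4-inch bore and 3-inch stroke as a sanity check; the function name and the rounding are illustrative choices, not anything taken from the source.

```python
import math

def displacement_cubic_inches(bore_in: float, stroke_in: float, cylinders: int) -> float:
    """Total swept volume: area of one cylinder bore, times stroke, times cylinder count."""
    return math.pi / 4 * bore_in ** 2 * stroke_in * cylinders

# Pontiac 301 V8: 4.00 in bore x 3.00 in stroke, 8 cylinders
print(round(displacement_cubic_inches(4.00, 3.00, 8), 1))  # ~301.6 cu in, marketed as the "301"
```

Rounded to the nearest cubic inch, the same arithmetic also explains the "302" badge on the Ford engine of identical bore and stroke mentioned above.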
This carburetor was later replaced by the Quadrajet, a spread-bore design. "Spread-bore" refers to the difference in size between the primaries and secondaries: smaller primaries are paired with larger secondaries for increased airflow at wider throttle openings, with fuel-delivery staging akin to the two-plus-four benefit of the Tri-Power but from a single carburetor. The Quadrajet was not the only change that gave the top GTO 400 cu in engine and the 428 H.O. engines the same horsepower ratings as the 389 and 421. Aside from the displacement advantage, the newer engines had redesigned cylinder heads with different valve angles and larger ports; the revised valve angles allowed larger-diameter intake and exhaust valves. In many tests, a Tri-Power setup added to a 400 or 428 cu in engine made even more horsepower than the Quadrajet. By the end of the muscle car era, the Quadrajet setup had become the nearly ubiquitous choice on PMD engines. The Quadrajet design continued until 1990 for Oldsmobile V8 applications, with computer controls added to meet emissions and fuel economy standards. Gallery (image captions): Pontiac New Series 6-28 8240 2-door Sedan 1928; Pontiac Big Six Series 6-29 8930 4-Door Landaulette 1929; Pontiac Series 603 34318 Convertible Coupé 1934; Pontiac De Luxe Series 26 2611 2-door Touring Coach 1937; Pontiac De Luxe Series 26 2611 2-door Touring Sedan 1938; Pontiac De Luxe Convertible Coupé 1939; Pontiac Station Wagon 1948; Pontiac Chieftain Catalina 1953; Pontiac Star Chief 1954; Pontiac Laurentian Convertible 1956; Pontiac Star Chief 1957; Pontiac Bonneville Convertible 1957; Pontiac 2119 Tempest 1961; Pontiac GTO 1966; Pontiac Fiero 1988; Pontiac Grand Am Sedan 1996-1998; Pontiac Bonneville 2003; Pontiac Grand Prix GTP 2005; Pontiac GTO 2006; Pontiac G8 2008
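Returning to the carburetors: the staged induction described in the two preceding paragraphs — a small primary circuit for idle and light throttle, with the larger secondaries (or the Tri-Power's end carburetors) joining in only past roughly half throttle — can be sketched as a toy model. The threshold value and the returned descriptions below are illustrative placeholders, not Pontiac calibration data.

```python
def active_barrels(throttle: float, secondary_threshold: float = 0.5) -> str:
    """Toy model of staged induction: primaries always meter air and fuel;
    secondaries (or the Tri-Power end carburetors) open only past the threshold."""
    if not 0.0 <= throttle <= 1.0:
        raise ValueError("throttle must be between 0 (closed) and 1 (wide open)")
    if throttle <= secondary_threshold:
        return "primaries only (small bores, good low-speed metering)"
    return "primaries + secondaries (large bores, maximum airflow)"

for t in (0.2, 0.5, 0.8):
    print(f"throttle {t:.0%}: {active_barrels(t)}")
```

Whether the hand-off happens mechanically (manual transmissions) or via a vacuum switch (automatics), as the text notes for the Tri-Power, only changes how the threshold is triggered, not the staging idea itself.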
who scored a hat-trick in the 1966 world cup final
1966 FIFA World Cup final - wikipedia The 1966 FIFA World Cup Final was the final match in the 1966 FIFA World Cup, the eighth football World Cup. The match was played by England and West Germany on 30 July 1966 at Wembley Stadium in London, and had an attendance of 96,924. The British television audience peaked at 32.3 million viewers, making it the most watched television event ever in the United Kingdom. England won 4 -- 2 after extra time to win the Jules Rimet Trophy. The England team became known as the "wingless wonders '', on account of their then - unconventional narrow attacking formation, described at the time as a 4 -- 4 -- 2. The match is remembered for England 's only World Cup trophy, Geoff Hurst 's hat - trick -- the first, and to date, only one ever scored in a FIFA World Cup Final -- and the controversial third goal awarded to England by referee Gottfried Dienst and linesman Tofiq Bahramov. Both teams were strong throughout the tournament. Each won two and drew one of their three matches in the group stages. England did not concede a goal until their semi-final against Portugal. England, managed by Alf Ramsey and captained by Bobby Moore, won the toss and elected to kick off. After 12 minutes, Sigfried Held sent a cross into the English penalty area which Ray Wilson misheaded to Helmut Haller, who got his shot on target. Jack Charlton and goalkeeper Gordon Banks failed to deal with the shot which went in making it 1 -- 0 to West Germany. In the 19th minute, Wolfgang Overath conceded a free kick, which Moore took immediately, floating a cross into the West German area, where Geoff Hurst rose unchallenged and levelled the scores with a downward glancing header. The teams were level at half - time, and after 77 minutes England won a corner. Alan Ball delivered the ball to Geoff Hurst whose deflected shot from the edge of the area found Martin Peters. He produced the final shot, beating the West German keeper from eight yards to make the score 2 -- 1 to England. Germany pressed for an equaliser in the closing moments, and in the 89th minute Jack Charlton conceded a free kick for climbing on Uwe Seeler as they both went up for a header. The kick was taken by Lothar Emmerich, who struck it into George Cohen in the wall; the rebound fell to Held, who shot across the face of goal and into the body of Karl - Heinz Schnellinger. The ball deflected across the England six - yard box, wrong - footing the England defence and allowing Wolfgang Weber to level the score at 2 -- 2 and force the match into extra time. Banks protested that the ball had struck Schnellinger on the arm, and reiterated the claim in his 2002 autobiography, but replays showed that it actually struck Schnellinger on the back. England pressed forward and created several chances. In particular, with five minutes gone, Bobby Charlton struck the post and sent another shot just wide. With 11 minutes of extra time gone, Alan Ball put in a cross and Geoff Hurst swivelled and shot from close range. The ball hit the underside of the crossbar, bounced down and was cleared. The referee Gottfried Dienst was uncertain if it had been a goal and consulted his linesman, Tofiq Bahramov from Azerbaijan in the USSR, who indicated that it was, and the Swiss referee awarded the goal to the home team. The crowd and the audience of 400 million television viewers were left arguing whether the goal should have been given or not. England 's third goal has remained controversial ever since the match. 
According to the Laws of the Game, a goal is scored only when "the whole of the ball passes over the goal line". English supporters cited the good position of the linesman and the statement of Roger Hunt, the nearest England player to the ball, who claimed it was a goal and that this was why he wheeled away in celebration rather than attempting to tap the rebounding ball in. Modern studies using film analysis and computer simulation have shown that the ball never crossed the line -- both Duncan Gillies of the Visual Information Processing Group at Imperial College London and Ian Reid and Andrew Zisserman of the Department of Engineering Science at the University of Oxford have stated that the ball would have needed to travel a further 2.5-6.0 cm to fully cross the line. Some Germans cited possible bias of the Soviet linesman, especially as the USSR had just been defeated in the semi-finals by West Germany. Bahramov later stated in his memoirs that he believed the ball had bounced back not from the crossbar but from the net, and that he was not able to observe the rest of the scene, so it did not matter where the ball hit the ground anyway. One minute before the end of play, the West Germans sent their defenders forward in a desperate attempt to score a last-minute equaliser. Winning the ball, Bobby Moore picked out the unmarked Geoff Hurst with a long pass, which Hurst carried forward while some spectators began streaming onto the field, and Hurst scored moments later. Hurst later admitted that his blistering shot was as much intended to send the ball as far into the Wembley stands as possible should it miss, in order to kill time on the clock. The final goal gave rise to one of the most famous sayings in English football, when BBC commentator Kenneth Wolstenholme described the situation as follows: "And here comes Hurst. He's got... some people are on the pitch, they think it's all over. It is now! It's four!" One of the balls from the final is on display in the National Football Museum in Manchester. One of the enduring images of the celebrations at Wembley immediately after the game was the picture of the captain Bobby Moore holding the Jules Rimet Trophy aloft, on the shoulders of Geoff Hurst and Ray Wilson, together with Martin Peters. In recognition of Moore and other West Ham United players' contribution to the win, the club and Newham Borough Council jointly commissioned a statue of this scene. On 28 April 2003, Prince Andrew, as president of The Football Association, unveiled the World Cup Sculpture (also called The Champions) in a prominent place near West Ham's ground at the time, the Boleyn Ground, at the junction of Barking Road and Green Street. The 4-metre (13 ft) high bronze piece was sculpted by Philip Jackson and weighed 4 tonnes. As of July 2018, the final remains the most watched event ever on British television, attracting 32.30 million viewers. In Germany, a goal resulting from a shot bouncing off the crossbar and hitting the line is called a Wembley-Tor (Wembley Goal) due to the controversial nature of Hurst's second goal. This goal has been parodied a large number of times. In August 1966 a special 4d stamp marked ENGLAND WINNERS was issued by the Royal Mail to celebrate the victory; it soared in value to as much as 15 shillings each on the back of public enthusiasm before falling back when the public realised it was not rare.
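The Laws-of-the-Game wording quoted above reduces to a simple geometric test: the whole ball has crossed only when the ball's centre is more than one ball radius beyond the line. Below is a minimal sketch of that test, using the approximate radius of a regulation ball (68-70 cm circumference) and treating the studies' 2.5-6.0 cm shortfall purely as illustrative inputs.

```python
import math

BALL_CIRCUMFERENCE_CM = 69.0                              # mid-range of the regulation 68-70 cm
BALL_RADIUS_CM = BALL_CIRCUMFERENCE_CM / (2 * math.pi)    # roughly 11 cm

def whole_ball_over_line(centre_past_line_cm: float) -> bool:
    """A goal stands only if the trailing edge of the ball clears the line,
    i.e. the centre is more than one radius beyond it."""
    return centre_past_line_cm > BALL_RADIUS_CM

# If the ball fell short of fully crossing by 2.5-6.0 cm, its centre was
# only about 5-8.5 cm past the line: under the test, no goal.
for shortfall in (2.5, 6.0):
    centre = BALL_RADIUS_CM - shortfall
    print(f"shortfall {shortfall} cm -> centre {centre:.1f} cm past line:",
          whole_ball_over_line(centre))
```

On this reading, even the smaller 2.5 cm shortfall leaves the centre of the ball well short of the full-radius mark, which is why the cited analyses conclude the third goal should not have stood.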
The 1991 BBC miniseries Sleepers, about a pair of deep - cover KGB agents placed in England in the mid-1960s and then forgotten includes a subplot about an archive film of the match recorded by Soviet agents and then placed in archives. A KGB officer who sees the film in the early 1990s is excited to discover it includes footage of the disputed goal and attempts to sell it to a contact at a television network (pointedly described as not the BBC). Sleepers is coy about what the film depicts and in the course of the story, the film is destroyed. Marking the 50th anniversary of England 's World Cup victory in July 2016, ITV broadcast 1966 -- A Nation Remembers, which was narrated by the actor Terence Stamp who attended every England game at the tournament. The World Cup win features in the song "Three Lions '' (known by its chorus "Football 's Coming Home ''), the unofficial anthem of the England football team. England 's win in the final also helped fans to create the "Two World Wars and One World Cup '' chant. The players and staff of England 's winning squad who did not get medals in 1966 received them on 10 June 2009 after a ceremony at 10 Downing Street in London. Initially, only the 11 players on the pitch at the end of the match received medals, but FIFA later awarded medals to every non-playing squad and staff member from every World Cup - winning country from 1930 to 1974.
which of the following elements is an example of a metalloid silicon sodium silver platinum
Metalloid - wikipedia Recognition status, as metalloids, of some elements in the p - block of the periodic table. Percentages are median appearance frequencies in the lists of metalloids. The staircase - shaped line is a typical example of the arbitrary metal -- nonmetal dividing line found on some periodic tables. A metalloid is any chemical element which has properties in between those of metals and nonmetals, or that has a mixture of them. There is neither a standard definition of a metalloid nor complete agreement on the elements appropriately classified as such. Despite the lack of specificity, the term remains in use in the literature of chemistry. The six commonly recognised metalloids are boron, silicon, germanium, arsenic, antimony, and tellurium. Five elements are less frequently so classified: carbon, aluminium, selenium, polonium, and astatine. On a standard periodic table, all eleven are in a diagonal area in the p - block extending from boron at the upper left to astatine at lower right, along the dividing line between metals and nonmetals shown on some periodic tables. Typical metalloids have a metallic appearance, but they are brittle and only fair conductors of electricity. Chemically, they behave mostly as nonmetals. They can form alloys with metals. Most of their other physical and chemical properties are intermediate in nature. Metalloids are usually too brittle to have any structural uses. They and their compounds are used in alloys, biological agents, catalysts, flame retardants, glasses, optical storage and optoelectronics, pyrotechnics, semiconductors, and electronics. The electrical properties of silicon and germanium enabled the establishment of the semiconductor industry in the 1950s and the development of solid - state electronics from the early 1960s. The term metalloid originally referred to nonmetals. Its more recent meaning, as a category of elements with intermediate or hybrid properties, became widespread in 1940 -- 1960. Metalloids are sometimes called semimetals, a practice that has been discouraged, as the term semimetal has a different meaning in physics than in chemistry. In chemistry, it specifically refers to the electronic band structure of a substance. A metalloid is an element with properties in between, or that are a mixture of, those of metals and nonmetals, and which is therefore hard to classify as either a metal or a nonmetal. This is a generic definition that draws on metalloid attributes consistently cited in the literature. Difficulty of categorisation is a key attribute. Most elements have a mixture of metallic and nonmetallic properties, and can be classified according to which set of properties is more pronounced. Only the elements at or near the margins, lacking a sufficiently clear preponderance of either metallic or nonmetallic properties, are classified as metalloids. Boron, silicon, germanium, arsenic, antimony, and tellurium are recognised commonly as metalloids. Depending on the author, one or more from selenium, polonium, or astatine are sometimes added to the list. Boron sometimes is excluded, by itself, or with silicon. Sometimes tellurium is not regarded as a metalloid. The inclusion of antimony, polonium, and astatine as metalloids also has been questioned. Other elements occasionally are classified as metalloids. These elements include hydrogen, beryllium, nitrogen, phosphorus, sulfur, zinc, gallium, tin, iodine, lead, bismuth, and radon. 
The term metalloid also has been used for elements that exhibit metallic lustre and electrical conductivity, and that are amphoteric, such as arsenic, antimony, vanadium, chromium, molybdenum, tungsten, tin, lead, and aluminium. The p - block metals, and nonmetals (such as carbon or nitrogen) that can form alloys with metals or modify their properties also have occasionally been considered as metalloids. No widely accepted definition of a metalloid exists, nor any division of the periodic table into metals, metalloids and nonmetals; Hawkes questioned the feasibility of establishing a specific definition, noting that anomalies can be found in several attempted constructs. Classifying an element as a metalloid has been described by Sharp as "arbitrary ''. The number and identities of metalloids depend on what classification criteria are used. Emsley recognised four metalloids (germanium, arsenic, antimony and tellurium); James et al. listed twelve (Emsley 's plus boron, carbon, silicon, selenium, bismuth, polonium, moscovium and livermorium). On average, seven elements are included in such lists; individual classification arrangements tend to share common ground and vary in the ill - defined margins. A single quantitative criterion such as electronegativity is commonly used, metalloids having electronegativity values from 1.8 or 1.9 to 2.2. Further examples include packing efficiency (the fraction of volume in a crystal structure occupied by atoms) and the Goldhammer - Herzfeld criterion ratio. The commonly recognised metalloids have packing efficiencies of between 34 % and 41 %. The Goldhammer - Herzfeld ratio, roughly equal to the cube of the atomic radius divided by the molar volume, is a simple measure of how metallic an element is, the recognised metalloids having ratios from around 0.85 to 1.1 and averaging 1.0. Other authors have relied on, for example, atomic conductance or bulk coordination number. Jones, writing on the role of classification in science, observed that "(classes) are usually defined by more than two attributes ''. Masterton and Slowinski used three criteria to describe the six elements commonly recognised as metalloids: metalloids have ionization energies around 200 kcal / mol (837 kJ / mol) and electronegativity values close to 2.0. They also said that metalloids are typically semiconductors, though antimony and arsenic (semimetals from a physics perspective) have electrical conductivities approaching those of metals. Selenium and polonium are suspected as not in this scheme, while astatine 's status is uncertain. Periodic table extract showing groups 1 -- 2 and 12 -- 18, and a dividing line between metals and nonmetals. Percentages are median appearance frequencies in the list of metalloid lists. Sporadically recognised elements show that the metalloid net is sometimes cast very widely; although they do not appear in the list of metalloid lists, isolated references to their designation as metalloids can be found in the literature (as cited in this article). Metalloids lie on either side of the dividing line between metals and nonmetals. This can be found, in varying configurations, on some periodic tables. Elements to the lower left of the line generally display increasing metallic behaviour; elements to the upper right display increasing nonmetallic behaviour. When presented as a regular stairstep, elements with the highest critical temperature for their groups (Li, Be, Al, Ge, Sb, Po) lie just below the line. 
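The single-criterion screens described above lend themselves to a simple rule check. The sketch below applies only the electronegativity window (roughly 1.8-2.2, as quoted in the text) to a handful of elements; the Pauling values are standard reference figures rather than anything from the source, and the cut-offs are the ones the text cites, not an official definition.

```python
# Pauling electronegativities (standard reference values)
ELECTRONEGATIVITY = {
    "sodium": 0.93, "silicon": 1.90, "germanium": 2.01,
    "arsenic": 2.18, "sulfur": 2.58,
}

def in_metalloid_window(element: str, low: float = 1.8, high: float = 2.2) -> bool:
    """Single-criterion screen: electronegativity between roughly 1.8/1.9 and 2.2."""
    return low <= ELECTRONEGATIVITY[element] <= high

for name, value in ELECTRONEGATIVITY.items():
    verdict = "candidate metalloid" if in_metalloid_window(name) else "outside window"
    print(f"{name}: {value} -> {verdict}")
```

The coarseness of any single criterion is easy to see: silver, with a Pauling electronegativity of 1.93, would pass this screen despite being unambiguously a metal, which is why the authors cited above combine several properties (packing efficiency, the Goldhammer-Herzfeld ratio, conductivity) rather than relying on one.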
The diagonal positioning of the metalloids represents an exception to the observation that elements with similar properties tend to occur in vertical groups. A related effect can be seen in other diagonal similarities between some elements and their lower right neighbours, specifically lithium - magnesium, beryllium - aluminium, and boron - silicon. Rayner - Canham has argued that these similarities extend to carbon - phosphorus, nitrogen - sulfur, and into three d - block series. This exception arises due to competing horizontal and vertical trends in the nuclear charge. Going along a period, the nuclear charge increases with atomic number as do the number of electrons. The additional pull on outer electrons as nuclear charge increases generally outweighs the screening effect of having more electrons. With some irregularities, atoms therefore become smaller, ionization energy increases, and there is a gradual change in character, across a period, from strongly metallic, to weakly metallic, to weakly nonmetallic, to strongly nonmetallic elements. Going down a main group, the effect of increasing nuclear charge is generally outweighed by the effect of additional electrons being further away from the nucleus. Atoms generally become larger, ionization energy falls, and metallic character increases. The net effect is that the location of the metal -- nonmetal transition zone shifts to the right in going down a group, and analogous diagonal similarities are seen elsewhere in the periodic table, as noted. Depictions of metalloids vary according to the author. Some do not classify elements bordering the metal -- nonmetal dividing line as metalloids, noting that a binary classification can facilitate the establishment of rules for determining bond types between metals and nonmetals. Metalloids are variously grouped with metals, regarded as nonmetals or treated as a sub-category of nonmetals. Other authors have suggested that classifying some elements as metalloids "emphasizes that properties change gradually rather than abruptly as one moves across or down the periodic table ''. Some periodic tables distinguish elements that are metalloids and display no formal dividing line between metals and nonmetals. Metalloids are shown as occurring in a diagonal band or diffuse region. Metalloids usually look like metals but behave largely like nonmetals. Physically, they are shiny, brittle solids with intermediate to relatively good electrical conductivity and the electronic band structure of a semimetal or semiconductor. Chemically, they mostly behave as (weak) nonmetals, have intermediate ionization energies and electronegativity values, and amphoteric or weakly acidic oxides. They can form alloys with metals. Most of their other physical and chemical properties are intermediate in nature. Characteristic properties of metals, metalloids and nonmetals are summarized in the table. Physical properties are listed in order of ease of determination; chemical properties run from general to specific, and then to descriptive. The above table reflects the hybrid nature of metalloids. The properties of form, appearance, and behaviour when mixed with metals are more like metals. Elasticity and general chemical behaviour are more like nonmetals. Electrical conductivity, band structure, ionization energy, electronegativity, and oxides are intermediate between the two. Metalloids are too brittle to have any structural uses in their pure forms. 
They and their compounds are used as (or in) alloying components, biological agents (toxicological, nutritional and medicinal), catalysts, flame retardants, glasses (oxide and metallic), optical storage media and optoelectronics, pyrotechnics, semiconductors and electronics. Writing early in the history of intermetallic compounds, the British metallurgist Cecil Desch observed that "certain non-metallic elements are capable of forming compounds of distinctly metallic character with metals, and these elements may therefore enter into the composition of alloys ''. He associated silicon, arsenic and tellurium, in particular, with the alloy - forming elements. Phillips and Williams suggested that compounds of silicon, germanium, arsenic and antimony with B metals, "are probably best classed as alloys ''. Among the lighter metalloids, alloys with transition metals are well - represented. Boron can form intermetallic compounds and alloys with such metals of the composition M B, if n > 2. Ferroboron (15 % boron) is used to introduce boron into steel; nickel - boron alloys are ingredients in welding alloys and case hardening compositions for the engineering industry. Alloys of silicon with iron and with aluminium are widely used by the steel and automotive industries, respectively. Germanium forms many alloys, most importantly with the coinage metals. The heavier metalloids continue the theme. Arsenic can form alloys with metals, including platinum and copper; it is also added to copper and its alloys to improve corrosion resistance and appears to confer the same benefit when added to magnesium. Antimony is well known as an alloy - former, including with the coinage metals. Its alloys include pewter (a tin alloy with up to 20 % antimony) and type metal (a lead alloy with up to 25 % antimony). Tellurium readily alloys with iron, as ferrotellurium (50 -- 58 % tellurium), and with copper, in the form of copper tellurium (40 -- 50 % tellurium). Ferrotellurium is used as a stabilizer for carbon in steel casting. Of the non-metallic elements less often recognised as metalloids, selenium -- in the form of ferroselenium (50 -- 58 % selenium) -- is used to improve the machinability of stainless steels. All six of the elements commonly recognised as metalloids have toxic, dietary or medicinal properties. Arsenic and antimony compounds are especially toxic; boron, silicon, and possibly arsenic, are essential trace elements. Boron, silicon, arsenic and antimony have medical applications, and germanium and tellurium are thought to have potential. Boron is used in insecticides and herbicides. It is an essential trace element. As boric acid, it has antiseptic, antifungal, and antiviral properties. Silicon is present in silatrane, a highly toxic rodenticide. Long - term inhalation of silica dust causes silicosis, a fatal disease of the lungs. Silicon is an essential trace element. Silicone gel can be applied to badly burned patients to reduce scarring. Salts of germanium are potentially harmful to humans and animals if ingested on a prolonged basis. There is interest in the pharmacological actions of germanium compounds but no licensed medicine as yet. Arsenic is notoriously poisonous and may also be an essential element in ultratrace amounts. During World War I, both sides used "arsenic - based sneezing and vomiting agents... to force enemy soldiers to remove their gas masks before firing mustard or phosgene at them in a second salvo. 
'' It has been used as a pharmaceutical agent since antiquity, including for the treatment of syphilis before the development of antibiotics. Arsenic is also a component of melarsoprol, a medicinal drug used in the treatment of human African trypanosomiasis or sleeping sickness. In 2003, arsenic trioxide (under the trade name Trisenox) was re-introduced for the treatment of acute promyelocytic leukaemia, a cancer of the blood and bone marrow. Arsenic in drinking water, which causes lung and bladder cancer, has been associated with a reduction in breast cancer mortality rates. Metallic antimony is relatively non-toxic, but most antimony compounds are poisonous. Two antimony compounds, sodium stibogluconate and stibophen, are used as antiparasitical drugs. Elemental tellurium is not considered particularly toxic; two grams of sodium tellurate, if administered, can be lethal. People exposed to small amounts of airborne tellurium exude a foul and persistent garlic - like odour. Tellurium dioxide has been used to treat seborrhoeic dermatitis; other tellurium compounds were used as antimicrobial agents before the development of antibiotics. In the future, such compounds may need to be substituted for antibiotics that have become ineffective due to bacterial resistance. Of the elements less often recognised as metalloids, beryllium and lead are noted for their toxicity; lead arsenate has been extensively used as an insecticide. Sulfur is one of the oldest of the fungicides and pesticides. Phosphorus, sulfur, zinc, selenium and iodine are essential nutrients, and aluminium, tin and lead may be. Sulfur, gallium, selenium, iodine and bismuth have medicinal applications. Sulfur is a constituent of sulfonamide drugs, still widely used for conditions such as acne and urinary tract infections. Gallium nitrate is used to treat the side effects of cancer; gallium citrate, a radiopharmaceutical, facilitates imaging of inflamed body areas. Selenium sulfide is used in medicinal shampoos and to treat skin infections such as tinea versicolor. Iodine is used as a disinfectant in various forms. Bismuth is an ingredient in some antibacterials. Boron trifluoride and trichloride are used as catalysts in organic synthesis and electronics; the tribromide is used in the manufacture of diborane. Non-toxic boron ligands could replace toxic phosphorus ligands in some transition metal catalysts. Silica sulfuric acid (SiO OSO H) is used in organic reactions. Germanium dioxide is sometimes used as a catalyst in the production of PET plastic for containers; cheaper antimony compounds, such as the trioxide or triacetate, are more commonly employed for the same purpose despite concerns about antimony contamination of food and drinks. Arsenic trioxide has been used in the production of natural gas, to boost the removal of carbon dioxide, as have selenous acid and tellurous acid. Selenium acts as a catalyst in some microorganisms. Tellurium, and its dioxide and tetrachloride, are strong catalysts for air oxidation of carbon above 500 ° C. Graphite oxide can be used as a catalyst in the synthesis of imines and their derivatives. Activated carbon and alumina have been used as catalysts for the removal of sulfur contaminants from natural gas. Titanium doped aluminium has been identified as a substitute for expensive noble metal catalysts used in the production of industrial chemicals. Compounds of boron, silicon, arsenic and antimony have been used as flame retardants. 
Boron, in the form of borax, has been used as a textile flame retardant since at least the 18th century. Silicon compounds such as silicones, silanes, silsesquioxane, silica and silicates, some of which were developed as alternatives to more toxic halogenated products, can considerably improve the flame retardancy of plastic materials. Arsenic compounds such as sodium arsenite or sodium arsenate are effective flame retardants for wood but have been less frequently used due to their toxicity. Antimony trioxide is a flame retardant. Aluminium hydroxide has been used as a wood - fibre, rubber, plastic and textile flame retardant since the 1890s. Apart from aluminium hydroxide, use of phosphorus based flame - retardants -- in the form of, for example, organophosphates -- now exceeds that of any of the other main retardant types. These employ boron, antimony or halogenated hydrocarbon compounds. The oxides B O, SiO, GeO, As O and Sb O readily form glasses. TeO forms a glass but this requires a "heroic quench rate '' or the addition of an impurity; otherwise the crystalline form results. These compounds are used in chemical, domestic and industrial glassware and optics. Boron trioxide is used as a glass fibre additive, and is also a component of borosilicate glass, widely used for laboratory glassware and domestic ovenware for its low thermal expansion. Most ordinary glassware is made from silicon dioxide. Germanium dioxide is used as a glass fibre additive, as well as in infrared optical systems. Arsenic trioxide is used in the glass industry as a decolourizing and fining agent (for the removal of bubbles), as is antimony trioxide. Tellurium dioxide finds application in laser and nonlinear optics. Amorphous metallic glasses are generally most easily prepared if one of the components is a metalloid or "near metalloid '' such as boron, carbon, silicon, phosphorus or germanium. Aside from thin films deposited at very low temperatures, the first known metallic glass was an alloy of composition Au Si reported in 1960. A metallic glass having a strength and toughness not previously seen, of composition Pd P Si Ge, was reported in 2011. Phosphorus, selenium and lead, which are less often recognised as metalloids, are also used in glasses. Phosphate glass has a substrate of phosphorus pentoxide (P O), rather than the silica (SiO) of conventional silicate glasses. It is used, for example, to make sodium lamps. Selenium compounds can be used both as decolourising agents and to add a red colour to glass. Decorative glassware made of traditional lead glass contains at least 30 % lead (II) oxide (PbO); lead glass used for radiation shielding may have up to 65 % PbO. Lead - based glasses have also been extensively used in electronics components; enamelling; sealing and glazing materials; and solar cells. Bismuth based oxide glasses have emerged as a less toxic replacement for lead in many of these applications. Varying compositions of GeSbTe ("GST alloys '') and Ag - and In - doped Sb Te ("AIST alloys ''), being examples of phase - change materials, are widely used in rewritable optical discs and phase - change memory devices. By applying heat, they can be switched between amorphous (glassy) and crystalline states. The change in optical and electrical properties can be used for information storage purposes. 
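The storage mechanism described above for GST-type phase-change materials — heat-driven switching between an amorphous and a crystalline state that differ sharply in resistance and reflectivity — can be sketched as a tiny state machine. The pulse names and the convention of mapping the crystalline state to a stored 1 are common simplifications used here for illustration, not details taken from the source.

```python
class PhaseChangeCell:
    """Toy model of a GST-style phase-change memory cell."""

    def __init__(self) -> None:
        self.state = "amorphous"   # high resistance, low reflectivity

    def reset_pulse(self) -> None:
        """Short, intense heating melts the material; a rapid quench freezes it glassy."""
        self.state = "amorphous"

    def set_pulse(self) -> None:
        """Longer, gentler heating anneals the material into the crystalline phase."""
        self.state = "crystalline"  # low resistance, high reflectivity

    def read(self) -> int:
        """Read back a bit by sensing which phase (resistance level) is present."""
        return 1 if self.state == "crystalline" else 0

cell = PhaseChangeCell()
cell.set_pulse()    # write a 1
print(cell.read())  # 1
cell.reset_pulse()  # write a 0
print(cell.read())  # 0
```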
Future applications for GeSbTe may include, "ultrafast, entirely solid - state displays with nanometre - scale pixels, semi-transparent ' smart ' glasses, ' smart ' contact lenses and artificial retina devices. '' The recognised metalloids have either pyrotechnic applications or associated properties. Boron and silicon are commonly encountered; they act somewhat like metal fuels. Boron is used in pyrotechnic initiator compositions (for igniting other hard - to - start compositions), and in delay compositions that burn at a constant rate. Boron carbide has been identified as a possible replacement for more toxic barium or hexachloroethane mixtures in smoke munitions, signal flares and fireworks. Silicon, like boron, is a component of initiator and delay mixtures. Doped germanium can act as a variable speed thermite fuel. Arsenic trisulfide As S was used in old naval signal lights; in fireworks to make white stars; in yellow smoke screen mixtures; and in initiator compositions. Antimony trisulfide Sb S is found in white - light fireworks and in flash and sound mixtures. Tellurium has been used in delay mixtures and in blasting cap initiator compositions. Carbon, aluminium, phosphorus and selenium continue the theme. Carbon, in black powder, is a constituent of fireworks rocket propellants, bursting charges, and effects mixtures, and military delay fuses and igniters. Aluminium is a common pyrotechnic ingredient, and is widely employed for its capacity to generate light and heat, including in thermite mixtures. Phosphorus can be found in smoke and incendiary munitions, paper caps used in toy guns, and party poppers. Selenium has been used in the same way as tellurium. All the elements commonly recognised as metalloids (or their compounds) have been used in the semiconductor or solid - state electronic industries. Some properties of boron have limited its use as a semiconductor. It has a high melting point, single crystals are relatively hard to obtain, and introducing and retaining controlled impurities is difficult. Silicon is the leading commercial semiconductor; it forms the basis of modern electronics (including standard solar cells) and information and communication technologies. This was despite the study of semiconductors, early in the 20th century, having been regarded as the "physics of dirt '' and not deserving of close attention. Germanium has largely been replaced by silicon in semiconducting devices, being cheaper, more resilient at higher operating temperatures, and easier to work during the microelectronic fabrication process. Germanium is still a constituent of semiconducting silicon - germanium "alloys '' and these have been growing in use, particularly for wireless communication devices; such alloys exploit the higher carrier mobility of germanium. The synthesis of gram - scale quantities of semiconducting germanane was reported in 2013. This comprises one - atom thick sheets of hydrogen - terminated germanium atoms, analogous to graphane. It conducts electrons more than ten times faster than silicon and five times faster than germanium, and is thought to have potential for optoelectronic and sensing applications. The development of a germanium - wire based anode that more than doubles the capacity of lithium - ion batteries was reported in 2014. In the same year, Lee et al. reported that defect - free crystals of graphene large enough to have electronic uses could be grown on, and removed from, a germanium substrate. 
Arsenic and antimony are not semiconductors in their standard states. Both form type III - V semiconductors (such as GaAs, AlSb or GaInAsSb) in which the average number of valence electrons per atom is the same as that of Group 14 elements. These compounds are preferred for some special applications. Antimony nanocrystals may enable lithium - ion batteries to be replaced by more powerful sodium ion batteries. Tellurium, which is a semiconductor in its standard state, is used mainly as a component in type II / VI semiconducting - chalcogenides; these have applications in electro - optics and electronics. Cadmium telluride (CdTe) is used in solar modules for its high conversion efficiency, low manufacturing costs, and large band gap of 1.44 eV, letting it absorb a wide range of wavelengths. Bismuth telluride (Bi Te), alloyed with selenium and antimony, is a component of thermoelectric devices used for refrigeration or portable power generation. Five metalloids -- boron, silicon, germanium, arsenic and antimony -- can be found in cell phones (along with at least 39 other metals and nonmetals). Tellurium is expected to find such use. Of the less often recognised metalloids, phosphorus, gallium (in particular) and selenium have semiconductor applications. Phosphorus is used in trace amounts as a dopant for n - type semiconductors. The commercial use of gallium compounds is dominated by semiconductor applications -- in integrated circuits; cell phones; laser diodes; light - emitting diodes; photodetectors; and solar cells. Selenium is used in the production of solar cells and in high - energy surge protectors. Boron, silicon, germanium, antimony and tellurium, as well as heavier metals and metalloids such as Sm, Hg, Tl, Pb, Bi and Se, can be found in topological insulators. These are alloys or compounds which, at ultracold temperatures or room temperature (depending on their composition), are metallic conductors on their surfaces but insulators through their interiors. Cadmium arsenide Cd As, at about 1 K, is a Dirac - semimetal -- a bulk electronic analogue of graphene -- in which electrons travel effectively as massless particles. These two classes of material are thought to have potential quantum computing applications. The word metalloid comes from the Latin metallum ("metal '') and the Greek oeides ("resembling in form or appearance ''). Several names are sometimes used synonymously although some of these have other meanings that are not necessarily interchangeable: amphoteric element, boundary element, half - metal, half - way element, near metal, meta - metal, semiconductor, semimetal and submetal. "Amphoteric element '' is sometimes used more broadly to include transition metals capable of forming oxyanions, such as chromium and manganese. "Half - metal '' is used in physics to refer to a compound (such as chromium dioxide) or alloy that can act as a conductor and an insulator. "Meta - metal '' is sometimes used instead to refer to certain metals (Be, Zn, Cd, Hg, In, Tl, β - Sn, Pb) located just to the left of the metalloids on standard periodic tables. These metals are mostly diamagnetic and tend to have distorted crystalline structures, electrical conductivity values at the lower end of those of metals, and amphoteric (weakly basic) oxides. "Semimetal '' sometimes refers, loosely or explicitly, to metals with incomplete metallic character in crystalline structure, electrical conductivity or electronic structure. Examples include gallium, ytterbium, bismuth and neptunium. 
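The connection between a band gap and the range of wavelengths a photovoltaic material can absorb follows from E = hc/λ: only photons with wavelengths shorter than hc divided by the gap carry enough energy to excite carriers across it. A quick worked check against the 1.44 eV CdTe figure quoted above, using the standard constant hc ≈ 1239.8 eV·nm:

```python
HC_EV_NM = 1239.84  # Planck constant times the speed of light, in eV*nm

def cutoff_wavelength_nm(band_gap_ev: float) -> float:
    """Longest wavelength a semiconductor with this band gap can absorb."""
    return HC_EV_NM / band_gap_ev

print(round(cutoff_wavelength_nm(1.44)))   # ~861 nm for CdTe
print(round(cutoff_wavelength_nm(1.11)))   # ~1117 nm for silicon, per the gap quoted later in the article
```

Everything from the visible spectrum out to roughly 860 nm in the near-infrared therefore lies within CdTe's absorption range, which is the "wide range of wavelengths" the text refers to.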
The names amphoteric element and semiconductor are problematic as some elements referred to as metalloids do not show marked amphoteric behaviour (bismuth, for example) or semiconductivity (polonium) in their most stable forms. The origin and usage of the term metalloid is convoluted. Its origin lies in attempts, dating from antiquity, to describe metals and to distinguish between typical and less typical forms. It was first applied in the early 19th century to metals that floated on water (sodium and potassium), and then more popularly to nonmetals. Earlier usage in mineralogy, to describe a mineral having a metallic appearance, can be sourced to as early as 1800. Since the mid-20th century it has been used to refer to intermediate or borderline chemical elements. The International Union of Pure and Applied Chemistry (IUPAC) previously recommended abandoning the term metalloid, and suggested using the term semimetal instead. Use of this latter term has more recently been discouraged by Atkins et al. as it has a different meaning in physics -- one that more specifically refers to the electronic band structure of a substance rather than the overall classification of an element. The most recent IUPAC publications on nomenclature and terminology do not include any recommendations on the usage of the terms metalloid or semimetal. Pure boron is a shiny, silver - grey crystalline solid. It is less dense than aluminium (2.34 vs. 2.70 g / cm), and is hard and brittle. It is barely reactive under normal conditions, except for attack by fluorine, and has a melting point of 2076 ° C (cf. steel ~ 1370 ° C). Boron is a semiconductor; its room temperature electrical conductivity is 1.5 × 10 S cm (about 200 times less than that of tap water) and it has a band gap of about 1.56 eV. The structural chemistry of boron is dominated by its small atomic size, and relatively high ionization energy. With only three valence electrons per boron atom, simple covalent bonding can not fulfil the octet rule. Metallic bonding is the usual result among the heavier congenors of boron but this generally requires low ionization energies. Instead, because of its small size and high ionization energies, the basic structural unit of boron (and nearly all of its allotropes) is the icosahedral B cluster. Of the 36 electrons associated with 12 boron atoms, 26 reside in 13 delocalized molecular orbitals; the other 10 electrons are used to form two - and three - centre covalent bonds between icosahedra. The same motif can be seen, as are deltahedral variants or fragments, in metal borides and hydride derivatives, and in some halides. The bonding in boron has been described as being characteristic of behaviour intermediate between metals and nonmetallic covalent network solids (such as diamond). The energy required to transform B, C, N, Si and P from nonmetallic to metallic states has been estimated as 30, 100, 240, 33 and 50 kJ / mol, respectively. This indicates the proximity of boron to the metal - nonmetal borderline. Most of the chemistry of boron is nonmetallic in nature. Unlike its heavier congeners, it is not known to form a simple B or hydrated (B (H O)) cation. The small size of the boron atom enables the preparation of many interstitial alloy - type borides. 
Analogies between boron and transition metals have been noted in the formation of complexes and adducts (for example, BH₃ + CO → BH₃CO and, similarly, Fe (CO)₄ + CO → Fe (CO)₅), as well as in the geometric and electronic structures of cluster species such as (B₆H₆)²⁻ and (Ru₆ (CO)₁₈)²⁻. The aqueous chemistry of boron is characterised by the formation of many different polyborate anions. Given its high charge - to - size ratio, boron bonds covalently in nearly all of its compounds; the exceptions are the borides as these include, depending on their composition, covalent, ionic and metallic bonding components. Simple binary compounds, such as boron trichloride, are Lewis acids, as the formation of three covalent bonds leaves a hole in the octet which can be filled by an electron - pair donated by a Lewis base. Boron has a strong affinity for oxygen and a duly extensive borate chemistry. The oxide B₂O₃ is polymeric in structure, weakly acidic, and a glass former. Organometallic compounds of boron have been known since the 19th century (see organoboron chemistry). Silicon is a crystalline solid with a blue - grey metallic lustre. Like boron, it is less dense (at 2.33 g / cm³) than aluminium, and is hard and brittle. It is a relatively unreactive element. According to Rochow, the massive crystalline form (especially if pure) is "remarkably inert to all acids, including hydrofluoric ''. Less pure silicon, and the powdered form, are variously susceptible to attack by strong or heated acids, as well as by steam and fluorine. Silicon dissolves in hot aqueous alkalis with the evolution of hydrogen, as do metals such as beryllium, aluminium, zinc, gallium or indium. It melts at 1414 ° C. Silicon is a semiconductor with an electrical conductivity of 10⁻⁴ S cm⁻¹ and a band gap of about 1.11 eV. When it melts, silicon becomes a reasonable metal with an electrical conductivity of 1.0 -- 1.3 × 10⁴ S cm⁻¹, similar to that of liquid mercury. The chemistry of silicon is generally nonmetallic (covalent) in nature. It is not known to form a cation. Silicon can form alloys with metals such as iron and copper. It shows fewer tendencies to anionic behaviour than ordinary nonmetals. Its solution chemistry is characterised by the formation of oxyanions. The high strength of the silicon - oxygen bond dominates the chemical behaviour of silicon. Polymeric silicates, built up by tetrahedral SiO₄ units sharing their oxygen atoms, are the most abundant and important compounds of silicon. The polymeric borates, comprising linked trigonal and tetrahedral BO₃ or BO₄ units, are built on similar structural principles. The oxide SiO₂ is polymeric in structure, weakly acidic, and a glass former. Traditional organometallic chemistry includes the carbon compounds of silicon (see organosilicon). Germanium is a shiny grey - white solid. It has a density of 5.323 g / cm³ and is hard and brittle. It is mostly unreactive at room temperature but is slowly attacked by hot concentrated sulfuric or nitric acid. Germanium also reacts with molten caustic soda to yield sodium germanate Na₂GeO₃ and hydrogen gas. It melts at 938 ° C. Germanium is a semiconductor with an electrical conductivity of around 2 × 10⁻² S cm⁻¹ and a band gap of 0.67 eV. Liquid germanium is a metallic conductor, with an electrical conductivity similar to that of liquid mercury. Most of the chemistry of germanium is characteristic of a nonmetal. Whether or not germanium forms a cation is unclear, aside from the reported existence of the Ge²⁺ ion in a few esoteric compounds.
It can form alloys with metals such as aluminium and gold. It shows fewer tendencies to anionic behaviour than ordinary nonmetals. Its solution chemistry is characterised by the formation of oxyanions. Germanium generally forms tetravalent (IV) compounds, and it can also form less stable divalent (II) compounds, in which it behaves more like a metal. Germanium analogues of all of the major types of silicates have been prepared. The metallic character of germanium is also suggested by the formation of various oxoacid salts. A phosphate ((HPO) Ge H O) and highly stable trifluoroacetate Ge (OCOCF) have been described, as have Ge (SO), Ge (ClO) and GeH (C O). The oxide GeO is polymeric, amphoteric, and a glass former. The dioxide is soluble in acidic solutions (the monoxide GeO, is even more so), and this is sometimes used to classify germanium as a metal. Up to the 1930s germanium was considered to be a poorly conducting metal; it has occasionally been classified as a metal by later writers. As with all the elements commonly recognised as metalloids, germanium has an established organometallic chemistry (see organogermanium chemistry). Arsenic is a grey, metallic looking solid. It has a density of 5.727 g / cm and is brittle, and moderately hard (more than aluminium; less than iron). It is stable in dry air but develops a golden bronze patina in moist air, which blackens on further exposure. Arsenic is attacked by nitric acid and concentrated sulfuric acid. It reacts with fused caustic soda to give the arsenate Na AsO and hydrogen gas. Arsenic sublimes at 615 ° C. The vapour is lemon - yellow and smells like garlic. Arsenic only melts under a pressure of 38.6 atm, at 817 ° C. It is a semimetal with an electrical conductivity of around 3.9 × 10 S cm and a band overlap of 0.5 eV. Liquid arsenic is a semiconductor with a band gap of 0.15 eV. The chemistry of arsenic is predominately nonmetallic. Whether or not arsenic forms a cation is unclear. Its many metal alloys are mostly brittle. It shows fewer tendencies to anionic behaviour than ordinary nonmetals. Its solution chemistry is characterised by the formation of oxyanions. Arsenic generally forms compounds in which it has an oxidation state of + 3 or + 5. The halides, and the oxides and their derivatives are illustrative examples. In the trivalent state, arsenic shows some incipient metallic properties. The halides are hydrolysed by water but these reactions, particularly those of the chloride, are reversible with the addition of a hydrohalic acid. The oxide is acidic but, as noted below, (weakly) amphoteric. The higher, less stable, pentavalent state has strongly acidic (nonmetallic) properties. Compared to phosphorus, the stronger metallic character of arsenic is indicated by the formation of oxoacid salts such as AsPO, As (SO) and arsenic acetate As (CH COO). The oxide As O is polymeric, amphoteric, and a glass former. Arsenic has an extensive organometallic chemistry (see organoarsenic chemistry). Antimony is a silver - white solid with a blue tint and a brilliant lustre. It has a density of 6.697 g / cm and is brittle, and moderately hard (more so than arsenic; less so than iron; about the same as copper). It is stable in air and moisture at room temperature. It is attacked by concentrated nitric acid, yielding the hydrated pentoxide Sb O. Aqua regia gives the pentachloride SbCl and hot concentrated sulfuric acid results in the sulfate Sb (SO). It is not affected by molten alkali. 
Antimony is capable of displacing hydrogen from water, when heated: 2 Sb + 3 H O → Sb O + 3 H. It melts at 631 ° C. Antimony is a semimetal with an electrical conductivity of around 3.1 × 10 S cm and a band overlap of 0.16 eV. Liquid antimony is a metallic conductor with an electrical conductivity of around 5.3 × 10 S cm. Most of the chemistry of antimony is characteristic of a nonmetal. Antimony has some definite cationic chemistry, SbO and Sb (OH) being present in acidic aqueous solution; the compound Sb (GaCl), which contains the homopolycation, Sb, was prepared in 2004. It can form alloys with one or more metals such as aluminium, iron, nickel, copper, zinc, tin, lead and bismuth. Antimony has fewer tendencies to anionic behaviour than ordinary nonmetals. Its solution chemistry is characterised by the formation of oxyanions. Like arsenic, antimony generally forms compounds in which it has an oxidation state of + 3 or + 5. The halides, and the oxides and their derivatives are illustrative examples. The + 5 state is less stable than the + 3, but relatively easier to attain than with arsenic. This is explained by the poor shielding afforded the arsenic nucleus by its 3d electrons. In comparison, the tendency of antimony (being a heavier atom) to oxidize more easily partially offsets the effect of its 4d shell. Tripositive antimony is amphoteric; pentapositive antimony is (predominately) acidic. Consistent with an increase in metallic character down group 15, antimony forms salts or salt - like compounds including a nitrate Sb (NO), phosphate SbPO, sulfate Sb (SO) and perchlorate Sb (ClO). The otherwise acidic pentoxide Sb O shows some basic (metallic) behaviour in that it can be dissolved in very acidic solutions, with the formation of the oxycation SbO. The oxide Sb O is polymeric, amphoteric, and a glass former. Antimony has an extensive organometallic chemistry (see organoantimony chemistry). Tellurium is a silvery - white shiny solid. It has a density of 6.24 g / cm, is brittle, and is the softest of the commonly recognised metalloids, being marginally harder than sulfur. Large pieces of tellurium are stable in air. The finely powdered form is oxidized by air in the presence of moisture. Tellurium reacts with boiling water, or when freshly precipitated even at 50 ° C, to give the dioxide and hydrogen: Te + 2 H O → TeO + 2 H. It reacts (to varying degrees) with nitric, sulfuric and hydrochloric acids to give compounds such as the sulfoxide TeSO or tellurous acid H TeO, the basic nitrate (Te O H) (NO), or the oxide sulfate Te O (SO). It dissolves in boiling alkalis, to give the tellurite and telluride: 3 Te + 6 KOH = K TeO + 2 K Te + 3 H O, a reaction that proceeds or is reversible with increasing or decreasing temperature. At higher temperatures tellurium is sufficiently plastic to extrude. It melts at 449.51 ° C. Crystalline tellurium has a structure consisting of parallel infinite spiral chains. The bonding between adjacent atoms in a chain is covalent, but there is evidence of a weak metallic interaction between the neighbouring atoms of different chains. Tellurium is a semiconductor with an electrical conductivity of around 1.0 S cm and a band gap of 0.32 to 0.38 eV. Liquid tellurium is a semiconductor, with an electrical conductivity, on melting, of around 1.9 × 10 S cm. Superheated liquid tellurium is a metallic conductor. Most of the chemistry of tellurium is characteristic of a nonmetal. It shows some cationic behaviour. 
The dioxide dissolves in acid to yield the trihydroxotellurium (IV) Te (OH) ion; the red Te and yellow - orange Te ions form when tellurium is oxidized in fluorosulfuric acid (HSO F), or liquid sulfur dioxide (SO), respectively. It can form alloys with aluminium, silver and tin. Tellurium shows fewer tendencies to anionic behaviour than ordinary nonmetals. Its solution chemistry is characterised by the formation of oxyanions. Tellurium generally forms compounds in which it has an oxidation state of − 2, + 4 or + 6. The + 4 state is the most stable. Tellurides of composition X Te are easily formed with most other elements and represent the most common tellurium minerals. Nonstoichiometry is pervasive, especially with transition metals. Many tellurides can be regarded as metallic alloys. The increase in metallic character evident in tellurium, as compared to the lighter chalcogens, is further reflected in the reported formation of various other oxyacid salts, such as a basic selenate 2TeO SeO and an analogous perchlorate and periodate 2TeO HXO. Tellurium forms a polymeric, amphoteric, glass - forming oxide TeO. It is a "conditional '' glass - forming oxide -- it forms a glass with a very small amount of additive. Tellurium has an extensive organometallic chemistry (see organotellurium chemistry). Carbon is ordinarily classified as a nonmetal but has some metallic properties and is occasionally classified as a metalloid. Hexagonal graphitic carbon (graphite) is the most thermodynamically stable allotrope of carbon under ambient conditions. It has a lustrous appearance and is a fairly good electrical conductor. Graphite has a layered structure. Each layer comprises carbon atoms bonded to three other carbon atoms in a honeycomb lattice arrangement. The layers are stacked together and held loosely by van der Waals forces and delocalized valence electrons. Like a metal, the conductivity of graphite in the direction of its planes decreases as the temperature is raised; it has the electronic band structure of a semimetal. The allotropes of carbon, including graphite, can accept foreign atoms or compounds into their structures via substitution, intercalation or doping. The resulting materials are referred to as "carbon alloys ''. Carbon can form ionic salts, including a hydrogen sulfate, perchlorate, and nitrate (C X. 2HX, where X = HSO, ClO; and C NO. 3HNO). In organic chemistry, carbon can form complex cations -- termed carbocations -- in which the positive charge is on the carbon atom; examples are CH and CH, and their derivatives. Carbon is brittle, and behaves as a semiconductor in a direction perpendicular to its planes. Most of its chemistry is nonmetallic; it has a relatively high ionization energy and, compared to most metals, a relatively high electronegativity. Carbon can form anions such as C (methanide), C (acetylide) and C (sesquicarbide or allylenide), in compounds with metals of main groups 1 -- 3, and with the lanthanides and actinides. Its oxide CO forms carbonic acid H CO. Aluminium is ordinarily classified as a metal. It is lustrous, malleable and ductile, and has high electrical and thermal conductivity. Like most metals it has a close - packed crystalline structure, and forms a cation in aqueous solution. It has some properties that are unusual for a metal; taken together, these are sometimes used as a basis to classify aluminium as a metalloid. Its crystalline structure shows some evidence of directional bonding. Aluminium bonds covalently in most compounds. 
The oxide Al O is amphoteric, and a conditional glass - former. Aluminium can form anionic aluminates, such behaviour being considered nonmetallic in character. Classifying aluminium as a metalloid has been disputed given its many metallic properties. It is therefore, arguably, an exception to the mnemonic that elements adjacent to the metal -- nonmetal dividing line are metalloids. Stott labels aluminium as a weak metal. It has the physical properties of a metal but some of the chemical properties of a nonmetal. Steele notes the paradoxical chemical behaviour of aluminium: "It resembles a weak metal in its amphoteric oxide and in the covalent character of many of its compounds... Yet it is a highly electropositive metal... (with) a high negative electrode potential ''. Moody says that, "aluminium is on the ' diagonal borderland ' between metals and non-metals in the chemical sense. '' Selenium shows borderline metalloid or nonmetal behaviour. Its most stable form, the grey trigonal allotrope, is sometimes called "metallic '' selenium because its electrical conductivity is several orders of magnitude greater than that of the red monoclinic form. The metallic character of selenium is further shown by its lustre, and its crystalline structure, which is thought to include weakly "metallic '' interchain bonding. Selenium can be drawn into thin threads when molten and viscous. It shows reluctance to acquire "the high positive oxidation numbers characteristic of nonmetals ''. It can form cyclic polycations (such as Se) when dissolved in oleums (an attribute it shares with sulfur and tellurium), and a hydrolysed cationic salt in the form of trihydroxoselenium (IV) perchlorate (Se (OH)) ClO. The nonmetallic character of selenium is shown by its brittleness and the low electrical conductivity (~ 10 to 10 S cm) of its highly purified form. This is comparable to or less than that of bromine (7.95 × 10 S cm), a nonmetal. Selenium has the electronic band structure of a semiconductor and retains its semiconducting properties in liquid form. It has a relatively high electronegativity (2.55 revised Pauling scale). Its reaction chemistry is mainly that of its nonmetallic anionic forms Se, SeO and SeO. Selenium is commonly described as a metalloid in the environmental chemistry literature. It moves through the aquatic environment similarly to arsenic and antimony; its water - soluble salts, in higher concentrations, have a similar toxicological profile to that of arsenic. Polonium is "distinctly metallic '' in some ways. Both of its allotropic forms are metallic conductors. It is soluble in acids, forming the rose - coloured Po cation and displacing hydrogen: Po + 2 H → Po + H. Many polonium salts are known. The oxide PoO is predominantly basic in nature. Polonium is a reluctant oxidizing agent, unlike its lightest congener oxygen: highly reducing conditions are required for the formation of the Po anion in aqueous solution. Whether polonium is ductile or brittle is unclear. It is predicted to be ductile based on its calculated elastic constants. It has a simple cubic crystalline structure. Such a structure has few slip systems and "leads to very low ductility and hence low fracture resistance ''. Polonium shows nonmetallic character in its halides, and by the existence of polonides. The halides have properties generally characteristic of nonmetal halides (being volatile, easily hydrolyzed, and soluble in organic solvents). 
Many metal polonides, obtained by heating the elements together at 500 -- 1,000 ° C, and containing the Po anion, are also known. As a halogen, astatine tends to be classified as a nonmetal. It has some marked metallic properties and is sometimes instead classified as either a metalloid or (less often) as a metal. Immediately following its production in 1940, early investigators considered it a metal. In 1949 it was called the most noble (difficult to reduce) nonmetal as well as being a relatively noble (difficult to oxidize) metal. In 1950 astatine was described as a halogen and (therefore) a reactive nonmetal. In 2013, on the basis of relativistic modelling, astatine was predicted to be a monatomic metal, with a face - centred cubic crystalline structure. Several authors have commented on the metallic nature of some of the properties of astatine. Since iodine is a semiconductor in the direction of its planes, and since the halogens become more metallic with increasing atomic number, it has been presumed that astatine would be a metal if it could form a condensed phase. Astatine may be metallic in the liquid state on the basis that elements with an enthalpy of vaporization (∆ H) greater than ~ 42 kJ / mol are metallic when liquid. Such elements include boron, silicon, germanium, antimony, selenium and tellurium. Estimated values for ∆ H of diatomic astatine are 50 kJ / mol or higher; diatomic iodine, with a ∆ H of 41.71, falls just short of the threshold figure. "Like typical metals, it (astatine) is precipitated by hydrogen sulfide even from strongly acid solutions and is displaced in a free form from sulfate solutions; it is deposited on the cathode on electrolysis. '' Further indications of a tendency for astatine to behave like a (heavy) metal are: "... the formation of pseudohalide compounds... complexes of astatine cations... complex anions of trivalent astatine... as well as complexes with a variety of organic solvents ''. It has also been argued that astatine demonstrates cationic behaviour, by way of stable At and AtO forms, in strongly acidic aqueous solutions. Some of astatine 's reported properties are nonmetallic. It has the narrow liquid range ordinarily associated with nonmetals (mp 302 ° C; bp 337 ° C). Batsanov gives a calculated band gap energy for astatine of 0.7 eV; this is consistent with nonmetals (in physics) having separated valence and conduction bands and thereby being either semiconductors or insulators. The chemistry of astatine in aqueous solution is mainly characterised by the formation of various anionic species. Most of its known compounds resemble those of iodine, which is a halogen and a nonmetal. Such compounds include astatides (XAt), astatates (XAtO), and monovalent interhalogen compounds. Restrepo et al. reported that astatine appeared to be more polonium - like than halogen - like. They did so on the basis of detailed comparative studies of the known and interpolated properties of 72 elements. In the periodic table, some of the elements adjacent to the commonly recognised metalloids, although usually classified as either metals or nonmetals, are occasionally referred to as near - metalloids or noted for their metalloidal character. To the left of the metal -- nonmetal dividing line, such elements include gallium, tin and bismuth. They show unusual packing structures, marked covalent chemistry (molecular or polymeric), and amphoterism. To the right of the dividing line are carbon, phosphorus, selenium and iodine. 
They exhibit metallic lustre, semiconducting properties and bonding or valence bands with delocalized character. This applies to their most thermodynamically stable forms under ambient conditions: carbon as graphite; phosphorus as black phosphorus; and selenium as grey selenium. Different crystalline forms of an element are called allotropes. Some allotropes, particularly those of elements located (in periodic table terms) alongside or near the notional dividing line between metals and nonmetals, exhibit more pronounced metallic, metalloidal or nonmetallic behaviour than others. The existence of such allotropes can complicate the classification of the elements involved. Tin, for example, has two allotropes: tetragonal "white '' β - tin and cubic "grey '' α - tin. White tin is a very shiny, ductile and malleable metal. It is the stable form at or above room temperature and has an electrical conductivity of 9.17 × 10⁴ S cm⁻¹ (~ 1 / 6th that of copper). Grey tin usually has the appearance of a grey micro-crystalline powder, and can also be prepared in brittle semi-lustrous crystalline or polycrystalline forms. It is the stable form below 13.2 ° C and has an electrical conductivity of between (2 -- 5) × 10² S cm⁻¹ (~ 1 / 250th that of white tin). Grey tin has the same crystalline structure as that of diamond. It behaves as a semiconductor (with a band gap of 0.08 eV), but has the electronic band structure of a semimetal. It has been referred to as either a very poor metal, a metalloid, a nonmetal or a near metalloid. The diamond allotrope of carbon is clearly nonmetallic, being translucent and having a low electrical conductivity of 10⁻¹⁴ to 10⁻¹⁶ S cm⁻¹. Graphite has an electrical conductivity of 3 × 10⁴ S cm⁻¹, a figure more characteristic of a metal. Phosphorus, sulfur, arsenic, selenium, antimony and bismuth also have less stable allotropes that display different behaviours. The table gives crustal abundances of the elements commonly to rarely recognised as metalloids. Some other elements are included for comparison: oxygen and xenon (the most and least abundant elements with stable isotopes); iron and the coinage metals copper, silver and gold; and rhenium, the least abundant stable metal (aluminium is normally the most abundant metal). Various abundance estimates have been published; these often disagree to some extent. The recognised metalloids can be obtained by chemical reduction of either their oxides or their sulfides. Simpler or more complex extraction methods may be employed depending on the starting form and economic factors. Boron is routinely obtained by reducing the trioxide with magnesium: B₂O₃ + 3 Mg → 2 B + 3 MgO; after secondary processing the resulting brown powder has a purity of up to 97 %. Boron of higher purity (> 99 %) is prepared by heating volatile boron compounds, such as BCl₃ or BBr₃, either in a hydrogen atmosphere (2 BX₃ + 3 H₂ → 2 B + 6 HX) or to the point of thermal decomposition. Silicon and germanium are obtained from their oxides by heating the oxide with carbon or hydrogen: SiO₂ + C → Si + CO₂; GeO₂ + 2 H₂ → Ge + 2 H₂O. Arsenic is isolated from its pyrite (FeAsS) or arsenical pyrite (FeAs₂) by heating; alternatively, it can be obtained from its oxide by reduction with carbon: As₂O₃ + 3 C → 2 As + 3 CO. Antimony is derived from its sulfide by reduction with iron: Sb₂S₃ + 3 Fe → 2 Sb + 3 FeS. Tellurium is prepared from its oxide by dissolving it in aqueous NaOH, yielding tellurite, then by electrolytic reduction: TeO₂ + 2 NaOH → Na₂TeO₃ + H₂O; Na₂TeO₃ + H₂O → Te + 2 NaOH + O₂.
Another option is reduction of the oxide by roasting with carbon: TeO + C → Te + CO. Production methods for the elements less frequently recognised as metalloids involve natural processing, electrolytic or chemical reduction, or irradiation. Carbon (as graphite) occurs naturally and is extracted by crushing the parent rock and floating the lighter graphite to the surface. Aluminium is extracted by dissolving its oxide Al O in molten cryolite Na AlF and then by high temperature electrolytic reduction. Selenium is produced by roasting the coinage metal selenides X Se (X = Cu, Ag, Au) with soda ash to give the selenite: X Se + O + Na CO → Na SeO + 2 X + CO; the selenide is neutralized by sulfuric acid H SO to give selenous acid H SeO; this is reduced by bubbling with SO to yield elemental selenium. Polonium and astatine are produced in minute quantities by irradiating bismuth. The recognised metalloids and their closer neighbours mostly cost less than silver; only polonium and astatine are more expensive than gold, on account of their significant radioactivity. As of 5 April 2014, prices for small samples (up to 100 g) of silicon, antimony and tellurium, and graphite, aluminium and selenium, average around one third the cost of silver (US $1.5 per gram or about $45 an ounce). Boron, germanium and arsenic samples average about three - and - a-half times the cost of silver. Polonium is available for about $100 per microgram. Zalutsky and Pruszynski estimate a similar cost for producing astatine. Prices for the applicable elements traded as commodities tend to range from two to three times cheaper than the sample price (Ge), to nearly three thousand times cheaper (As).
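To illustrate the extraction chemistry summarised above, here is a minimal stoichiometry sketch for the magnesiothermic reduction of boron trioxide given earlier (B₂O₃ + 3 Mg → 2 B + 3 MgO). The atomic masses are standard values; the 1 kg batch size is a hypothetical example, and the calculation ignores yield losses and the secondary processing mentioned in the text.

```python
# Reagent estimate for B2O3 + 3 Mg -> 2 B + 3 MgO (the magnesiothermic route above).
M_B, M_O, M_Mg = 10.81, 16.00, 24.31      # standard atomic masses, g/mol
M_B2O3 = 2 * M_B + 3 * M_O                # ~69.6 g/mol

target_boron_g = 1000.0                   # hypothetical batch: 1 kg of boron

mol_B = target_boron_g / M_B              # ~92.5 mol
mol_B2O3 = mol_B / 2                      # 2 B produced per B2O3
mol_Mg = 3 * mol_B2O3                     # 3 Mg consumed per B2O3

print(f"B2O3 required: {mol_B2O3 * M_B2O3 / 1000:.2f} kg")   # ~3.22 kg
print(f"Mg required:   {mol_Mg * M_Mg / 1000:.2f} kg")       # ~3.37 kg
# Ignores incomplete conversion and the purification steps noted in the text.
```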
when is the new season of bones starting
Bones (season 12) - wikipedia The twelfth and final season of the American television series Bones premiered on January 3, 2017, on Fox and concluded on March 28, 2017. The final season consists of 12 episodes and aired Tuesdays at 9: 00 pm ET. Fox renewed Bones for a 12 - episode final season on February 25, 2016. The season was initially announced to debut in the fall, but Fox delayed the premiere until January 2017. Former series regular Eric Millegan, who returned in the season 11 finale, continues his role as Zack Addy in the final season. The season features the return of former recurring characters, including Eddie McClintock as Tim "Sully '' Sullivan, who recurred during season two, and Stephen Fry as Gordon Wyatt, who made guest appearances in seasons two, four and five. Betty White reprises her season 11 role as Dr. Beth Mayer in the tenth episode, while veteran actors Ed Asner and Hal Holbrook guest star in the third episode. Filming of the season, and of the entire series, wrapped up on December 15, 2016. The twelfth and final season of Bones was released on DVD (subtitled "The Final Chapter '') in region 1 on June 13, 2017. The set includes all 12 episodes of season twelve and special features include a gag reel and a featurette "Back to the Lab: A Bones Retrospective ''.
how much storage does a hard drive have
Hard disk drive - Wikipedia A hard disk drive (HDD), hard disk, hard drive or fixed disk is a data storage device that uses magnetic storage to store and retrieve digital information using one or more rigid rapidly rotating disks (platters) coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random - access manner, meaning that individual blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are a type of non-volatile storage, retaining stored data even when powered off. Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general - purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDDs historically, though after extensive industry consolidation most current units are manufactured by Seagate, Toshiba, and Western Digital. HDD unit shipments and sales revenues are declining, though production (exabytes per year) is growing. Flash memory has a growing share of the market for secondary storage, in the form of solid - state drives (SSDs). SSDs have higher data - transfer rates, higher areal storage density, better reliability, and much lower latency and access times. Though SSDs have higher cost per bit, they are replacing HDDs where speed, power consumption, small size, and durability are important. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1 - terabyte (TB) drive has a capacity of 1,000 gigabytes (GB; where 1 gigabyte = 1 billion bytes). Typically, some of an HDD 's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. Performance is specified by the time required to move the heads to a track or cylinder (average access time) plus the time it takes for the desired sector to move under the head (average latency, which is a function of the physical rotational speed in revolutions per minute), and finally the speed at which the data is transmitted (data rate). The two most common form factors for modern HDDs are 3.5 - inch, for desktop computers, and 2.5 - inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as PATA (Parallel ATA), SATA (Serial ATA), USB or SAS (Serial Attached SCSI) cables. The hard disk drive was initially developed as data storage for the IBM 305 RAMAC computer system., IBM announced HDDs in 1956 as a component of the IBM 305 RAMAC system and as a new component to enhance the existing IBM 650 system, a general - purpose mainframe. The first IBM drive, the 350 RAMAC in 1956, was approximately the size of two medium - sized refrigerators and stored five million six - bit characters (3.75 megabytes) on a stack of 50 disks. In 1962 the IBM 350 was superseded by the IBM 1301 disk storage unit, which consisted of 50 platters, each about 1 / 8 - inch thick and 24 inches in diameter. While the IBM 350 used only two read / write heads, the 1301 used an array of heads, one per platter, moving as a single unit. Cylinder - mode read / write operations were supported, and the heads flew about 250 micro-inches (about 6 μm) above the platter surface. 
Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight - bit bytes. Access time was about a quarter of a second. Also in 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could buy additional packs and interchange them as needed, much like reels of magnetic tape. Later models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk '' drives. Some high - performance HDDs were manufactured with one head per track (e.g. IBM 2305 in 1970) so that no time was lost physically moving the heads to a track. Known as fixed - head or head - per - track disk drives they were very expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code - named "Winchester ''. Its primary distinguishing feature was that the disk heads were not withdrawn completely from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land '' on a special area of the disk surface upon spin - down, "taking off '' again when the disk was later powered on. This greatly reduced the cost of the head actuator mechanism, but precluded removing just the disks from the drive as was done with the disk packs of the day. Instead, the first models of "Winchester technology '' drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester '' drives abandoned the removable media concept and returned to non-removable platters. Like the first removable pack drive, the first "Winchester '' drives used platters 14 inches (360 mm) in diameter. A few years later, designers were exploring the possibility that physically smaller platters might offer advantages. Drives with non-removable eight - inch platters appeared, and then drives that used a 5 ⁄ in (130 mm) form factor (a mounting width equivalent to that used by contemporary floppy disk drives). The latter were primarily intended for the then - fledgling personal computer (PC) market. As the 1980s began, HDDs were a rare and very expensive additional feature in PCs, but by the late 1980s their cost had been reduced to the point where they were standard on all but the cheapest computers. Most HDDs in the early 1980s were sold to PC end users as an external, add - on subsystem. The subsystem was not sold under the drive manufacturer 's name but under the subsystem manufacturer 's name such as Corvus Systems and Tallgrass Technologies, or under the PC system manufacturer 's name such as the Apple ProFile. The IBM PC / XT in 1983 included an internal 10 MB HDD, and soon thereafter internal HDDs proliferated on personal computers. External HDDs remained popular for much longer on the Apple Macintosh. Many Macintosh computers made between 1986 and 1998 featured a SCSI port on the back, making external expansion simple. 
Older compact Macintosh computers did not have user - accessible hard drive bays (indeed, the Macintosh 128K, Macintosh 512K, and Macintosh Plus did not feature a hard drive bay at all), so on those models external SCSI disks were the only reasonable option for expanding upon any internal storage. The 2011 Thailand floods damaged the manufacturing plants and impacted hard disk drive cost adversely between 2011 and 2013. Driven by ever increasing areal density since their invention, HDDs have continuously improved their characteristics; a few highlights are listed in the table above. At the same time, market application expanded from mainframe computers of the late 1950s to most mass storage applications including computers and consumer applications such as storage of entertainment content. A modern HDD records data by magnetizing a thin film of ferromagnetic material on a disk. Sequential changes in the direction of magnetization represent binary data bits. The data is read from the disk by detecting the transitions in magnetization. User data is encoded using an encoding scheme, such as run - length limited encoding, which determines how the data is represented by the magnetic transitions. A typical HDD design consists of a spindle that holds flat circular disks, also called platters, which hold the recorded data. The platters are made from a non-magnetic material, usually aluminum alloy, glass, or ceramic, and are coated with a shallow layer of magnetic material typically 10 -- 20 nm in depth, with an outer layer of carbon for protection. For reference, a standard piece of copy paper is 0.07 -- 0.18 millimeters (70,000 -- 180,000 nm). The platters in contemporary HDDs are spun at speeds varying from 4,200 rpm in energy - efficient portable devices, to 15,000 rpm for high - performance servers. The first HDDs spun at 1,200 rpm and, for many years, 3,600 rpm was the norm. As of December 2013, the platters in most consumer - grade HDDs spin at either 5,400 rpm or 7,200 rpm. Information is written to and read from a platter as it rotates past devices called read - and - write heads that are positioned to operate very close to the magnetic surface, with their flying height often in the range of tens of nanometers. The read - and - write head is used to detect and modify the magnetization of the material passing immediately under it. In modern drives, there is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of the platter as it spins. The arm is moved using a voice coil actuator or in some older designs a stepper motor. Early hard disk drives wrote data at some constant bits per second, resulting in all tracks having the same amount of data per track but modern drives (since the 1990s) use zone bit recording -- increasing the write speed from inner to outer zone and thereby storing more data per track in the outer zones. In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects, thermally induced magnetic instability which is commonly known as the "superparamagnetic limit ''. To counter this, the platters are coated with two parallel magnetic layers, separated by a 3 - atom layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other. 
Another technology used to overcome thermal effects to allow greater recording densities is perpendicular recording, first shipped in 2005, and as of 2007 the technology was used in many HDDs. In 2004, a new concept was introduced to allow further increase of the data density in magnetic recording, using recording media consisting of coupled soft and hard magnetic layers. That so - called exchange spring media, also known as exchange coupled composite media, allows good writability due to the write - assist nature of the soft layer. However, the thermal stability is determined only by the hardest layer and not influenced by the soft layer. A typical HDD has two electric motors; a spindle motor that spins the disks and an actuator (motor) that positions the read / write head assembly across the spinning disks. The disk motor has an external rotor attached to the disks; the stator windings are fixed in place. Opposite the actuator at the end of the head support arm is the read - write head; thin printed - circuit cables connect the read - write heads to amplifier electronics mounted at the pivot of the actuator. The head support arm is very light, but also stiff; in modern drives, acceleration at the head reaches 550 g. The actuator is a permanent magnet and moving coil motor that swings the heads to the desired position. A metal plate supports a squat neodymium - iron - boron (NIB) high - flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives have only one magnet). The voice coil itself is shaped rather like an arrowhead, and made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self - supporting. The portions of the coil along the two sides of the arrowhead (which point to the actuator bearing center) then interact with the magnetic field of the fixed magnet. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore, the surface of the magnet is half north pole and half south pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head. The HDD 's electronics control the movement of the actuator and the rotation of the disk, and perform reads and writes on demand from the disk controller. Feedback of the drive electronics is accomplished by means of special segments of the disk dedicated to servo feedback. These are either complete concentric circles (in the case of dedicated servo technology), or segments interspersed with real data (in the case of embedded servo technology). The servo feedback optimizes the signal to noise ratio of the GMR sensors by adjusting the voice - coil of the actuated arm. The spinning of the disk also uses a servo motor. Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media which have failed. 
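The sector remapping just mentioned (and described further in the next paragraph) can be pictured as a small lookup table maintained by the drive. The sketch below is purely conceptual, not any vendor's firmware or API: failing physical sectors are assigned entries from a hypothetical spare-sector pool, and subsequent accesses are transparently redirected.

```python
# Illustrative sketch (not real firmware) of remapping failing sectors to a spare pool.
class RemapTable:
    def __init__(self, spare_sectors):
        self.spare_pool = list(spare_sectors)   # hypothetical reserved physical sectors
        self.remapped = {}                      # failing sector -> spare sector

    def remap(self, failing_sector: int) -> int:
        if failing_sector not in self.remapped:
            if not self.spare_pool:
                raise RuntimeError("spare sector pool exhausted")
            self.remapped[failing_sector] = self.spare_pool.pop(0)
        return self.remapped[failing_sector]

    def resolve(self, sector: int) -> int:
        # Reads and writes to a remapped sector are transparently redirected.
        return self.remapped.get(sector, sector)

table = RemapTable(spare_sectors=[10_000_000, 10_000_001, 10_000_002])
table.remap(1234)            # sector 1234 starts failing -> assigned a spare
print(table.resolve(1234))   # 10000000 (redirected)
print(table.resolve(5678))   # 5678 (healthy sector, untouched)
```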
Modern drives make extensive use of error correction codes (ECCs), particularly Reed -- Solomon error correction. These techniques store extra bits, determined by mathematical formulas, for each block of data; the extra bits allow many errors to be corrected invisibly. The extra bits themselves take up space on the HDD, but allow higher recording densities to be employed without causing uncorrectable errors, resulting in much larger storage capacity. For example, a typical 1 TB hard disk with 512 - byte sectors provides additional capacity of about 93 GB for the ECC data. In the newest drives, as of 2009, low - density parity - check codes (LDPC) were supplanting Reed -- Solomon; LDPC codes enable performance close to the Shannon Limit and thus provide the highest storage density available. Typical hard disk drives attempt to "remap '' the data in a physical sector that is failing to a spare physical sector provided by the drive 's "spare sector pool '' (also called "reserve pool ''), while relying on the ECC to recover stored data while the number of errors in a bad sector is still low enough. The S.M.A.R.T (Self - Monitoring, Analysis and Reporting Technology) feature counts the total number of errors in the entire HDD fixed by ECC (although not on all hard drives as the related S.M.A.R.T attributes "Hardware ECC Recovered '' and "Soft ECC Correction '' are not consistently supported), and the total number of performed sector remappings, as the occurrence of many such errors may predict an HDD failure. The "No - ID Format '', developed by IBM in the mid-1990s, contains information about which sectors are bad and where remapped sectors have been located. Only a tiny fraction of the detected errors ends up as not correctable. For example, the specification for an enterprise SAS disk (a model from 2013) estimates this fraction to be one uncorrected error in every 10¹⁶ bits, and another SAS enterprise disk from 2013 specifies similar error rates. Another modern (as of 2013) enterprise SATA disk specifies an error rate of less than 10 non-recoverable read errors in every 10¹⁶ bits. An enterprise disk with a Fibre Channel interface, which uses 520 - byte sectors to support the Data Integrity Field standard to combat data corruption, specifies similar error rates in 2005. The worst type of errors are silent data corruptions, which are errors undetected by the disk firmware or the host operating system; some of these errors may be caused by hard disk drive malfunctions. The rate of areal density advancement was similar to Moore 's law (doubling every two years) through 2010: 60 % per year during 1988 -- 1996, 100 % during 1996 -- 2003 and 30 % during 2003 -- 2010. Gordon Moore (1997) called the increase "flabbergasting, '' while observing later that growth can not continue forever. Price improvement decelerated to − 12 % per year during 2010 -- 2017, as the growth of areal density slowed. The rate of advancement for areal density slowed to 10 % per year during 2010 -- 2016, and there was difficulty in migrating from perpendicular recording to newer technologies. As bit cell size decreases, more data can be put onto a single drive platter. In 2013, a production desktop 3 TB HDD (with four platters) would have had an areal density of about 500 Gbit / in², which would have amounted to a bit cell comprising about 18 magnetic grains (11 by 1.6 grains).
Since the mid-2000s areal density progress has increasingly been challenged by a superparamagnetic trilemma involving grain size, grain magnetic strength and ability of the head to write. In order to maintain acceptable signal to noise smaller grains are required; smaller grains may self - reverse (electrothermal instability) unless their magnetic strength is increased, but known write head materials are unable to generate a magnetic field sufficient to write the medium. Several new magnetic storage technologies are being developed to overcome or at least abate this trilemma and thereby maintain the competitiveness of HDDs with respect to products such as flash memory - based solid - state drives (SSDs). In 2013, Seagate introduced one such technology, shingled magnetic recording (SMR). Additionally, SMR comes with design complexities that may cause reduced write performance. Other new recording technologies that, as of 2016, still remain under development include heat - assisted magnetic recording (HAMR), microwave - assisted magnetic recording (MAMR), two - dimensional magnetic recording (TDMR), bit - patterned recording (BPR), and "current perpendicular to plane '' giant magnetoresistance (CPP / GMR) heads. The rate of areal density growth has dropped below the historical Moore 's law rate of 40 % per year, and the deceleration is expected to persist through at least 2020. Depending upon assumptions on feasibility and timing of these technologies, the median forecast by industry observers and analysts for 2020 and beyond for areal density growth is 20 % per year with a range of 10 -- 30 %. The achievable limit for the HAMR technology in combination with BPR and SMR may be 10 Tbit / in, which would be 20 times higher than the 500 Gbit / in represented by 2013 production desktop HDDs. As of 2015, HAMR HDDs have been delayed several years, and are expected in 2018. They require a different architecture, with redesigned media and read / write heads, new lasers, and new near - field optical transducers. The capacity of a hard disk drive, as reported by an operating system to the end user, is smaller than the amount stated by the manufacturer for several reasons: the operating system using some space, use of some space for data redundancy, and space use for file system structures. Also the difference in capacity reported in SI decimal prefixed units vs. binary prefixes can lead to a false impression of missing capacity. Modern hard disk drives appear to their host controller as a contiguous set of logical blocks, and the gross drive capacity is calculated by multiplying the number of blocks by the block size. This information is available from the manufacturer 's product specification, and from the drive itself through use of operating system functions that invoke low - level drive commands. The gross capacity of older HDDs is calculated as the product of the number of cylinders per recording zone, the number of bytes per sector (most commonly 512), and the count of zones of the drive. Some modern SATA drives also report cylinder - head - sector (CHS) capacities, but these are not physical parameters because the reported values are constrained by historic operating system interfaces. The C / H / S scheme has been replaced by logical block addressing (LBA), a simple linear addressing scheme that locates blocks by an integer index, which starts at LBA 0 for the first block and increments thereafter. 
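A minimal sketch of the capacity arithmetic and block addressing just described: gross capacity is the block count multiplied by the block size, and the conventional formula maps a cylinder - head - sector triple to an LBA index. The block count and the drive geometry below are illustrative examples, not figures for any particular product.

```python
# Gross capacity = block count x block size, and the conventional CHS -> LBA mapping.
def gross_capacity_bytes(total_blocks: int, block_size: int = 512) -> int:
    return total_blocks * block_size

def chs_to_lba(c: int, h: int, s: int, heads: int, sectors_per_track: int) -> int:
    # Sectors are numbered from 1 within a track, hence the (s - 1).
    return (c * heads + h) * sectors_per_track + (s - 1)

# Example: a drive exposing 1,953,525,168 blocks of 512 bytes is marketed as "1 TB".
blocks = 1_953_525_168
print(gross_capacity_bytes(blocks))      # 1000204886016 bytes, i.e. just over 10**12

# With an illustrative legacy-style geometry of 64 heads and 63 sectors per track:
print(chs_to_lba(0, 0, 1, heads=64, sectors_per_track=63))   # 0  (first block)
print(chs_to_lba(1, 0, 1, heads=64, sectors_per_track=63))   # 4032
```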
When using the C / H / S method to describe modern large drives, the number of heads is often set to 64, although a typical hard disk drive, as of 2013, has between one and four platters. In modern HDDs, spare capacity for defect management is not included in the published capacity; however, in many early HDDs a certain number of sectors were reserved as spares, thereby reducing the capacity available to the operating system. For RAID subsystems, data integrity and fault - tolerance requirements also reduce the realized capacity. For example, a RAID 1 array has about half the total capacity as a result of data mirroring, while a RAID 5 array with x drives loses 1 / x of capacity (which equals to the capacity of a single drive) due to storing parity information. RAID subsystems are multiple drives that appear to be one drive or more drives to the user, but provide fault tolerance. Most RAID vendors use checksums to improve data integrity at the block level. Some vendors design systems using HDDs with sectors of 520 bytes to contain 512 bytes of user data and eight checksum bytes, or by using separate 512 - byte sectors for the checksum data. Some systems may use hidden partitions for system recovery, reducing the capacity available to the end user. Data is stored on a hard drive in a series of logical blocks. Each block is delimited by markers identifying its start and end, error detecting and correcting information, and space between blocks to allow for minor timing variations. These blocks often contained 512 bytes of usable data, but other sizes have been used. As drive density increased, an initiative known as Advanced Format extended the block size to 4096 bytes of usable data, with a resulting significant reduction in the amount of disk space used for block headers, error checking data, and spacing. The process of initializing these logical blocks on the physical disk platters is called low - level formatting, which is usually performed at the factory and is not normally changed in the field. High - level formatting writes data structures used by the operating system to organize data files on the disk. This includes writing partition and file system structures into selected logical blocks. For example, some of the disk space will be used to hold a directory of disk file names and a list of logical blocks associated with a particular file. Examples of partition mapping scheme include Master boot record (MBR) and GUID Partition Table (GPT). Examples of data structures stored on disk to retrieve files include the File Allocation Table (FAT) in the DOS file system and inodes in many UNIX file systems, as well as other operating system data structures (also known as metadata). As a consequence, not all the space on an HDD is available for user files, but this system overhead is usually small compared with user data. The total capacity of HDDs is given by manufacturers using SI decimal prefixes such as gigabytes (1 GB = 1,000,000,000 bytes) and terabytes (1 TB = 1,000,000,000,000 bytes). This practice dates back to the early days of computing; by the 1970s, "million '', "mega '' and "M '' were consistently used in the decimal sense for drive capacity. However, capacities of memory are quoted using a binary interpretation of the prefixes, i.e. using powers of 1024 instead of 1000. Software reports hard disk drive or memory capacity in different forms using either decimal or binary prefixes. 
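The following sketch works through the two capacity conventions discussed above, together with the RAID rules of thumb mentioned earlier (RAID 1 keeps about half the raw capacity; RAID 5 keeps n − 1 drives ' worth). The drive sizes are illustrative examples.

```python
# Decimal (manufacturer) vs. binary (some software) reporting, plus RAID capacity rules.
def marketed_tb_to_gib(tb: float) -> float:
    """A marketed terabyte is 10**12 bytes; a GiB is 2**30 bytes."""
    return tb * 10**12 / 2**30

def raid5_usable_tb(drive_tb: float, n_drives: int) -> float:
    """RAID 5 loses one drive's worth of capacity to parity: usable = (n - 1) drives."""
    return drive_tb * (n_drives - 1)

def raid1_usable_tb(drive_tb: float, n_drives: int) -> float:
    """RAID 1 mirrors the data, so usable capacity is about half the raw total."""
    return drive_tb * n_drives / 2

print(f"{marketed_tb_to_gib(1):.1f} GiB")   # ~931.3 -- the familiar '931 GB' figure
print(raid5_usable_tb(1.0, 4))              # 3.0 TB usable from four 1 TB drives
print(raid1_usable_tb(1.0, 2))              # 1.0 TB usable from a two-drive mirror
```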
The Microsoft Windows family of operating systems uses the binary convention when reporting storage capacity, so an HDD offered by its manufacturer as a 1 TB drive is reported by these operating systems as a 931 GB HDD. Mac OS X 10.6 ("Snow Leopard '') uses decimal convention when reporting HDD capacity. The default behavior of the df command - line utility on Linux is to report the HDD capacity as a number of 1024 - byte units. The difference between the decimal and binary prefix interpretation caused some consumer confusion and led to class action suits against HDD manufacturers. The plaintiffs argued that the use of decimal prefixes effectively misled consumers while the defendants denied any wrongdoing or liability, asserting that their marketing and advertising complied in all respects with the law and that no class member sustained any damages or injuries. HDD price per byte improved at the rate of − 40 % per year during 1988 -- 1996, − 51 % per year during 1996 -- 2003, and − 34 % per year during 2003 -- 2010. The price improvement decelerated to − 13 % per year during 2011 -- 2014, as areal density increase slowed and the 2011 Thailand floods damaged manufacturing facilities. IBM 's first hard drive, the IBM 350, used a stack of fifty 24 - inch platters and was of a size comparable to two large refrigerators. In 1962, IBM introduced its model 1311 disk, which used six 14 - inch (nominal size) platters in a removable pack and was roughly the size of a washing machine. This became a standard platter size and drive form - factor for many years, used also by other manufacturers. The IBM 2314 used platters of the same size in an eleven - high pack and introduced the "drive in a drawer '' layout, although the "drawer '' was not the complete drive. Later drives were designed to fit entirely into a chassis that would mount in a 19 - inch rack. Digital 's RK05 and RL01 were early examples using single 14 - inch platters in removable packs, the entire drive fitting in a 10.5 - inch - high rack space (six rack units). In the mid-to - late 1980s the similarly sized Fujitsu Eagle, which used (coincidentally) 10.5 - inch platters, was a popular product. Such large platters were never used with microprocessor - based systems. With increasing sales of microcomputers having built - in floppy - disk drives (FDDs), HDDs that would fit the FDD mountings became desirable. Thus HDD form factors initially followed those of 8 - inch, 5.25 - inch, and 3.5 - inch floppy disk drives. Although referred to by these nominal sizes, the actual sizes for those three drives respectively are 9.5 ", 5.75 '' and 4 '' wide. Because there were no smaller floppy disk drives, smaller HDD form factors developed from product offerings or industry standards. 2.5 - inch drives are actually 2.75 '' wide. As of 2012, 2.5 - inch and 3.5 - inch hard disks were the most popular sizes. By 2009, all manufacturers had discontinued the development of new products for the 1.3 - inch, 1 - inch and 0.85 - inch form factors due to falling prices of flash memory, which has no moving parts. While nominal sizes are in inches, actual dimensions are specified in millimeters. The factors that limit the time to access the data on an HDD are mostly related to the mechanical nature of the rotating disks and moving heads. Seek time is a measure of how long it takes the head assembly to travel to the track of the disk that contains data. The first HDD had an average seek time of about 600 ms.
Some early PC drives used a stepper motor to move the heads, and as a result had seek times as slow as 80 -- 120 ms, but this was quickly improved by voice coil type actuation in the 1980s, reducing seek times to around 20 ms. Seek time has continued to improve slowly over time. The fastest server drives today have a seek time around 4 ms. The average seek time is strictly the time to do all possible seeks divided by the number of all possible seeks, but in practice is determined by statistical methods or simply approximated as the time of a seek over one - third of the number of tracks. Rotational latency is incurred because the desired disk sector may not be directly under the head when data transfer is requested. Average rotational latency is shown in the table, based on the statistical relation that the average latency is one - half the rotational period. The bit rate or data transfer rate (once the head is in the right position) creates delay which is a function of the number of blocks transferred; typically relatively small, but can be quite long with the transfer of large contiguous files. Delay may also occur if the drive disks are stopped to save energy. Defragmentation is a procedure used to minimize delay in retrieving data by moving related items to physically proximate areas on the disk. Some computer operating systems perform defragmentation automatically. Although automatic defragmentation is intended to reduce access delays, performance will be temporarily reduced while the procedure is in progress. Time to access data can be improved by increasing rotational speed (thus reducing latency) or by reducing the time spent seeking. Increasing areal density increases throughput by increasing data rate and by increasing the amount of data under a set of heads, thereby potentially reducing seek activity for a given amount of data. The time to access data has not kept up with throughput increases, which themselves have not kept up with growth in bit density and storage capacity. As of 2010, a typical 7,200 - rpm desktop HDD has a sustained "disk - to - buffer '' data transfer rate up to 1,030 Mbit / s. This rate depends on the track location; the rate is higher for data on the outer tracks (where there are more data sectors per rotation) and lower toward the inner tracks (where there are fewer data sectors per rotation); and is generally somewhat higher for 10,000 - rpm drives. A current widely used standard for the "buffer - to - computer '' interface is 3.0 Gbit / s SATA, which can send about 300 megabyte / s (10 - bit encoding) from the buffer to the computer, and thus is still comfortably ahead of today 's disk - to - buffer transfer rates. Data transfer rate (read / write) can be measured by writing a large file to disk using special file generator tools, then reading back the file. Transfer rate can be influenced by file system fragmentation and the layout of the files. HDD data transfer rate depends upon the rotational speed of the platters and the data recording density. Because heat and vibration limit rotational speed, advancing density becomes the main method to improve sequential transfer rates. Higher speeds require a more powerful spindle motor, which creates more heat. While areal density advances by increasing both the number of tracks across the disk and the number of sectors per track, only the latter increases the data transfer rate for a given rpm. 
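A short sketch of the latency and interface arithmetic described above: average rotational latency is half a revolution at the given spindle speed, and a 3.0 Gbit / s SATA link with 10 - bit (8b / 10b) encoding carries roughly 300 MB / s of payload. The spindle speeds listed are the common values mentioned earlier in the article.

```python
# Average rotational latency (half a revolution) and 3.0 Gbit/s SATA payload rate.
def avg_rotational_latency_ms(rpm: int) -> float:
    return 0.5 * (60.0 / rpm) * 1000      # half of one revolution, in milliseconds

for rpm in (4200, 5400, 7200, 10000, 15000):
    print(f"{rpm:>6} rpm -> {avg_rotational_latency_ms(rpm):.2f} ms")
# 5400 rpm -> 5.56 ms, 7200 rpm -> 4.17 ms, 15000 rpm -> 2.00 ms

line_rate_bits = 3.0e9                        # 3.0 Gbit/s SATA line rate
payload_bytes = line_rate_bits * 8 / 10 / 8   # 8b/10b: 10 bits on the wire per data byte
print(f"{payload_bytes / 1e6:.0f} MB/s")      # ~300 MB/s, matching the figure above
```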
Since data transfer rate performance tracks only one of the two components of areal density, its performance improves at a lower rate. Other performance considerations include quality - adjusted price, power consumption, audible noise, and both operating and non-operating shock resistance. The Federal Reserve Board has a quality - adjusted price index for large - scale enterprise storage systems including three or more enterprise HDDs and associated controllers, racks and cables. Prices for these large - scale storage systems improved at the rate of ‒ 30 % per year during 2004 -- 2009 and ‒ 22 % per year during 2009 -- 2014. Current hard drives connect to a computer over one of several bus types, including parallel ATA, Serial ATA, SCSI, Serial Attached SCSI (SAS), and Fibre Channel. Some drives, especially external portable drives, use IEEE 1394, or USB. All of these interfaces are digital; electronics on the drive process the analog signals from the read / write heads. Current drives present a consistent interface to the rest of the computer, independent of the data encoding scheme used internally, and independent of the physical number of disks and heads within the drive. Typically a DSP in the electronics inside the drive takes the raw analog voltages from the read head and uses PRML and Reed -- Solomon error correction to decode the data, then sends that data out the standard interface. That DSP also watches the error rate detected by error detection and correction, and performs bad sector remapping, data collection for Self - Monitoring, Analysis, and Reporting Technology, and other internal tasks. Modern interfaces connect the drive to the host interface with a single data / control cable. Each drive also has an additional power cable, usually direct to the power supply unit. Older interfaces had separate cables for data signals and for drive control signals. Due to the extremely close spacing between the heads and the disk surface, HDDs are vulnerable to being damaged by a head crash -- a failure of the disk in which the head scrapes across the platter surface, often grinding away the thin magnetic film and causing data loss. Head crashes can be caused by electronic failure, a sudden power failure, physical shock, contamination of the drive 's internal enclosure, wear and tear, corrosion, or poorly manufactured platters and heads. The HDD 's spindle system relies on air density inside the disk enclosure to support the heads at their proper flying height while the disk rotates. HDDs require a certain range of air densities to operate properly. The connection to the external environment and density occurs through a small hole in the enclosure (about 0.5 mm in breadth), usually with a filter on the inside (the breather filter). If the air density is too low, then there is not enough lift for the flying head, so the head gets too close to the disk, and there is a risk of head crashes and data loss. Specially manufactured sealed and pressurized disks are needed for reliable high - altitude operation, above about 3,000 m (9,800 ft). Modern disks include temperature sensors and adjust their operation to the operating environment. Breather holes can be seen on all disk drives -- they usually have a sticker next to them, warning the user not to cover the holes. The air inside the operating drive is constantly moving too, being swept in motion by friction with the spinning platters. 
This air passes through an internal recirculation (or "recirc '') filter to remove any leftover contaminants from manufacture, any particles or chemicals that may have somehow entered the enclosure, and any particles or outgassing generated internally in normal operation. Very high humidity present for extended periods of time can corrode the heads and platters. For giant magnetoresistive (GMR) heads in particular, a minor head crash from contamination (that does not remove the magnetic surface of the disk) still results in the head temporarily overheating, due to friction with the disk surface, and can render the data unreadable for a short period until the head temperature stabilizes (so - called "thermal asperity '', a problem which can partially be dealt with by proper electronic filtering of the read signal). When the logic board of a hard disk fails, the drive can often be restored to functioning order and the data recovered by replacing the circuit board with one of an identical hard disk. In the case of read - write head faults, they can be replaced using specialized tools in a dust - free environment. If the disk platters are undamaged, they can be transferred into an identical enclosure and the data can be copied or cloned onto a new drive. In the event of disk - platter failures, disassembly and imaging of the disk platters may be required. For logical damage to file systems, a variety of tools, including fsck on UNIX - like systems and CHKDSK on Windows, can be used for data recovery. Recovery from logical damage can require file carving. A common expectation is that hard disk drives designed and marketed for server use will fail less frequently than consumer - grade drives usually used in desktop computers. However, two independent studies by Carnegie Mellon University and Google found that the "grade '' of a drive does not relate to the drive 's failure rate. A 2011 summary of research into SSD and magnetic disk failure patterns was published by Tom 's Hardware. More than 200 companies have manufactured HDDs over time. But consolidations have concentrated production into just three manufacturers today: Western Digital, Seagate, and Toshiba. Worldwide revenue for disk storage declined 4 % per year, from a peak of $38 billion in 2012 to $27 billion in 2016. Production of HDDs grew 16 % per year, from 335 exabytes in 2011 to 693 exabytes in 2016. Shipments declined 7 % per year during this time period, from 620 million units to 425 million. Seagate and Western Digital each have 40 -- 45 % of unit shipments, while Toshiba has 13 -- 17 %. The average sales price for the two largest manufacturers was $60 per unit in 2015. The maximum areal storage density for flash memory used in solid state drives (SSDs) is 2.8 Tbit / in² in laboratory demonstrations as of 2016, and the maximum for HDDs is 1.5 Tbit / in². The areal density of flash memory is doubling every two years, similar to Moore 's law (40 % per year) and faster than the 10 -- 20 % per year for HDDs. As of 2016, maximum capacity was 10 terabytes for an HDD, and 15 terabytes for an SSD. HDDs were used in 70 % of the desktop and notebook computers produced in 2016, and SSDs were used in 30 %. The usage share of HDDs is declining and could drop below 50 % in 2018 -- 2019 according to one forecast, because SSDs are replacing smaller - capacity (less than one - terabyte) HDDs in desktop and notebook computers and MP3 players.
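The growth - rate comparison above mixes two equivalent ways of stating the same thing: a doubling time and an annual percentage rate. The conversion is a one - line compounding calculation; in the sketch below, the 15 % value is only an assumed midpoint of the 10 -- 20 % range quoted for HDDs.

```python
import math

# "Doubling every two years" restated as an annual growth rate, and vice versa.
doubling_years = 2.0
annual_rate = 2 ** (1 / doubling_years) - 1            # ~0.414, i.e. the ~40 % per year cited

hdd_rate = 0.15                                        # assumed midpoint of the 10-20 % range
hdd_doubling_years = math.log(2) / math.log(1 + hdd_rate)

print(f"doubling every {doubling_years:g} years = {annual_rate:.1%} per year")
print(f"{hdd_rate:.0%} per year = doubling every {hdd_doubling_years:.1f} years")
```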
The market for silicon - based flash memory (NAND) chips, used in SSDs and other applications, is growing rapidly. Worldwide revenue grew 12 % per year during 2011 -- 2016. It rose from $22 billion in 2011 to $39 billion in 2016, while production grew 46 % per year from 19 exabytes to 120 exabytes. External hard disk drives typically connect via USB; variants using the USB 2.0 interface generally have slower data transfer rates when compared to internally mounted hard drives connected through SATA. Plug - and - play drive functionality offers system compatibility, large storage options and a portable design. As of March 2015, available capacities for external hard disk drives ranged from 500 GB to 10 TB. External hard disk drives are usually available as pre-assembled integrated products, but may also be assembled by combining an external enclosure (with USB or other interface) with a separately purchased drive. They are available in 2.5 - inch and 3.5 - inch sizes; 2.5 - inch variants are typically called portable external drives, while 3.5 - inch variants are referred to as desktop external drives. "Portable '' drives are packaged in smaller and lighter enclosures than the "desktop '' drives; additionally, "portable '' drives use power provided by the USB connection, while "desktop '' drives require external power bricks. Features such as biometric security or multiple interfaces (for example, FireWire) are available at a higher cost. There are pre-assembled external hard disk drives that, when taken out from their enclosures, can not be used internally in a laptop or desktop computer because of the embedded USB interface on their printed circuit boards and the lack of SATA (or Parallel ATA) interfaces. In GUIs, hard disk drives are commonly symbolized with a drive icon.
who was involved in the brown vs board of education
Brown v. Board of Education - wikipedia Brown v. Board of Education of Topeka, 347 U.S. 483 (1954), was a landmark United States Supreme Court case in which the Court declared state laws establishing separate public schools for black and white students to be unconstitutional. The decision effectively overturned the Plessy v. Ferguson decision of 1896, which allowed state - sponsored segregation, insofar as it applied to public education. Handed down on May 17, 1954, the Warren Court 's unanimous (9 -- 0) decision stated that "separate educational facilities are inherently unequal. '' As a result, de jure racial segregation was ruled a violation of the Equal Protection Clause of the Fourteenth Amendment of the United States Constitution. This ruling paved the way for integration and was a major victory of the Civil Rights Movement, and a model for many future impact litigation cases. However, the decision 's fourteen pages did not spell out any sort of method for ending racial segregation in schools, and the Court 's second decision in Brown II (349 U.S. 294 (1955)) only ordered states to desegregate "with all deliberate speed ''. For much of the sixty years preceding the Brown case, race relations in the United States had been dominated by racial segregation. This policy had been endorsed in 1896 by the United States Supreme Court case of Plessy v. Ferguson, which held that as long as the separate facilities for the separate races were equal, segregation did not violate the Fourteenth Amendment ("no State shall... deny to any person... the equal protection of the laws ''). The plaintiffs in Brown asserted that this system of racial separation, while masquerading as providing separate but equal treatment of both white and black Americans, instead perpetuated inferior accommodations, services, and treatment for black Americans. Racial segregation in education varied widely from the 17 states that required racial segregation to the 16 in which it was prohibited. Brown was influenced by UNESCO 's 1950 Statement, signed by a wide variety of internationally renowned scholars, titled The Race Question. This declaration denounced previous attempts at scientifically justifying racism as well as morally condemning racism. Another work that the Supreme Court cited was Gunnar Myrdal 's An American Dilemma: The Negro Problem and Modern Democracy (1944). Myrdal had been a signatory of the UNESCO declaration. The research performed by the educational psychologists Kenneth B. Clark and Mamie Phipps Clark also influenced the Court 's decision. The Clarks ' "doll test '' studies presented substantial arguments to the Supreme Court about how segregation affected black schoolchildren 's mental status. The United States and the Soviet Union were both at the height of the Cold War during this time, and U.S. officials, including Supreme Court Justices, were highly aware of the harm that segregation and racism played on America 's international image. When Justice William O. Douglas traveled to India in 1950, the first question he was asked was, "Why does America tolerate the lynching of Negroes? '' Douglas later wrote that he had learned from his travels that "the attitude of the United States toward its colored minorities is a powerful factor in our relations with India. 
'' Chief Justice Earl Warren, nominated to the Supreme Court by President Eisenhower, echoed Douglas 's concerns in a 1954 speech to the American Bar Association, proclaiming that "Our American system like all others is on trial both at home and abroad,... the extent to which we maintain the spirit of our constitution with its Bill of Rights, will in the long run do more to make it both secure and the object of adulation than the number of hydrogen bombs we stockpile. '' In 1951, a class action suit was filed against the Board of Education of the City of Topeka, Kansas in the United States District Court for the District of Kansas. The plaintiffs were thirteen Topeka parents on behalf of their 20 children. The suit called for the school district to reverse its policy of racial segregation. The Topeka Board of Education operated separate elementary schools under an 1879 Kansas law, which permitted (but did not require) districts to maintain separate elementary school facilities for black and white students in 12 communities with populations over 15,000. The plaintiffs had been recruited by the leadership of the Topeka NAACP. Notable among the Topeka NAACP leaders were the chairman McKinley Burnett; Charles Scott, one of three serving as legal counsel for the chapter; and Lucinda Todd. The named plaintiff, Oliver L. Brown, was a parent, a welder in the shops of the Santa Fe Railroad, an assistant pastor at his local church, and an African American. He was convinced to join the lawsuit by Scott, a childhood friend. Brown 's daughter Linda, a third grader, had to walk six blocks to her school bus stop to ride to Monroe Elementary, her segregated black school one mile (1.6 km) away, while Sumner Elementary, a white school, was seven blocks from her house. As directed by the NAACP leadership, the parents each attempted to enroll their children in the closest neighborhood school in the fall of 1951. They were each refused enrollment and directed to the segregated schools. The case "Oliver Brown et al. v. The Board of Education of Topeka, Kansas '' was named after Oliver Brown as a legal strategy to have a man at the head of the roster. The lawyers, and the National Chapter of the NAACP, also felt that having Mr. Brown at the head of the roster would be better received by the U.S. Supreme Court Justices. The 13 plaintiffs were: Oliver Brown, Darlene Brown, Lena Carper, Sadie Emmanuel, Marguerite Emerson, Shirley Fleming, Zelma Henderson, Shirley Hodison, Maude Lawton, Alma Lewis, Iona Richardson, and Lucinda Todd. The last surviving plaintiff, Zelma Henderson, died in Topeka, on May 20, 2008, at age 88. The District Court ruled in favor of the Board of Education, citing the U.S. Supreme Court precedent set in Plessy v. Ferguson, 163 U.S. 537 (1896), which had upheld a state law requiring "separate but equal '' segregated facilities for blacks and whites in railway cars. The three - judge District Court panel found that segregation in public education has a detrimental effect on negro children, but denied relief on the ground that the negro and white schools in Topeka were substantially equal with respect to buildings, transportation, curricula, and educational qualifications of teachers. The case of Brown v. Board of Education as heard before the Supreme Court combined five cases: Brown itself, Briggs v. Elliott (filed in South Carolina), Davis v. County School Board of Prince Edward County (filed in Virginia), Gebhart v. Belton (filed in Delaware), and Bolling v. 
Sharpe (filed in Washington D.C.). All were NAACP - sponsored cases. The Davis case, the only case of the five originating from a student protest, began when 16 - year - old Barbara Rose Johns organized and led a 450 - student walkout of Moton High School. The Gebhart case was the only one where a trial court, affirmed by the Delaware Supreme Court, found that discrimination was unlawful; in all the other cases the plaintiffs had lost as the original courts had found discrimination to be lawful. The Kansas case was unique among the group in that there was no contention of gross inferiority of the segregated schools ' physical plant, curriculum, or staff. The district court found substantial equality as to all such factors. The lower court, in its opinion, noted that, in Topeka, "the physical facilities, the curricula, courses of study, qualification and quality of teachers, as well as other educational facilities in the two sets of schools (were) comparable. '' The lower court observed that "colored children in many instances are required to travel much greater distances than they would be required to travel could they attend a white school '' but also noted that the school district "transports colored children to and from school free of charge '' and that "no such service (was) provided to white children. '' In the Delaware case the district court judge in Gebhart ordered that the black students be admitted to the white high school due to the substantial harm of segregation and the differences that made the separate schools unequal. The NAACP 's chief counsel, Thurgood Marshall -- who was later appointed to the U.S. Supreme Court in 1967 -- argued the case before the Supreme Court for the plaintiffs. Assistant attorney general Paul Wilson -- later distinguished emeritus professor of law at the University of Kansas -- conducted the state 's ambivalent defense in his first appellate argument. In December 1952, the Justice Department filed a friend of the court brief in the case. The brief was unusual in its heavy emphasis on foreign - policy considerations of the Truman administration in a case ostensibly about domestic issues. Of the seven pages covering "the interest of the United States, '' five focused on the way school segregation hurt the United States in the Cold War competition for the friendship and allegiance of non-white peoples in countries then gaining independence from colonial rule. Attorney General James P. McGranery noted that The existence of discrimination against minority groups in the United States has an adverse effect upon our relations with other countries. Racial discrimination furnishes grist for the Communist propaganda mills. The brief also quoted a letter by Secretary of State Dean Acheson lamenting that The United States is under constant attack in the foreign press, over the foreign radio, and in such international bodies as the United Nations because of various practices of discrimination in this country. British barrister and parliamentarian Anthony Lester has written that "Although the Court 's opinion in Brown made no reference to these considerations of foreign policy, there is no doubt that they significantly influenced the decision. '' In spring 1953, the Court heard the case but was unable to decide the issue and asked to rehear the case in fall 1953, with special attention to whether the Fourteenth Amendment 's Equal Protection Clause prohibited the operation of separate public schools for whites and blacks. 
The Court reargued the case at the behest of Associate Justice Felix Frankfurter, who used reargument as a stalling tactic, to allow the Court to gather a consensus around a Brown opinion that would outlaw segregation. The justices in support of desegregation spent much effort convincing those who initially intended to dissent to join a unanimous opinion. Although the legal effect would be same for a majority rather than unanimous decision, it was felt that dissent could be used by segregation supporters as a legitimizing counter-argument. Conference notes and draft decisions illustrate the division of opinions before the decision was issued. Justices Douglas, Black, Burton, and Minton were predisposed to overturn Plessy. Fred M. Vinson noted that Congress had not issued desegregation legislation; Stanley F. Reed discussed incomplete cultural assimilation and states ' rights and was inclined to the view that segregation worked to the benefit of the African - American community; Tom C. Clark wrote that "we had led the states on to think segregation is OK and we should let them work it out. '' Felix Frankfurter and Robert H. Jackson disapproved of segregation, but were also opposed to judicial activism and expressed concerns about the proposed decision 's enforceability. Chief Justice Vinson had been a key stumbling block. After Vinson died in September 1953, President Dwight D. Eisenhower appointed Earl Warren as Chief Justice. Warren had supported the integration of Mexican - American students in California school systems following Mendez v. Westminster. However, Eisenhower invited Earl Warren to a White House dinner, where the president told him: "These (southern whites) are not bad people. All they are concerned about is to see that their sweet little girls are not required to sit in school alongside some big overgrown Negroes. '' Nevertheless, the Justice Department sided with the African American plaintiffs. In his reading of the unanimous decision, Justice Warren noted the adverse psychological effects that segregated schools had on African American children. Brown 's cite of the Kenneth and Mamie Doll Study was criticized by Justice Clarence Thomas in a later concurring opinion for implying black inferiority. According to Susan Firestone, the study itself is dubious in conclusion and unreliable in reproduction. While all but one justice personally rejected segregation, the judicial restraint faction questioned whether the Constitution gave the Court the power to order its end. The activist faction believed the Fourteenth Amendment did give the necessary authority and were pushing to go ahead. Warren, who held only a recess appointment, held his tongue until the Senate confirmed his appointment. Warren convened a meeting of the justices, and presented to them the simple argument that the only reason to sustain segregation was an honest belief in the inferiority of Negroes. Warren further submitted that the Court must overrule Plessy to maintain its legitimacy as an institution of liberty, and it must do so unanimously to avoid massive Southern resistance. He began to build a unanimous opinion. Although most justices were immediately convinced, Warren spent some time after this famous speech convincing everyone to sign onto the opinion. Justices Jackson and Reed finally decided to drop their dissent. The final decision was unanimous. Warren drafted the basic opinion and kept circulating and revising it until he had an opinion endorsed by all the members of the Court. 
Reed was the last holdout and reportedly cried during the reading of the opinion. Reporters who observed the court holding were surprised by two facts. First, the court made a unanimous decision. Prior to the ruling, there were reports that the court members were sharply divided and might not be able to agree. Second, the attendance of Justice Robert H. Jackson who had suffered a mild heart attack and was not expected to return to the bench until early June 1954. "Perhaps to emphasize the unanimity of the court, perhaps from a desire to be present when the history - making verdict was announced, Justice Jackson was in his accustomed seat when the court convened. '' Reporters also noted that former Secretary of State Dean Acheson (who had related the case to foreign policy considerations) and current Attorney General Herbert Brownell were in the courtroom. The key holding of the Court was that, even if segregated black and white schools were of equal quality in facilities and teachers, segregation by itself was harmful to black students and unconstitutional. They found that a significant psychological and social disadvantage was given to black children from the nature of segregation itself, drawing on research conducted by Kenneth Clark assisted by June Shagaloff. This aspect was vital because the question was not whether the schools were "equal '', which under Plessy they nominally should have been, but whether the doctrine of separate was constitutional. The justices answered with a strong "no '': (D) oes segregation of children in public schools solely on the basis of race, even though the physical facilities and other "tangible '' factors may be equal, deprive the children of the minority group of equal educational opportunities? We believe that it does... "Segregation of white and colored children in public schools has a detrimental effect upon the colored children. The effect is greater when it has the sanction of the law, for the policy of separating the races is usually interpreted as denoting the inferiority of the negro group. A sense of inferiority affects the motivation of a child to learn. Segregation with the sanction of law, therefore, has a tendency to (retard) the educational and mental development of negro children and to deprive them of some of the benefits they would receive in a racial (ly) integrated school system. ''... We conclude that, in the field of public education, the doctrine of "separate but equal '' has no place. Separate educational facilities are inherently unequal. Therefore, we hold that the plaintiffs and others similarly situated for whom the actions have been brought are, by reason of the segregation complained of, deprived of the equal protection of the laws guaranteed by the Fourteenth Amendment. The Topeka junior high schools had been integrated since 1941. Topeka High School was integrated from its inception in 1871 and its sports teams from 1949 on. The Kansas law permitting segregated schools allowed them only "below the high school level ''. Soon after the district court decision, election outcomes and the political climate in Topeka changed. The Board of Education of Topeka began to end segregation in the Topeka elementary schools in August 1953, integrating two attendance districts. All the Topeka elementary schools were changed to neighborhood attendance centers in January 1956, although existing students were allowed to continue attending their prior assigned schools at their option. 
Plaintiff Zelma Henderson, in a 2004 interview, recalled that no demonstrations or tumult accompanied desegregation in Topeka 's schools: "They accepted it, '' she said. "It was n't too long until they integrated the teachers and principals. '' The Topeka Public Schools administration building is named in honor of McKinley Burnett, NAACP chapter president who organized the case. Monroe Elementary was designated a U.S. National Historic Site unit of the National Park Service on October 26, 1992. Not everyone accepted the Brown v. Board of Education decision. In Virginia, Senator Harry F. Byrd, Sr. organized the Massive Resistance movement that included the closing of schools rather than desegregating them. See, for example, The Southern Manifesto. For more implications of the Brown decision, see School integration in the United States. Texas Attorney General John Ben Shepperd organized a campaign to generate legal obstacles to implementation of desegregation. In 1957, Arkansas Governor Orval Faubus called out his state 's National Guard to block black students ' entry to Little Rock Central High School. President Dwight Eisenhower responded by deploying elements of the 101st Airborne Division from Fort Campbell, Kentucky, to Arkansas and by federalizing Arkansas 's National Guard. Also in 1957, Florida 's response was mixed. Its legislature passed an Interposition Resolution denouncing the decision and declaring it null and void. But Florida Governor LeRoy Collins, though joining in the protest against the court decision, refused to sign it, arguing that the attempt to overturn the ruling must be done by legal methods. In Mississippi fear of violence prevented any plaintiff from bringing a school desegregation suit for the next nine years. When Medgar Evers sued to desegregate Jackson, Mississippi schools in 1963 White Citizens Council member Byron De La Beckwith murdered him. Two subsequent trials resulted in hung juries. Beckwith was not convicted of the murder until 1994. In 1963, Alabama Gov. George Wallace personally blocked the door to Foster Auditorium at the University of Alabama to prevent the enrollment of two black students. This became the infamous Stand in the Schoolhouse Door where Wallace personally backed his "segregation now, segregation tomorrow, segregation forever '' policy that he had stated in his 1963 inaugural address. He moved aside only when confronted by General Henry Graham of the Alabama National Guard, who was ordered by President John F. Kennedy to intervene. In North Carolina, there was often a strategy of nominally accepting Brown, but tacitly resisting it. On May 18, 1954 the Greensboro, North Carolina school board declared that it would abide by the Brown ruling. This was the result of the initiative of D.E. Hudgins Jr., a former Rhodes Scholar and prominent attorney, who chaired the school board. This made Greensboro the first, and for years the only, city in the South, to announce its intent to comply. However, others in the city resisted integration, putting up legal obstacles to the actual implementation of school desegregation for years afterward, and in 1969, the federal government found the city was not in compliance with the 1964 Civil Rights Act. Transition to a fully integrated school system did not begin until 1971, after numerous local lawsuits and both nonviolent and violent demonstrations. 
Historians have noted the irony that Greensboro, which had heralded itself as such a progressive city, was one of the last holdouts for school desegregation. In Moberly, Missouri, the schools were desegregated, as ordered. However, after 1955, the African - American teachers from the local "negro school '' were not retained; this was ascribed to poor performance. They appealed their dismissal in Naomi Brooks et al., Appellants, v. School District of City of Moberly, Missouri, Etc., et al.; but it was upheld, and SCOTUS declined to hear a further appeal. Many Northern cities also had de facto segregation policies, which resulted in a vast gulf in educational resources between black and white communities. In Harlem, New York, for example, not a single new school had been built since the turn of the century, nor did a single nursery school exist, even as the Second Great Migration caused overcrowding of existing schools. Existing schools tended to be dilapidated and staffed with inexperienced teachers. Northern officials were in denial of the segregation, but Brown helped stimulate activism among African - American parents like Mae Mallory who, with support of the NAACP, initiated a successful lawsuit against the city and State of New York on Brown 's principles. Mallory and thousands of other parents bolstered the pressure of the lawsuit with a school boycott in 1959. During the boycott, some of the first freedom schools of the period were established. The city responded to the campaign by permitting more open transfers to high - quality, historically - white schools. (New York 's African - American community, and Northern desegregation activists generally, now found themselves contending with the problem of white flight, however.) The intellectual roots of Plessy v. Ferguson, the landmark United States Supreme Court decision upholding the constitutionality of racial segregation in 1896 under the doctrine of "separate but equal '' were, in part, tied to the scientific racism of the era. However, the popular support for the decision was more likely a result of the racist beliefs held by many whites at the time. In deciding Brown v. Board of Education, the Supreme Court rejected the ideas of scientific racists about the need for segregation, especially in schools. The Court buttressed its holding by citing (in footnote 11) social science research about the harms to black children caused by segregated schools. Both scholarly and popular ideas of hereditarianism played an important role in the attack and backlash that followed the Brown decision. The Mankind Quarterly was founded in 1960, in part in response to the Brown decision. William Rehnquist wrote a memo titled "A Random Thought on the Segregation Cases '' when he was a law clerk for Justice Robert H. Jackson in 1952, during early deliberations that led to the Brown v. Board of Education decision. In his memo, Rehnquist argued: "I realize that it is an unpopular and unhumanitarian position, for which I have been excoriated by ' liberal ' colleagues but I think Plessy v. Ferguson was right and should be reaffirmed. '' Rehnquist continued, "To the argument... that a majority may not deprive a minority of its constitutional right, the answer must be made that while this is sound in theory, in the long run it is the majority who will determine what the constitutional rights of the minorities are. '' Rehnquist also argued for Plessy with other law clerks. 
However, during his 1971 confirmation hearings, Rehnquist said, "I believe that the memorandum was prepared by me as a statement of Justice Jackson 's tentative views for his own use. '' Justice Jackson had initially planned to join a dissent in Brown. Later, at his 1986 hearings for the slot of Chief Justice, Rehnquist put further distance between himself and the 1952 memo: "The bald statement that Plessy was right and should be reaffirmed, was not an accurate reflection of my own views at the time. '' In any event, while serving on the Supreme Court, Rehnquist made no effort to reverse or undermine the Brown decision, and frequently relied upon it as precedent. Chief Justice Warren 's reasoning was broadly criticized by contemporary legal academics with Judge Learned Hand decrying that the Supreme Court had "assumed the role of a third legislative chamber '' and Herbert Wechsler finding Brown impossible to justify based on neutral principles. Some aspects of the Brown decision are still debated. Notably, Supreme Court Justice Clarence Thomas, himself an African American, wrote in Missouri v. Jenkins (1995) that at the very least, Brown I has been misunderstood by the courts. Brown I did not say that "racially isolated '' schools were inherently inferior; the harm that it identified was tied purely to de jure segregation, not de facto segregation. Indeed, Brown I itself did not need to rely upon any psychological or social - science research in order to announce the simple, yet fundamental truth that the Government can not discriminate among its citizens on the basis of race.... Segregation was not unconstitutional because it might have caused psychological feelings of inferiority. Public school systems that separated blacks and provided them with superior educational resources making blacks "feel '' superior to whites sent to lesser schools -- would violate the Fourteenth Amendment, whether or not the white students felt stigmatized, just as do school systems in which the positions of the races are reversed. Psychological injury or benefit is irrelevant... Given that desegregation has not produced the predicted leaps forward in black educational achievement, there is no reason to think that black students can not learn as well when surrounded by members of their own race as when they are in an integrated environment. (...) Because of their "distinctive histories and traditions, '' black schools can function as the center and symbol of black communities, and provide examples of independent black leadership, success, and achievement. Some Constitutional originalists, notably Raoul Berger in his influential 1977 book "Government by Judiciary, '' make the case that Brown can not be defended by reference to the original understanding of the 14th Amendment. They support this reading of the 14th amendment by noting that the Civil Rights Act of 1875 did not ban segregated schools and that the same Congress that passed the 14th Amendment also voted to segregate schools in the District of Columbia. Other originalists, including Michael W. McConnell, a federal judge on the United States Court of Appeals for the Tenth Circuit, in his article "Originalism and the Desegregation Decisions, '' argue that the Radical Reconstructionists who spearheaded the 14th Amendment were in favor of desegregated southern schools. 
Evidence supporting this interpretation of the 14th amendment has come from archived Congressional records showing that proposals for federal legislation which would enforce school integration were debated in Congress a few years following the amendment 's ratification. The case also has attracted some criticism from more liberal authors, including some who say that Chief Justice Warren 's reliance on psychological criteria to find a harm against segregated blacks was unnecessary. For example, Drew S. Days has written: "we have developed criteria for evaluating the constitutionality of racial classifications that do not depend upon findings of psychic harm or social science evidence. They are based rather on the principle that ' distinctions between citizens solely because of their ancestry are by their very nature odious to a free people whose institutions are founded upon the doctrine of equality, ' Hirabayashi v. United States, 320 U.S. 81 (1943)... '' In his book The Tempting of America (page 82), Robert Bork endorsed the Brown decision as follows: By 1954, when Brown came up for decision, it had been apparent for some time that segregation rarely if ever produced equality. Quite aside from any question of psychology, the physical facilities provided for blacks were not as good as those provided for whites. That had been demonstrated in a long series of cases... The Court 's realistic choice, therefore, was either to abandon the quest for equality by allowing segregation or to forbid segregation in order to achieve equality. There was no third choice. Either choice would violate one aspect of the original understanding, but there was no possibility of avoiding that. Since equality and segregation were mutually inconsistent, though the ratifiers did not understand that, both could not be honored. When that is seen, it is obvious the Court must choose equality and prohibit state - imposed segregation. The purpose that brought the fourteenth amendment into being was equality before the law, and equality, not separation, was written into the law. In June 1987, Philip Elman, a civil rights attorney who served as an associate in the Solicitor General 's office during Harry Truman 's term, claimed he and Associate Justice Felix Frankfurter were mostly responsible for the Supreme Court 's decision, and stated that the NAACP 's arguments did not present strong evidence. Elman has been criticized for offering a self - aggrandizing history of the case, omitting important facts, and denigrating the work of civil rights attorneys who had laid the groundwork for the decision over many decades. However, Frankfurter was also known for being one of court 's most outspoken advocates of the judicial restraint philosophy of basing court rulings on existing law rather than personal or political considerations. Public officials in the United States today are nearly unanimous in lauding the ruling. In May 2004, the fiftieth anniversary of the ruling, President George W. Bush spoke at the opening of the Brown v. Board of Education National Historic Site, calling Brown "a decision that changed America for the better, and forever. '' Most Senators and Representatives issued press releases hailing the ruling. In an article in Townhall, Thomas Sowell argued that When Chief Justice Earl Warren declared in the landmark 1954 case of Brown v. Board of Education that racially separate schools were "inherently unequal, '' Dunbar High School was a living refutation of that assumption. 
And it was within walking distance of the Supreme Court. '' In 1955, the Supreme Court considered arguments by the schools requesting relief concerning the task of desegregation. In their decision, which became known as "Brown II '' the court delegated the task of carrying out school desegregation to district courts with orders that desegregation occur "with all deliberate speed, '' a phrase traceable to Francis Thompson 's poem, The Hound of Heaven. Supporters of the earlier decision were displeased with this decision. The language "all deliberate speed '' was seen by critics as too ambiguous to ensure reasonable haste for compliance with the court 's instruction. Many Southern states and school districts interpreted "Brown II '' as legal justification for resisting, delaying, and avoiding significant integration for years -- and in some cases for a decade or more -- using such tactics as closing down school systems, using state money to finance segregated "private '' schools, and "token '' integration where a few carefully selected black children were admitted to former white - only schools but the vast majority remained in underfunded, unequal black schools. For example, based on "Brown II, '' the U.S. District Court ruled that Prince Edward County, Virginia did not have to desegregate immediately. When faced with a court order to finally begin desegregation in 1959 the county board of supervisors stopped appropriating money for public schools, which remained closed for five years, from 1959 to 1964. White students in the county were given assistance to attend white - only "private academies '' that were taught by teachers formerly employed by the public school system, while black students had no education at all unless they moved out of the county. But the public schools reopened after the Supreme Court overturned "Brown II '' in Griffin v. County School Board of Prince Edward County, declaring that "... the time for mere ' deliberate speed ' has run out, '' and that the county must provide a public school system for all children regardless of race. In 1978, Topeka attorneys Richard Jones, Joseph Johnson and Charles Scott, Jr. (son of the original Brown team member), with assistance from the American Civil Liberties Union, persuaded Linda Brown Smith -- who now had her own children in Topeka schools -- to be a plaintiff in reopening Brown. They were concerned that the Topeka Public Schools ' policy of "open enrollment '' had led to and would lead to further segregation. They also believed that with a choice of open enrollment, white parents would shift their children to "preferred '' schools that would create both predominantly African American and predominantly European American schools within the district. The district court reopened the Brown case after a 25 - year hiatus, but denied the plaintiffs ' request finding the schools "unitary ''. In 1989, a three - judge panel of the Tenth Circuit on 2 -- 1 vote found that the vestiges of segregation remained with respect to student and staff assignment. In 1993, the Supreme Court denied the appellant School District 's request for certiorari and returned the case to District Court Judge Richard Rodgers for implementation of the Tenth Circuit 's mandate. After a 1994 plan was approved and a bond issue passed, additional elementary magnet schools were opened and district attendance plans redrawn, which resulted in the Topeka schools meeting court standards of racial balance by 1998. 
Unified status was eventually granted to Topeka Unified School District No. 501 on July 27, 1999. One of the new magnet schools is named after the Scott family attorneys for their role in the Brown case and civil rights. Linda Brown Thompson later recalled the experience of being refused enrollment: ... we lived in an integrated neighborhood and I had all of these playmates of different nationalities. And so when I found out that day that I might be able to go to their school, I was just thrilled, you know. And I remember walking over to Sumner school with my dad that day and going up the steps of the school and the school looked so big to a smaller child. And I remember going inside and my dad spoke with someone and then he went into the inner office with the principal and they left me out... to sit outside with the secretary. And while he was in the inner office, I could hear voices and hear his voice raised, you know, as the conversation went on. And then he immediately came out of the office, took me by the hand and we walked home from the school. I just could n't understand what was happening because I was so sure that I was going to go to school with Mona and Guinevere, Wanda, and all of my playmates. Linda Brown died on March 25, 2018 at the age of 76. I thought Plessy had been wrongly decided at the time, that it was not a good interpretation of the equal protection clause to say that when you segregate people by race, there is no denial of equal protection. But Plessy had been on the books for 60 years; Congress had never acted, and the same Congress that had promulgated the 14th Amendment had required segregation in the District schools... I saw factors on both sides... I did not agree then, and I certainly do not agree now, with the statement that Plessy against Ferguson is right and should be reaffirmed. I had ideas on both sides, and I do not think I ever really finally settled in my own mind on that... (A) round the lunch table I am sure I defended it... I thought there were good arguments to be made in support of it.
is there a city in turkey called santa claus
List of cities in Turkey - wikipedia This is a list of cities in Turkey by population, which includes cities that are provincial capitals or have a population of at least 7,000. The total population of Turkey is 78,741,053 according to the 2015 estimate, making it the 19th most populated country in the world. Cities with a population of over 7,000 inhabitants according to the Address - Based Birth Recording System data from 31 December 2007 are listed in the following table, along with the results of the censuses from 21 October 1990 and 22 October 2000, as well as the provinces in which the cities are located. The numbers of inhabitants refer to the actual city, not including urban areas. Image captions: Istanbul, the largest city in Turkey; Ankara, the second - largest city and capital of Turkey; İzmir, the third - largest city and largest city in the Aegean Region; Bursa, the fourth - largest city and former Ottoman capital; Adana, the fifth - largest city and largest city in the Mediterranean Region; Gaziantep, the sixth - largest city and largest in the Southeastern Anatolia Region; Konya, the seventh - largest city and former Seljuk capital; Antalya, the eighth - largest city in Turkey; Kayseri, the ninth - largest city in Turkey; Mersin, the tenth - largest city in Turkey; Eskişehir, the eleventh - largest city in Turkey; Diyarbakır, the twelfth - largest city of Turkey; Samsun, the thirteenth - largest city and largest in the Black Sea Region; Denizli, the fourteenth - largest city and most successful Anatolian Tiger; Şanlıurfa; Malatya, the seventeenth - largest city and largest in the Eastern Anatolia Region; Kahramanmaraş; Erzurum; İskenderun in Hatay; Selimiye Mosque in Edirne. The Turkish Republic of Northern Cyprus is not officially recognized by the United Nations, recognized only by Turkey; see Cyprus dispute.
when did australia change from gallons to litres
Metrication in Australia - Wikipedia Metrication in Australia effectively began in 1966 with the conversion to decimal currency under the auspices of the Decimal Currency Board. The conversion of measurements -- metrication -- commenced subsequently in 1971, under the direction of the Metric Conversion Board and actively proceeded until the Board was disbanded in 1981. Before 1970, Australia mostly used the imperial system for measurement, which the Australian colonies had inherited from the United Kingdom. Between 1970 and 1988, imperial units were withdrawn from general legal use and replaced with SI metric units, facilitated through legislation and government agencies. SI units are now the sole legal units of measurement in Australia. Australia 's largely successful transition to the metric system contrasts with the ongoing opposition to metrication in the United States, Canada and the United Kingdom. Although there was debate in Australia 's first Parliament after federation to consider adopting the metric system, metric units first became legal for use in Australia in 1947 when Australia signed the Metre Convention (or Convention du Mètre). However, Imperial "Weights and Measures '' were most commonly used until the Commonwealth government began the metric changeover in the 1970s. In 1960, SI units were adopted as a worldwide system of measurement by international agreement at the General Conference on Weights and Measures. The metre, kilogram, second, ampere, kelvin, candela and mole were defined as base units in this system and units formed from combinations of these base units were known as "derived units ''. SI units were subsequently adopted as the basis for Australia 's measurement standards, whereby they were defined as Australia 's legal units of measurement. In 1968, a Select Committee of the Australian Senate examined metric "Weights and Measures '' and came to the unanimous conclusion that it was both practical and desirable for Australia to change to the metric system. Some of their considerations included the "inherent advantages of the metric system '' that meant that weighing and measuring was facilitated, "often with substantial increases in efficiency ''. Educationally, the reform would "simplify and unify the teaching of mathematics and science, reduce errors, save teaching time and give a better understanding of basic physical principles ''. In 1968, more than 75 % of Australia 's exports went to metric countries, and at that time it was noted that all countries except the United States were metric or were converting to the metric system. It was also noted that because of Australia 's large migrant programme, more than 10 per cent of people over 16 years of age had used the metric system before coming to Australia. They also noted that school pupils were widely familiar with the metric system because it had been taught in the schools for many years. By 1968, metrication was already well under way in Australian industry. The pharmaceutical industry had metricated in 1965 and much of the chemical and electronics industries worked in metric units -- there being no "Imperial '' units for the latter. One of the country 's major automobile manufacturers had already declared its intention to metricate before the Government announced its decision to change to the metric system. 
"The change itself provided a unique opportunity to rationalise and modernise industrial practices and bring Australia 's technical standard specifications into accord with those adopted internationally ''. On 12 June 1970, the Australian Metric Conversion Act passed by the Australian Parliament was given assent. This Act created the Metric Conversion Board to facilitate the conversion of measurements from imperial to metric. A timeline of major developments in this conversion process is as follows: The Metric Conversion Board spent A $5.955 million during its 11 years of operation, and the federal government distributed $10 million to the states to support their conversion process. The cost of metrication for the private sector was not determined but the Prices Justification Tribunal reported that metrication was not used to justify price increases. Opposition to metrication was not widespread. The Metric Conversion Board did not proceed with education programmes as polling revealed that most people were learning units and their application independently of each other, rendering efforts to teach the systematic nature of the metric system unnecessary and possibly increasing the amount of opposition. The Metric Conversion Board was dissolved in 1981, but the conversion to the metric system was not completed until 1988. Between 1984 and 1988, the conversion was the responsibility of the National Standards Commission, later renamed the National Measurement Institute. In 1987, real estate became the last major industry to convert, and, in 1988, the few remaining imperial units were removed from general use. Restrictions of volume and masses that had previously compelled manufacturers to package products to rounded imperial sizes despite metric labelling, for example packaging a soft drink can as a rounded 13 imp fl oz but labelling it as 375 ml, were removed in 2008. An early change was the metrication of horse racing. This was facilitated because the furlong (one eighth of a mile) is close to 200 m. Therefore, the Melbourne Cup was changed from 2 miles (3,219 m) to 3,200 m (1.988 mi), a reduction of 18.688 metres (61.312 ft) or about 0.58 %. The first metric Melbourne Cup was raced in November 1972. When the Australian Bureau of Meteorology was enlisted to introduce the metric system for weather reporting and forecasts, its public relations officer, Godfrey Wiseman, coined a series of jingles to educate the public, using the terms "frosty fives '', "tingling tens '', "temperate twenties '', "thirsty thirties '' and "fiery forties '' to describe human sensation to temperatures in degrees Celsius. This was very successful because the public soon became aware of the significance of the descriptions. As the culmination of this campaign, weather reports and forecasts in both Imperial and metric measures were only provided for one month. After that, purely metric measurements were used for temperature, wind speed, rainfall, air pressure and other meteorological phenomena. An important and very visible sign of metric conversion in Australia was the change in road signs and the accompanying traffic regulations; "M - day '' for this change was 1 July 1974. Because of careful planning, almost every road sign in Australia was converted within a month. This was achieved by installing covered metric signs alongside the imperial signs before the change and then removing the imperial sign and uncovering the metric sign during the month of conversion. 
While road signs could not all be changed at the same time, there was little chance of confusion as to what any speed limit sign meant during this short change - over period. This was because the previous (MPH) signs had the signage in black on white and were rectangular, in the same style as current US speed limit signs, while the (km / h) signs which replaced them had the number indicating the speed limit inside a red circle, as is done in Europe. Road distance signs were also converted during this period. To avoid confusion as to whether the distance indicated was in miles or kilometres all the new kilometre signs had affixed to them a temporary yellow plate, on which was indicated the corresponding number of miles. These temporary plates were removed after about one year. Except for bridge - clearance and flood - depth signs, dual marking was avoided. Though people opposed to metrication expressed the fear that ignorance of the meaning of metric speeds would lead to slaughter on the roads, this did not happen as most drivers under the age of 25 had been taught metric units at school and through them, their parents were familiar with metric speeds if not metric units as a whole. It was believed that public education would be the most effective way of ensuring public safety. A Panel for Publicity on Road Travel, made up of the various motoring organisations, regulatory authorities and the media, planned a campaign to publicise the change. The resulting publicity campaign cost $200,000 and the Australian Government Department of Transport paid for it. The Board also produced 2.5 million copies of a pamphlet, "Motoring Goes Metric ''. This was distributed through post offices, police stations and motor registry offices. "For about a year before the change, motor car manufacturers fitted dual speedometers to their vehicles and, after 1974, all new cars were fitted with metric - only speedometers. Several kinds of speedometer conversion kits were available. "As a result of all these changes, conversion on the roads occurred without incident. '' The building industry was the first major industry grouping in Australia to complete its change to metric. This was achieved within two years by January 1976 for all new buildings other than those for which design had commenced well before metrication began. The resulting savings for builders and their sub-contractors have been estimated at about 10 % a year of gross turnover. In this the industry was grateful to the SAA (now Standards Australia) for the early production of the Standard AS 1155 - 1974 "Metric Units for Use in the Construction Industry '', which specified the use of millimetres as the small unit for the metrication upgrade. In the adoption of the millimetre as the "small '' unit of length for metrication (instead of the centimetre) the Metric Conversion Board leaned heavily on experience in the UK and within the ISO, where this decision had already been taken. This was formally stated as follows: "The metric units for linear measurement in building and construction will be the metre (m) and the millimetre (mm), with the kilometre (km) being used where required. This will apply to all sectors of the industry, and the centimetre (cm) shall not be used.... the centimetre should not be used in any calculation and it should never be written down ''.
The logic of using the millimetre in this context was that the metric system had been so designed that there would exist a multiple or submultiple for every use, so decimal fractions would not have to be used. Since the tolerances on building components and building practice would rarely be less than one millimetre, the millimetre became the sub-unit most appropriate to this industry. Although for the most part they are ignored in everyday life, it was not possible for education in the logical construction of the metric system to ignore the metric prefixes deci, centi, hecto and deka, since the derivation of the litre is dependent upon the knowledge that it derives from the volume of a cubic decimetre, i.e. 1000 cubic centimetres. However, in Australia, the prefix "centi '' has been limited to use with the metre (mainly by the general public not associated with the building industry) -- unlike in Europe, where the centilitre may often be encountered. Metrication is mostly complete. Road signs solely use metric measurements, as do the speedometers and odometers in motor vehicles produced after 1974; there was no requirement for pre-1974 vehicles to have their speedometers and odometers converted to metric, so any remaining vehicles of this vintage will show miles. Privately imported vehicles, such as classic cars, also do not need to be so converted. The sale of oil and petrol is by the litre. However, vehicle tyre pressures are still commonly talked about in pounds per square inch. Fruits and vegetables are advertised, sold and weighed by the kilogram, and groceries are packed and labelled in metric measures. Schooling is wholly metric. Newspaper reports are mostly in metric terms. In some cases old imperial standards were replaced with rounded metric values, as with horse racing or the size of beer glasses (rounded to the nearest 5 mL). The pre-metric names of beer glass sizes, including the pint, have been retained (although in South Australia the "pint '' of beer is not an imperial pint, as it is elsewhere in Australia). Dressed timber is often sold in lengths such as 1.8, 2.1, 2.4, 3.0 and 3.6 metres, each a multiple of 300 mm, thereby approximating foot - length increments, while pipes and conduits may be specified as having diameters of 12, 19, 25 and 32 etc. mm (based on "soft '' conversions of 1⁄2, 3⁄4, 1 and 1 1⁄4 in). In some cases goods manufactured to pre-metric standards are available, such as some bolts, nuts, screws and pipe threads, and there are some instances where pre-metric measures may still be used: While Imperial units of measurement may sometimes be specified instead of SI units (usually where the product originates from, or is intended for, an American market), apart from area measurements the use of any measurement except SI units is not "legal for trade '' under current Australian legislation. Examples where non-SI units are (sometimes) specified are: The cultural transmission of British and American English in Australia has also been noted to be a cause for residual use of Imperial units of measure.
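The conversions mentioned above are easy to check. The short sketch below is illustrative only (plain Python written for this summary, not drawn from any official conversion table); it reproduces the Melbourne Cup distance change and the rounded metric equivalents behind the "soft '' pipe and timber sizes.

# Worked check of the imperial-to-metric conversions mentioned above.
# Constants are exact legal definitions; the "soft" sizes are the rounded
# metric values quoted in the text, not outputs of any official table.
MM_PER_INCH = 25.4        # exact
M_PER_MILE = 1609.344     # exact: 5,280 ft x 0.3048 m

# Melbourne Cup: 2 miles shortened to a round 3,200 m
old_distance_m = 2 * M_PER_MILE               # 3218.688 m
reduction_m = old_distance_m - 3200           # 18.688 m
print(f"reduction: {reduction_m:.3f} m ({reduction_m / old_distance_m:.2%})")

# Pipe and conduit sizes: inch diameters and their "soft" metric labels
for inches, soft_mm in [(0.5, 12), (0.75, 19), (1.0, 25), (1.25, 32)]:
    print(f"{inches} in = {inches * MM_PER_INCH:.2f} mm, specified as {soft_mm} mm")

# Dressed timber: multiples of 300 mm roughly track whole feet (1 ft = 304.8 mm)
for length_m in (1.8, 2.1, 2.4, 3.0, 3.6):
    print(f"{length_m} m = {length_m * 1000 / 304.8:.2f} ft")

Running it gives a reduction of 18.688 m (about 0.58 %) for the Cup and shows each timber length sitting close to a whole number of feet.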
who does the family medical leave act cover
Family and Medical Leave Act of 1993 - wikipedia The Family and Medical Leave Act of 1993 (FMLA) is a United States labor law requiring covered employers to provide employees with job - protected and unpaid leave for qualified medical and family reasons. These include pregnancy, adoption, foster care placement of a child, personal or family illness, or family military leave. The FMLA is administered by the Wage and Hour Division of the United States Department of Labor. The FMLA was intended "to balance the demands of the workplace with the needs of families. '' The Act allows eligible employees to take up to 12 work weeks of unpaid leave during any 12 - month period to attend to the serious health condition of the employee, parent, spouse or child, or for pregnancy or care of a newborn child, or for adoption or foster care of a child. In order to be eligible for FMLA leave, an employee must have worked for the employer for at least 12 months, have worked at least 1,250 hours over the past 12 months, and work at a location where the company employs 50 or more employees within 75 miles. The FMLA covers both public - and private - sector employees, but certain categories of employees are excluded, including elected officials and their personal staff members. The bill was a major part of President Bill Clinton 's agenda in his first term. Rapid growth in the workforce, including a large number of women joining it, suggested the need for a federal regulation that would support workers who wished to raise a family and / or required time off for illness - related situations. President Clinton signed the bill into law on February 5, 1993 (Pub. L. 103 -- 3; 29 U.S.C. sec. 2601; 29 CFR 825) to take effect on August 5, 1993. The United States Congress passed the Act with the understanding that "it is important for the development of children and the family unit that fathers and mothers be able to participate in early childrearing... (and) the lack of employment policies to accommodate working parents can force individuals to choose between job security and parenting ''. It also stressed the Act was intended to provide leave protection for individuals "in a manner that accommodates the legitimate interests of employers ''. The Family and Medical Leave Act of 1993 generally applies to employers that employed 50 or more employees in 20 or more workweeks of the current or preceding year. Employees must have worked for the employer for over 12 months and at least 1,250 hours in the last year (around 25 hours a week). However, employees are not covered if they work at a worksite "at which such employer employs less than 50 employees if the total number of employees employed by that employer within 75 miles of that worksite is less than 50. '' A worksite includes a public agency, including schools and state, local, and federal employers. The 50 - employee threshold does not apply to public agency employees and local educational agencies. There are special hours rules for certain airline employees. Employees must give notice of 30 days to employers if birth or adoption is "foreseeable '', and for serious health conditions if practicable. Treatments should be arranged "so as not to disrupt unduly the operations of the employer '' according to medical advice. Along with the 30 - day notice, there are other requirements to meet when seeking FMLA rights. An employee taking FMLA leave for the first time must formally claim it under the Family and Medical Leave Act, and an employee taking FMLA leave again later must follow the same process.
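As a rough illustration of how the three eligibility thresholds above combine, the sketch below encodes them as a single check. It is a simplification for illustration only: the function name and inputs are hypothetical, and the real rules (joint employment, airline flight crews, public agencies and so on) are more involved.

# Simplified illustration of the basic FMLA eligibility tests described above.
# Hypothetical helper for illustration; not legal logic.
def is_fmla_eligible(months_with_employer, hours_worked_last_12_months, employees_within_75_miles):
    """Return True only if all three thresholds from the text are met."""
    return (months_with_employer >= 12
            and hours_worked_last_12_months >= 1250
            and employees_within_75_miles >= 50)

# Example: 18 months of service at roughly 28 hours a week, 120 co-workers nearby
print(is_fmla_eligible(18, 28 * 52, 120))   # True
print(is_fmla_eligible(10, 28 * 52, 120))   # False -- under 12 months of service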
Leave may also require certification. An employer may require an employee to provide certification of the condition that causes the absence as proof that the leave is justified, and may impose further requirements in order to certify the leave, such as obtaining multiple medical opinions. All of these prerequisites are at the employer 's expense. There are also certain rules that may apply to those who work at local education agencies. Employees can take up to 12 weeks of unpaid leave for childbirth, adoption, care of a close relative in poor health, or their own poor health. In full, the purposes for leave are: Child care leave should be taken in one lump, unless an employer agrees otherwise. If a father and mother have the same employer, they must share their leave, in effect halving each person 's rights, if the employer so chooses. Employers must provide benefits during the unpaid leave. Under § 2652 (b) states are empowered to provide "greater family or medical leave rights ''. Since 2008, the US Department of Labor has allowed the spouse, child, or parent of an active - duty military member who is deployed overseas for 12 or more months to take up to 12 weeks of leave. Also, a military caregiver provision was added that allows a caregiver to take up to 26 weeks of leave in order to actively care for a military member who requires medical attention for acute or ongoing conditions. Under § 2612 (2) (A) an employer can make an employee substitute the right to 12 unpaid weeks of leave for "accrued paid vacation leave, personal leave or family leave '' in an employer 's personnel policy. Originally the Department of Labor had a penalty to make employers notify employees that this might happen. However, five justices of the US Supreme Court in Ragsdale v Wolverine World Wide, Inc held that the statute precluded the right of the Department of Labor to do so. Four dissenting justices would have held that nothing prevented the rule, and that it was the Department of Labor 's job to enforce the law. After unpaid leave, an employee generally has the right to return to his or her job, except for employees who are among the top 10 % of highest paid, where the employer can argue that refusal "is necessary to prevent substantial and grievous economic injury to the operations of the employer. '' In full, the rights during and after unpaid leave are to: "Highly compensated employees '' have limited rights to return to their jobs. They are defined as "a salaried eligible employee who is among the highest paid 10 percent of the employees employed by the employer within 75 miles of the facility at which the employee is employed ''. Their employers are not required to restore them to their original position (or an equivalent position with equivalent pay and benefits, as is guaranteed to other employees) if the employer determines that denying the employee their position is "necessary to prevent substantial and grievous economic injury to the operations of the employer '' and the employer provides the worker with notice of this decision, though no time frame for providing this notice is established. Employees or the Secretary of Labor can bring enforcement actions, but there is no right to a jury for reinstatement claims.
Employees can seek damages for lost wages and benefits, or the cost of child care, plus an equal amount of liquidated damages unless an employer can show it acted in good faith and reasonable cause to believe it was not breaking the law. There is a two - year limit on bringing claims, or three years for willful violations. The federal FMLA does not apply to: Some states have enacted laws that mandate additional family and medical leave for workers in a variety of ways. By 2016 four states had laws for paid family leave: California since 2002, New Jersey since 2008, Rhode Island since 2013, and New York since 2016. Washington state passed a paid family and medical leave law in 2007, but the law has not taken effect due to a lack of funding mechanism. The federal FMLA only applies to employers with 50 or more employees, within 75 miles. Some states have enacted their own FMLAs that have a lower threshold for employer coverage: The federal FMLA only applies to immediate family -- parent, spouse, and child. The 2008 amendments to the FMLA for military family members extend the FMLA 's protection to next of kin and to adult children. The Department of Labor on June 22, 2010 clarified the definition of "son and daughter '' under the FMLA "to ensure that an employee who assumes the role of caring for a child receives parental rights to family leave regardless of the legal or biological relationship '' and specifying that "an employee who intends to share in the parenting of a child with his or her same sex partner will be able to exercise the right to FMLA leave to bond with that child. '' In February 2015, the Department of Labor issued its final rule amending the definition of spouse under the FMLA in response to the decision in United States v. Windsor, effective March 27, 2015. The revised definition of "spouse '' extends FMLA leave rights and job protections to eligible employees in a same - sex marriage or a common - law marriage entered into in a state where those statuses are legally recognized, regardless of the state in which the employee works or resides. Even if an employee works where same - sex or common law marriage is not recognized, that employee 's spouse triggers FMLA coverage if the employee married in a state that recognized same - sex marriage or common law marriage. Some states had already expanded the definition of family in their own FMLAs: FMLA leave can be used for a worker 's serious health condition, the serious health condition of a family member, or upon the arrival of a new child. State FMLA laws and the new military family provisions of the FMLA have broadened these categories: Several states have passed FMLA - type statutes to give parents unpaid leave for other related purposes, including: In 2003, Han and Waldfogel found that "only about 60 % of private sector workers are covered '' due to the clause stipulating a minimum number of employees, and once the clause stipulating a minimum number of hours worked is added, only 46 % of private sector workers are eligible for leave under the FMLA. In June 2007, the Department of Labor estimated that of 141.7 million workers in the United States, 94.4 million worked at FMLA - covered worksites, and 76.1 million were eligible for FMLA leave. Only eight to 17.1 percent of covered, eligible workers (or between 6.1 million and 13.0 million workers) took FMLA leave in 2005. 
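The Department of Labor figures quoted above are mutually consistent; a quick check (my arithmetic, not part of the DOL report) ties the percentage range to the quoted head counts.

# Back-of-the-envelope check of the 2007 DOL estimates quoted above.
eligible_workers = 76.1e6          # workers eligible for FMLA leave
low_rate, high_rate = 0.08, 0.171  # share who took FMLA leave in 2005
print(f"{eligible_workers * low_rate / 1e6:.1f} million")   # ~6.1 million
print(f"{eligible_workers * high_rate / 1e6:.1f} million")  # ~13.0 million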
The 2008 National Survey of Employers found no statistically significant difference between the proportion of small employers (79 %) and large employers (82 %) that offer full FMLA coverage. Although much of the research has been conducted on populations in other countries, Berger et al. found that children in the United States whose mothers return to work within the first 3 months after giving birth are less likely to be breastfed, have all of their immunizations up to date (by 18 months), and receive all of their regular medical checkups; they are also more likely to exhibit behavioral problems by four years of age. Chatterji and Markowitz also found an association between longer lengths of maternity leave and a lower incidence of depression among mothers. In addition to the limited rights to leave, there is no right to free child care or day care. This has encouraged several proposals to create a public system of free child care, or for the government to subsidize parents ' costs. Critics of the act have suggested that by mandating various forms of leave that are used more often by female than male employees, the Act, like the Pregnancy Discrimination Act of 1978, makes women more expensive to employ than men. They argue that employers will engage in subtle discrimination against women in the hiring process, discrimination which is much harder to detect than pregnancy discrimination against those already hired. Throughout history, gender discrimination towards women was common; certain laws restricted which positions a woman could hold and how many hours she could work. Supporters counter that the act, in contrast to the Pregnancy Discrimination Act of 1978, is aimed at both women and men, and is part of an overall strategy to encourage both men and women to take family - related leave. However, this is based on the assumption that men will take advantage of the opportunity of unpaid leave at rates comparable to women. According to Grossman, there was no basis for this assumption at the inception of the legislation, and no evidence has since been found to support it. Therefore, the employer incentive to prefer male employees is preserved despite the equal opportunity for both sexes to take leave. Moreover, the FMLA is much less comprehensive than Western European leave policies. Namely, the United States is the only industrialized country without paid leave for parents. Comparisons with other industrialized countries illustrate the lack of provisions offered in the United States: all Western European nations have paid maternity leave and over half have paid paternity and sick - child - care leave, while the United States has no paid leave. Additionally, workplace fairness has been questioned under the Act. For instance, any woman - specific benefits provided by the legislation were considered special treatment and thus unacceptable, ignoring the idea that women may in reality bear a greater share of the burden of caregiving. In retort, supporters may argue that legislation explicitly recognizing women 's greater role in child care would reinforce that stereotype. The success of the policy 's implementation is also questioned, because it is unclear whether the benefits are actually reaching those who need them.
For instance, since the leave offered is unpaid, many eligible employees can not take time off because they can not afford to do so. According to Pyle and Pelletier, eligible workers may not even know about the policy and the benefits allotted to them. Under the law, women are protected from sex discrimination in the workplace, but a stigma persists that questions whether women are as skilled as their male co-workers, ultimately testing the federal protection of rights in the work environment. As with any other federal regulation, an employer is strictly prohibited from discriminating against an employee (especially one who is using their FMLA rights) and from withholding accurate information that all employees should be able to access. Vicki Yandle, a receptionist who was fired after asking for a few weeks of time off to care for a daughter with cancer, was on stage with President Clinton when the law was signed.
who is jar jar binks in star wars
Jar Jar Binks - wikipedia Jar Jar Binks is a fictional character from the Star Wars saga created by George Lucas. A major character in Star Wars: Episode I -- The Phantom Menace, he also has a smaller role in Episode II: Attack of the Clones, and a one - line cameo in Episode III: Revenge of the Sith, as well as a role in the television series Star Wars: The Clone Wars. The first lead computer generated character of the franchise, he has been portrayed by Ahmed Best in most of his appearances. Jar Jar 's primary role in Episode I was to provide comic relief for the audience. Upon the movie 's release, he was met with an overwhelmingly negative reception from both critics and audiences, and is today considered one of the most hated characters in not just Star Wars, but the history of film overall. George Lucas was inspired to develop Jar Jar based on the Disney character Goofy. Singer Michael Jackson was originally considered for the role, but he wanted to portray the character using prosthetics while Lucas wanted him to be all CGI. Ahmed Best, who would end up playing the character, would later hypothesize that Lucas might have felt uncomfortable with the thought of the singer 's casting overshadowing the actual movie; Best was chosen based on his work in the production of Stomp as Lucas wanted someone athletic for the role. During his auditions he performed several Martial Arts moves and flips, which was a contrast to how Lucas pictured the character, which according to Best was more in line with comedic silent actors such as Buster Keaton. Best would later remark that after Lucas walked out of the audition he felt he had failed it. Jar Jar Binks first appears in Star Wars: Episode I -- The Phantom Menace as a bumbling, foolish Gungan from the planet Naboo. He is nearly killed by a Federation transport, only to be saved at the last minute by Jedi Knight Qui - Gon Jinn (Liam Neeson). Qui - Gon and his padawan, Obi - Wan Kenobi (Ewan McGregor), persuade Jar Jar 's tribe to release him to their custody as a guide. He later goes with the Jedi and Padmé Amidala (Natalie Portman) to the planet Tatooine, where he meets and befriends Anakin Skywalker (Jake Lloyd). Jar Jar later appears in the film 's climactic battle scene, where he leads his fellow Gungans, as a general in the Gungan army, in defeating the Trade Federation. After the battle, he appears at the funeral of Qui - Gon Jinn and in the ending parade with his fellow Gungans. Jar Jar 's role in Attack of the Clones is much smaller, but his actions are significant. Ten years after helping to save his planet, he is a delegate to the Galactic Senate and as such, plays a role in bringing his old friends, Obi - Wan and Anakin (Hayden Christensen) back to Coruscant, where he greets them with enthusiasm. Later, on the behalf of the Naboo, he gives a speech to the assembled Senate in favor of granting Chancellor Palpatine (Ian McDiarmid) vast emergency powers. These are granted, giving Palpatine the power he needs to subsequently overthrow the senate and bring the galaxy into the dictatorial control of the Sith 's Galactic Empire. Jar Jar appears in only a few scenes in Revenge of the Sith, and has no dialogue (besides a brief "' scuse me '' at one point). He was originally given some dialogue in the beginning, but this was cut. He is most prominently featured in Padmé Amidala 's funeral procession at the end of the film, marching sadly behind her coffin alongside Boss Nass. 
Jar Jar Binks is a supporting character in the animated series Star Wars: The Clone Wars, once again voiced by Best, although BJ Hughes voiced the character in a handful of season one episodes. In this series, he is a Senate representative who sometimes accompanies the main characters -- Anakin, Ahsoka, Obi - Wan, and Padmé -- on their adventures. He and master Mace Windu are the two main characters of the two - part episode "The Disappeared '' in which they had to search for missing elders and rescue a queen, who was Jar Jar 's past love interest. Chuck Wendig 's 2017 novel Star Wars: Aftermath: Empire 's End, set after the events of Return of the Jedi, finds Binks as a street performer who entertains refugee children but is loathed by adults who blame him for his part in the rise of the Empire. Chris Taylor of Mashable wrote that the situation reflects real life in that adults disliked Jar Jar in the prequel films, but children were entertained by him. In an interview, director J.J. Abrams suggested that Jar Jar 's death might be referenced in Star Wars: The Force Awakens, but this did not happen. With the 2012 acquisition of Lucasfilm by The Walt Disney Company, most of the licensed Star Wars novels and comics produced since the originating 1977 film Star Wars were rebranded as Star Wars Legends and declared non-canon to the franchise in April 2014. In the game Star Wars: The Force Unleashed, Jar Jar is shown to have been frozen in carbonite by Darth Vader and kept in the Sith 's lair. Binks is a Lego mini-figure in the Lego Star Wars video games, and appears as an Angry Bird with a hook move called "Jar Jar Wings '' in Angry Birds Star Wars II. Ahmed Best was signed on to portray Binks in the show Star Wars Detours. Even before the release of The Phantom Menace, Jar Jar Binks became the subject of a great deal of media and popular attention. After the film 's release Binks became symbolic of what many reviewers such as Brent Staples (The New York Times), David Edelstein (Slate), and Eric Harrison (Los Angeles Times) considered to be creative flaws of the film. The character was widely rejected and often ridiculed by people who felt that Jar Jar was included in the film solely to appeal to children. Bruce Handy of Vanity Fair wrote that "Jar Jar has come to symbolize what many fans see as the faults of the prequel trilogy: characters no one much cares about; a sense of humor geared toward the youngest conceivable audience members; an over-reliance on computer graphics; and story lines devoted to the kinds of convoluted political machinations which would n't have been out of place in adaptations of I, Claudius or The Rise and Fall of the Third Reich, but which fit less snugly in films with characters like Jar Jar Binks. '' One fan, Mike J. Nichols, created and distributed, free of charge, a modified version of the film, entitled The Phantom Edit, which cut out several scenes featuring what Nichols dubbed ' Jar Jar antics. ' The character was also lampooned on an episode of the television show South Park entitled "Jakovasaurs '', in The Fairly OddParents (Episode: "Abra - Catastrophe! ''), The Simpsons (Episode: "Co-Dependent's Day ''), as well as the parody Star Wars episodes of Robot Chicken, in which Best reprised the role in voice - over form. Along with film critics, many have also accused the film 's creators of excessive commercialization directed at young children (a criticism first leveled with the introduction of the Ewoks in Return of the Jedi). 
Star Wars creator George Lucas stated that he feels there is a section of the fanbase who get upset with aspects of Star Wars because "the movies are for children but they do n't want to admit that... There is a small group of fans that do not like comic sidekicks. They want the films to be tough like The Terminator, and they get very upset and opinionated about anything that has anything to do with being childlike. '' Joe Morgenstern of The Wall Street Journal described the character as a "Rastafarian Stepin Fetchit on platform hoofs, crossed annoyingly with Butterfly McQueen. '' Patricia J. Williams suggested that many aspects of Jar Jar 's character are highly reminiscent of the archetypes portrayed in blackface minstrelsy, while others have suggested the character is a "laid - back clown character '' representing a black Caribbean stereotype. George Lucas has denied any racist implications. Ahmed Best also rejected the allegations, saying that "Jar Jar has nothing to do with the Caribbean ''. In late October 2015, a Reddit user by the name of "Lumpawarroo '' published a theory speculating that Binks was originally written as a major antagonist of the series, as Darth Jar Jar, and a prominent collaborator with Palpatine, before being redacted from the villain 's role due to the character 's initial (and ongoing) negative reception. The post quickly became viral and received significant media coverage internationally by independent bloggers and major news outlets like The Guardian, The Washington Post, and The New York Times that included analysis of his actions in The Phantom Menace. Ahmed Best, who portrayed Jar Jar Binks in motion capture and voice, tweeted his thoughts on how it "feels good '' when the truth comes out shortly after the theory gained widespread popularity. Leading up to the release of Star Wars: The Force Awakens in December 2015, numerous sources have denied elements of the theory. Kathleen Kennedy of Lucasfilm explicitly stated that Jar Jar Binks would not make an appearance in the upcoming movie.
how many goals did chelsea concede last season
2016 -- 17 Chelsea F.C. Season - wikipedia The 2016 -- 17 season was Chelsea 's 103rd competitive season, 28th consecutive season in the top flight of English football, 25th consecutive season in the Premier League, and 111th year in existence as a football club. They entered this season looking to rebound from a disappointing 2015 -- 16 campaign, when they finished 10th in the table. Chelsea also participated in the FA Cup and League Cup, but they were not participating in any UEFA competition for the first time since the 1996 -- 97 season. The season covers the period from 1 July 2016 to 30 June 2017. Chelsea won their fifth Premier League title with a 1 -- 0 win away to West Bromwich Albion on 12 May. Chelsea lost the FA Cup Final to Arsenal after a 2 -- 1 loss on 27 May. This season was the last for John Terry, who announced that he would leave when his contract expired at the end of the season, after Chelsea 's final game. The season saw Chelsea equal the Premier League records for consecutive wins in a season (13) and home and away wins against different sides (12). They also managed to break the record for number of wins in a season (30), as well as record the second - highest points tally in Premier League history (93). It was announced that, on the opening day, Chelsea would kick off the season at home against West Ham United. On 9 June, Vitesse signed an extension on Nathan 's loan and then two weeks later also signed an extension on Lewis Baker 's loan. On 13 June, Chelsea announced it had released Marco Amelia and Kevin Wright, and also confirmed that loanees Radamel Falcao and Alexandre Pato would be returning to their respective teams. After Stipe Perica spent a season - and - a-half on loan at Udinese, the club activated a clause in his contract to sign him permanently. On 22 June, Charly Musonda 's loan at Real Betis was extended for the 2016 -- 17 campaign. On 27 June, Chelsea youngster Kyle Scott joined Dutch club Willem II on trial after handing in multiple transfer requests in April 2016. On 29 June, Nathan Aké joined AFC Bournemouth on loan after a successful loan season with Watford in 2015 -- 16. In June, Chelsea submitted a total of three bids for Roma 's Radja Nainggolan, with the third reportedly valued at € 40 million; the player ultimately decided to stay after receiving an improved contract from Roma the following month. On 1 July, it was announced that Pedro would switch to the number 11 shirt, recently vacated by the loan expiration of Alexandre Pato. On 3 July, Michy Batshuayi signed a five - year deal at Chelsea after an accepted bid of € 40 million (£ 33.2 million). Batshuayi became the first signing by new Chelsea manager Antonio Conte. After being linked with multiple teams, on 6 July Jérémie Boga joined La Liga side Granada on a season - long loan. After months of speculation, promising right - back Ola Aina signed a new four - year contract. Although Tika Musonda was on the release list, Chelsea opted to give him a new one - year contract. On 11 July, Chelsea under - 21 assistant manager Andy Myers joined Vitesse on a one - year deal as Henk Fraser 's assistant manager. With Myers joining the Dutch side, Ian Howell was promoted to be the new U-21 assistant manager. On 12 July, Players ' Player of the Year Willian signed a new four - year contract. On 13 July, Tomáš Kalas returned to the Championship, joining Fulham on a season - long loan. After promotion to the first - team in the previous season, Kasey Palmer joined Huddersfield Town on 15 July.
On 20 July, Kiwomya joined League Two side Crewe Alexandra on loan until 9 January 2017. On 22 July, it was announced that Matej Delač would join Belgian side Mouscron - Péruwelz on a season - long loan. John Swift was offered a new contract in June, but decided to turn it down to sign with Championship side Reading on 14 July. On 16 July, N'Golo Kanté signed a five - year contract with Chelsea after a £ 30 million move from Leicester City, becoming Conte 's second signing. Chelsea lost its first pre-season match, against Rapid Wien, which ended in a 2 -- 0 defeat. In the following match of its Austrian tour, Chelsea won 3 -- 0 against Wolfsberger AC, with youngsters Bertrand Traoré, Ruben Loftus - Cheek and Nathaniel Chalobah each scoring a goal. The following day, Chelsea had a closed - door friendly with local team Atus Ferlach, ending its Austrian tour with an 8 -- 0 win over the champions of the Austrian fourth - tier Kärntner Liga. On 28 July, Chelsea started its tour of the United States with a 1 -- 0 victory over Premier League rival Liverpool thanks to an early goal from Gary Cahill. On 30 July, Chelsea set a record during the 3 -- 2 loss against Real Madrid, with a record attendance of 105,826. Youngsters Fikayo Tomori and Mukhtar Ali both signed new long - term contracts. On 2 August, Baba Rahman returned to the Bundesliga on a season - long loan with Schalke 04 after failing to impress Conte during the pre-season. Although Roma announced the signing of Mohamed Salah back in October 2015, Chelsea finalised the move on 3 August for an additional € 15 million. On 5 August, Abraham signed for Championship side Bristol City on a season - long loan with no recall clause, and Papy Djilobodji joined Sunderland for a fee reported to be in the region of £ 8 million. On 6 August, Houghton joined Doncaster Rovers on loan until 3 January 2017. On 3 August, in its U.S. tour, Chelsea defeated Milan 3 -- 1. Chelsea concluded its pre-season campaign with a 4 -- 2 victory over Werder Bremen. On 12 August, Bertrand Traoré signed a new three - year contract. He then joined Ajax on loan for the season while Danilo Pantić joined Excelsior on loan. On 14 August 2016, Michael Hector joined German side Eintracht Frankfurt on a season - long loan. On 15 August 2016, goalkeeper Jamal Blackman joined League Two side Wycombe Wanderers on loan until 3 January 2017, while Isaiah Brown joined Rotherham United on loan until the end of the 2016 -- 17 season. On 23 August, Marko Marin joined Greek side Olympiacos on a three - year deal for a fee thought to be in the region of £ 3 million. On 25 August 2016, Eduardo joined Chelsea on a free transfer, signing a one - year deal. On 27 August, Mario Pašalić joined Milan on a season - long loan. On 30 August, strikers Patrick Bamford and Loïc Rémy joined Burnley and Crystal Palace respectively on season - long loans. Later in the day, Kenedy was also confirmed to have left on a season - long loan deal, to Watford. Chelsea started its Premier League season with a 2 -- 1 win over London rivals West Ham United, with goals scored by Eden Hazard and Diego Costa. In its second league game, Chelsea left it late yet again, scoring two late goals in the second half to earn their first away win of the season, over Watford. Chelsea continued its winning streak by beating Bristol Rovers to advance to the third round of the EFL Cup.
On 27 August, in the 3 -- 0 home victory over Burnley, goalkeeper Thibaut Courtois kept the first clean sheet of the season and broke a run of 13 home Premier League games without a clean sheet, with their last being in a 1 -- 0 win over Norwich City in November 2015. In the month of August, Chelsea earned all nine available points and was in second place of the Premier League. On the last day of the transfer window, Chelsea completed a total of thirteen transfers, with 11 loan deals and two additions. Youngsters, Dion Conroy and Nathan Baxter, both joined up with semi-professional clubs, while Jake Clarke - Salter and Charlie Colkett both joined League One side Bristol Rovers. Lucas Piazon joined Tomáš Kalas at Fulham until 15 January 2017 while Christian Atsu joined Newcastle United on a season loan. Kenneth Omeruo returned to the Turkish league, joining newly promoted side Alanyaspor after signing a new contract until 2019. Cristián Cuevas returned to Sint - Truiden for a second season while Islam Feruz joined fellow loanee Matej Delač at Mouscron - Péruwelz. Matt Miazga joined up with the Dutch side Vitesse after his move to Espanyol fell through due to paperwork. Juan Cuadrado would return on loan to Juventus for three seasons which will see Juventus pay a loan fee of € 5 million a season, and also contain a buy - out clause € 25 million with add - on clauses. Marcos Alonso returned to the Premier League for a fee believed to be £ 23 million from Fiorentina, signing a five - year contract. Chelsea 's last summer transfer deal was the £ 30 million signing of David Luiz, who returned to the London side from Paris Saint - Germain after joining PSG from Chelsea in 2014. His return was completed after he insisted on the move and stated that it was a "good deal '' for the French champions after the club had initially refused the offer. Position at the end of August After the international break, Chelsea faced Swansea City at the Liberty Stadium in Wales on 11 September. The match ended in a 2 -- 2 draw, with both of Chelsea 's goals coming from Diego Costa. The draw meant that it was the first game of the season in which Chelsea dropped points. In the closing minutes, John Terry suffered an ankle injury and left the pitch on crutches; scans later showed that his injury was to rule him out for approximately ten days. On 16 September, Chelsea suffered their first defeat of the season at home, as Liverpool won 2 -- 1 at Stamford Bridge. David Luiz made his second Chelsea debut following his deadline day move from PSG. Two Liverpool goals in the first half, from Dejan Lovren 's close range finish and Jordan Henderson 's thunderous 25 - yard strike, put the game out of reach for the hosts, who managed to peg one goal back through Diego Costa. On 20 September, Chelsea beat Leicester City 4 -- 2 after extra-time to advance into the fourth round of the EFL Cup. In the match, youngster Nathaniel Chalobah made his first - team debut and Gary Cahill served as captain for the first time. Disappointment followed on 24 September in the form of another league defeat, this time a 3 -- 0 defeat against Arsenal at the Emirates Stadium. Alexis Sánchez pounced in the 11th minute after a horrific defensive error from Gary Cahill let him roam free on goal, followed three minutes later by another goal from Theo Walcott. Mesut Özil then exposed Chelsea on the counter-attack five minutes before the break, putting the game beyond Chelsea 's reach and sending them further down the league table. 
The win was also Arsenal 's first against Chelsea in the league since October 2011. In the month of September, Chelsea earned only a single point out of nine available points and were in eighth place in the Premier League. Position at the end of September After suffering back - to - back Premier League losses to top - four rivals Liverpool and Arsenal, Antonio Conte switched to and debuted a 3 -- 4 -- 3 formation against Hull City on the 1 October which earned him a 2 -- 0 victory, thanks to a goal apiece from Willian and Diego Costa. The new formation featured a back - three pairing of Gary Cahill, David Luiz and César Azpilicueta, with two wing - backs providing cover in the form of Marcos Alonso on the left - hand side and Victor Moses on the right - hand side. On 15 October, Chelsea earned a 3 -- 0 home victory over reigning Premier League champions Leicester. The hosts put in a domineering performance against the champions, with Diego Costa opening the scoring for Chelsea in the seventh minute. Two further goals followed from Eden Hazard and Victor Moses to inflict Leicester 's fourth consecutive away league defeat. Leicester could have potentially pegged a goal back following David Luiz hitting his own goalpost as a result of himself attempting to clear a Leicester corner, however it would merely have been a consolation as Chelsea comfortably claimed another three points. On 23 October, Chelsea stunned Manchester United and former manager José Mourinho at Stamford Bridge with a thumping 4 -- 0 win. Chelsea went into the lead within 30 seconds of the match, thanks to Spanish winger Pedro capitalizing on poor defending with a goal. Gary Cahill smashed in the second after United allowed Eden Hazard 's corner to bounce into their box. United offered little sign of making a comeback, falling further behind when Hazard drilled in a precise 15 - yard strike. The game was well and truly over with a rare 70th - minute goal from N'Golo Kanté compounding Mourinho 's misery on his return to Stamford Bridge. With this win, Chelsea had gone eight league games, winning four and drawing four, without losing against Manchester United, making it their best run against the Red Devils in club history. Chelsea lost their next game, an EFL Cup game, 2 -- 1 against West Ham at the newly renovated Olympic Stadium on 26 October 2016, knocking them out of the competition. The game was marred by crowd disturbances amongst both sets of rival fans, with plastic bottles, coins and seats being thrown across the London Stadium. Prior to the match, there had been nine arrests outside the stadium and 23 banning orders issued by West Ham for disorderly fan behaviour since moving into their new stadium. Chelsea bounced back with a 2 -- 0 win in the Premier League over Southampton at St Mary 's Stadium on 30 October. The win meant Chelsea won all their Premier League matches in the month of October; a run of four wins, scoring 11 goals without conceding any. The last time Chelsea had a four - game winning run was April 2015 and the four consecutive clean sheets were also the first since August 2010 when Chelsea had a run of six consecutive Premier League games without conceding. Position at the end of October On 5 November, Chelsea stunned Everton at Stamford Bridge with a 5 -- 0 win. The hosts scored two goals in quick succession, coming from Eden Hazard and Marcos Alonso in the 19th and 20th minutes of the game. 
Diego Costa added a third goal before half time to seal the game, however Chelsea did not relent with two further goals coming in the second half, one of these being a Pedro goal into an open net. Everton were completely dominated throughout the whole game and penned into their own half, only having one off - target shot in comparison to Chelsea 's 21 shots. With this win, Chelsea had five consecutive league victories, scoring 16 goals and conceding none in their last five games. The win also sent The Blues top of the Premier League table going into the international break. On 18 November, Chelsea 's Eden Hazard and Antonio Conte both won the Premier League Player of the Month and the Premier League Manager of the Month awards respectively for the month of October. On 20 November, Chelsea earned their sixth consecutive league victory, beating Middlesbrough 1 -- 0 at the Riverside Stadium. In the process, Diego Costa became the first player to reach double digits in league goals when he scored his tenth goal of the season. On 26 November, Chelsea ended Tottenham Hotspur 's unbeaten run since the start of the Premier League season, where Chelsea won 2 -- 1. Chelsea conceded their first goal since the 3 -- 0 away defeat to Arsenal in the form of a fantastic long - distance strike from Tottenham 's Christian Eriksen, and were dominated throughout much of the first half, however Chelsea were able to equalize just before half time with a spectacular right - footed curled effort from Pedro. Spurs ' miserable record at Stamford Bridge was extended to 30 games without a win -- dating back to February 1990 -- after Victor Moses scored what proved to be the winner six minutes after the restart. The win ensured that Chelsea would enter the month of December top of the Premier League. Position at the end of November On 3 December, Chelsea handed Manchester City their first home defeat after the Blues came back from a Gary Cahill own goal in the first half with three second half goals to earn a 3 -- 1 victory. The match ended in a wide - scale brawl that occurred as a result of a Sergio Agüero two legged lunge tackle on Chelsea defender David Luiz. Following the brawl, Aguero and Fernandinho were both sent off with straight red cards, Fernandinho being sent off due to his violent conduct against Chelsea midfielder Cesc Fàbregas. Aguero received a four - match ban for his actions, while Fernandinho received a three - match ban. Following the game, the FA charged both clubs involved with failing to control their player 's on - pitch behaviour, with both clubs having until 8 December 2016 to respond to the charges. On 9 December, Chelsea became the first club to collect three Premier League awards in the same month, picking up all the prizes for November: Diego Costa was named Player of the Month after registering two goals and two assists in three November contests; Antonio Conte was named Manager of the Month for the second successive month after guiding the club a perfect three wins out of three matches; and Pedro won Premier League Goal of the Month for November thanks to his curling effort from outside the box in the match against Spurs on 26 November. On 11 December, Chelsea prevailed over West Bromwich Albion with a close 1 -- 0 win, the only goal of the game coming in the 76th minute from Diego Costa, the Spaniard scoring his 12th goal of the season. The win sent Chelsea top of the table again, three points clear of second - placed Arsenal, and gave Chelsea their ninth consecutive league victory. 
On 13 December, manager Antonio Conte confirmed that 20 - year - old Brazilian winger Kenedy had returned to Chelsea from his loan spell at Watford after struggling with injuries and being unable to hold down a first team place while on loan. Kenedy had made just one substitute appearance for Watford during his loan spell, coming on in the 75th minute of Watford 's 2 -- 0 away defeat to Burnley. On 14 December, Chelsea secured their tenth consecutive league victory with a 1 - 0 away win over Sunderland. Cesc Fàbregas scored his first league goal of the season in the 40th minute after an assist from Willian. Eden Hazard missed his first league game of the season after picking up a knock during the win over West Brom. César Azpilicueta made his 200th appearance for the Blues in the match, just one day after signing a three - and - a-half - year contract that will keep him at the club through the 2020 season. On 17 December, Gary Cahill made his 300th appearance for the club as Chelsea narrowly won over Crystal Palace 1 -- 0, away at Selhurst Park, extending Chelsea 's unbeaten streak at Selhurst Park to 26 years, since Palace last defeated Chelsea there in 1990. The win takes Chelsea nine points clear of title chasers Liverpool and Arsenal, both having a game in hand over Chelsea. The win also meant that Chelsea are the third team in Premier League history to reach 500 league wins, after Arsenal and Manchester United. Chelsea also equal a club record with 11 - straight league wins; Chelsea last achieved this feat from April to September 2009. Diego Costa and N'Golo Kanté both accumulated their fifth yellow cards of the season, resulting in themselves not being available for selection in the Boxing Day match against AFC Bournemouth. Diego Costa scored his 13th league goal of the season and his 50th for Chelsea since first signing. Diego Costa 's 50th goal in 97 games for Chelsea meant that he eclipsed Didier Drogba 's record of 50 goals in 112 games. On 22 December, young Chelsea midfielder Charly Musonda made an early return from his loan spell at Real Betis after struggling for fitness and form while on loan. Musonda only made one start throughout his loan spell, having apparently fallen out with former Betis manager Gus Poyet. On 23 December, Chelsea announced the permanent transfer of Oscar to Shanghai SIPG for a club record £ 52,000,000, to be completed within the January transfer window. On 26 December, Chelsea earned their twelfth straight league victory and broke their all - time record of successive league victories with a 3 -- 0 home win over Bournemouth. A curled effort from Spanish winger Pedro, and a penalty from Eden Hazard in the 49th minute effectively sealed the game for the hosts. Chelsea 's third, a stoppage time goal, came in the form of a Bournemouth own goal from defender Steve Cook, this being as a result of a Pedro shot deflecting off the Bournemouth defender and spinning over the goal line. Chelsea put up an encouraging performance in spite of having two of their key players, Diego Costa and N'Golo Kanté, suspended for the game. The win means that Chelsea remain top of the table and six points clear of second - placed Liverpool. On 31 December, Chelsea equalled a top flight record of 13 consecutive wins in a single season with a thrilling 4 -- 2 home victory over Stoke City. 
Goals from Gary Cahill with a headed effort, a second - half brace from Willian to help Chelsea regain the lead on two occasions in the match, and an 85th minute Diego Costa strike sent the Blues nine points clear of second - placed Liverpool going into the New Year, with Liverpool being able to cut the deficit to six points should they earn a victory against fellow title challengers Manchester City. On the same day, Dutch midfielder Marco van Ginkel signed a new contract with the Blues, keeping him at Stamford Bridge until the end of the 2018 -- 19 season, whilst also rejoining his former loan club PSV Eindhoven for the rest of the 2016 -- 17 season. Position at the end of December On 1 January, goalkeeper Jamal Blackman extended his loan spell with League Two club Wycombe Wanderers until the end of the 2016 - 17 season. On 4 January, Tottenham ended Chelsea 's 13 - game winning run by defeating them 2 - 0 at White Hart Lane. A brace from midfielder Dele Alli with goals just before and after half time, prevented Chelsea from writing Premier League history with a fourteenth successive win. However, the result itself did not affect Chelsea 's position in the Premier League, with the Blues remaining in first place and five points clear of second - placed Liverpool following their draw with Sunderland. On 6 January, long - serving midfielder John Obi Mikel completed a move to Chinese Super League club Tianjin TEDA for an undisclosed fee, having played 376 times for the Blues since joining in 2006, winning two Premier League titles, four FA Cups and the 2012 Champions League during his time at Stamford Bridge. Mikel had not featured under new Chelsea boss Antonio Conte all season, with Mikel himself stating that the time was right for "a new challenge ''. Besides, Chelsea recalled young forward Isaiah Brown from his loan spell at Rotherham United, with Huddersfield Town signing him on loan for the remainder of the 2016 - 17 season. He joins fellow Chelsea loanee Kasey Palmer at Huddersfield. On 8 January, Chelsea defeated Peterborough United 4 - 1 at home in the third round of the FA Cup. Goals from Michy Batshuayi, Willian and a brace from Pedro ensured that Chelsea would advance into the fourth round. Chelsea captain John Terry was sent off on his first start for the club since October, but the Blues held on for a convincing victory over Posh. On the same day, Chelsea exercised a recall clause in Dutch defender Nathan Ake 's season - long loan deal at Premier League club Bournemouth, following some impressive performances for the south coast club. On 13 January, Antonio Conte won the December Premier League Manager of the Month. As a result, he became the first manager in history to win the award in three successive months. On 14 January, Chelsea returned to winning ways in the league with a 3 - 0 victory over last season 's Premier League champions Leicester City at the King Power Stadium. Marcos Alonso opened the scoring early on with Eden Hazard providing the assist, later scoring another to put the Blues 2 - 0 up shortly after half time. A third Chelsea goal from Pedro in the 71st minute secured up the three points for the away team, sending Chelsea seven points clear of second - placed Tottenham Hotspur at the summit of the Premier League. The win and three points also meant that Chelsea had surpassed their points total from the 2015 -- 16 Premier League season, reaching 52 points compared to last season 's 50 points. 
On 17 January, Brazilian midfielder Lucas Piazon 's loan at Fulham was extended until the end of the season. On 18 January, young forward Patrick Bamford rejoined his former loan club Middlesbrough on a permanent basis for a reported £ 6 million. On 22 January, the Blues defeated Hull City 2 -- 0 at home. Diego Costa scored on his 100th appearance for the club in the seventh minute of first - half injury time. The long stoppage was a result of a clash of heads between Gary Cahill and Hull midfielder Ryan Mason. Mason was sent to hospital and it was later confirmed that he had sustained a skull fracture, while Cahill remained on the pitch and secured the victory with a headed goal in the second half. On 28 January, Chelsea defeated Brentford 4 -- 0 at home in the West London derby in the fourth round of the FA Cup; Branislav Ivanović scored his first goal of the season and was later fouled to allow Michy Batshuayi to add a fourth from the penalty spot. Youngsters Fikayo Tomori and Mukhtar Ali joined Brighton & Hove Albion and Vitesse respectively on loans until the end of the season. On 31 January, Chelsea recorded their second draw of the season as they drew against Liverpool at Anfield. David Luiz scored a stunning free kick in the first half on his 100th Premier League appearance. It was also his first goal in his second spell at Chelsea. Georginio Wijnaldum equalised with his head after the break. The game finished 1 -- 1 after Diego Costa 's penalty was saved by Simon Mignolet in the 76th minute. The Blues extended their lead at the top of the Premier League to nine points as the two title contenders Arsenal and Tottenham Hotspur both dropped points on the same night. Position at the end of January On 1 February, Chelsea announced the departure of 32 - year - old Serbian defender Branislav Ivanović. Ivanović joined Russian side Zenit Saint Petersburg on a free transfer after nine years of service, having scored 34 goals in 377 appearances and won two Premier League medals, one Champions League medal, one Europa League medal, three FA Cup medals and one League Cup medal. He is also one of only five foreign players to reach the 300 - game landmark for the Blues. Ivanović missed the 2012 UEFA Champions League Final due to suspension. However, he starred in the 2013 UEFA Europa League Final, scoring in the final minute of stoppage time to clinch a 2 -- 1 win for Chelsea and with it their first Europa League title. He was subsequently named Man of the Match. Ivanović was also outstanding during the title - winning campaign of 2014 -- 15 and played in every minute of the 38 games. The Blues boasted the best defensive record in the league and he was one of six Chelsea players named in the Team of the Season. He ended his Chelsea career with a goal against Brentford in his final game. On 4 February, Chelsea beat Arsenal 3 -- 1 at home. Eden Hazard scored a magnificent solo goal in the eighth minute of the second half. Cesc Fàbregas scored the third goal for the Blues against the team he formerly captained, after an error by ex-Chelsea goalkeeper Petr Čech. On the same day, the Blues announced that on 22 July they would play Arsenal at Beijing National Stadium in preparation for next season. Position at the end of February On 8 March, Chelsea returned to the Olympic Stadium to face West Ham; this time the home side were beaten. On 13 March, a goal from N'Golo Kanté in the FA Cup quarter - finals put holders Manchester United out of the tournament.
On 18 March, Chelsea won at Stoke 2 -- 1, thus emerging from March unbeaten. On 1 April, having taken the lead through Cesc Fàbregas, Chelsea lost 2 -- 1 at home to South London club Crystal Palace, with all the goals being scored in the first eleven minutes. On 5 April, Chelsea returned to winning ways with a 2 -- 1 home win over Manchester City. On 16 April, Manchester United exacted revenge for being eliminated from the FA Cup the previous month with a league victory over Chelsea at Old Trafford. On 22 April, Chelsea won their FA Cup semi-final at the neutral venue of Wembley Stadium, despatching Tottenham 4 -- 2. On 25 April, Chelsea were 4 -- 2 winners over Southampton: Eden Hazard and Gary Cahill netted in the first half and Diego Costa scored a double in the second half; former Blues Oriol Romeu and Ryan Bertrand scored for Saints. On 30 April, Chelsea won at Everton 3 -- 0, featuring an "effort from outside the box '' scored by Pedro, Gary Cahill scoring in his second consecutive game, and an 86th - minute strike from Willian. On 8 May, goals from Diego Costa, Marcos Alonso and Nemanja Matić were enough to relegate visitors Middlesbrough back to the English Football League after just one season in the top flight. On 12 May, Chelsea defeated West Bromwich Albion 1 -- 0 to clinch the Premier League title as they went ten points clear with two games remaining. Michy Batshuayi scored the winning goal in the 82nd minute. On the same day, Pedro was awarded his second - and Chelsea 's third - Goal of the Month award this season for his strike at Goodison Park. On 16 May, a much - changed Chelsea side were 4 -- 3 winners over Watford, with substitute Cesc Fàbregas finding the winner shortly before the away side had a man sent off; the other Blues goalscorers were John Terry, César Azpilicueta, and Michy Batshuayi. The Hertfordshire club gave the champions - elect a guard of honour that day; the Chelsea side included Kenedy, making his Chelsea league début having made one appearance for Watford earlier in the season before his loan deal was cancelled. On 21 May, Chelsea defeated already - relegated Sunderland 5 -- 1 with goals from Willian, Eden Hazard, Pedro and a brace from Michy Batshuayi - his fourth in three matches. It was the last league game for John Terry, who was substituted in the 26th minute to a standing ovation from all the supporters. This marked Chelsea 's 30th league win of the season, the most by any team in a single Premier League season. On 27 May, Chelsea fell behind in the fourth minute of the 2017 FA Cup Final to an Alexis Sánchez goal, and were reduced to ten men when Victor Moses received his second yellow card. However, despite Arsenal 's extra-man advantage, Chelsea equalized through Diego Costa in the 76th minute. The London clubs would stay level for only three minutes before Aaron Ramsey headed in the winner. Source: Chelsea F.C. Summer: £ 118,200,000 Winter: £ 0 Total: £ 118,200,000 Summer: £ 34,400,000 Winter: £ 66,000,000 Total: £ 100,400,000 Summer: £ 83,800,000 Winter: £ 66,000,000 Total: £ 17,800,000 On 13 April 2016, it was announced that Chelsea would visit Austria for two pre-season friendlies against Rapid Wien and Wolfsberger AC. Chelsea would conclude their pre-season campaign facing Bundesliga side Werder Bremen in Germany. On 22 March 2016, it was announced that Chelsea would play Liverpool, Real Madrid and Milan in the 2016 International Champions Cup.
Result: D = Draw; L = Loss; W = Win; P = Postponed. The fixtures for the 2016 -- 17 season were announced on 15 June 2016 at 9am. The goalscorer and clean - sheet lists are sorted by shirt number when totals are equal. Last updated: 27 May 2017. Source: Chelsea F.C.
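As a quick check on the transfer totals above, net spend in each window is just expenditure minus income, so the winter figure is a net gain of £ 66,000,000 and the season total comes to £ 17,800,000. A minimal sketch of that arithmetic, using the figures quoted in the summary (the variable names are my own):

```python
# Sanity check of the 2016-17 transfer totals quoted above:
# net spend per window = expenditure - income.
expenditure = {"summer": 118_200_000, "winter": 0}
income = {"summer": 34_400_000, "winter": 66_000_000}

net = {window: expenditure[window] - income[window] for window in expenditure}
net["total"] = sum(expenditure.values()) - sum(income.values())

print(net)
# {'summer': 83800000, 'winter': -66000000, 'total': 17800000}
# i.e. a summer net spend of £83.8m, a winter net gain of £66m,
# and a season total net spend of £17.8m.
```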
who became the chief minister of independent government set up in theog 1947
Theog - Wikipedia Theog is a town and a municipal committee, as well as a tehsil, in Shimla district in the Indian state of Himachal Pradesh. The first settlements were in 1902. As of the 2001 India census, Theog had a population of 3,754. Males constitute 57 % of the population and females 43 %. Theog has an average literacy rate of 64 %, higher than the national average of 59.5 %: male literacy is 73 %, and female literacy is 56 %. In Theog, 12 % of the population is under 6 years of age. According to the census of 1931, the total population of Theog State was 6,912, comprising 6,800 Hindus, 91 Muslims and 21 Sikhs. Theog is located at 31 ° 07 ′ N 77 ° 21 ′ E (31.12 ° N, 77.35 ° E). It has an average elevation of 1965 metres (6446 feet). It is situated on National Highway NH22 (on the Hindustan - Tibet Road), is 32 km away from Shimla, and is a town of five ' Ghats ' (or ridges): Rahi Ghat, Deori Ghat, Prem Ghat, Janoghat, and Bagaghat. There are numerous villages which come under the jurisdiction of Theog. Theog is well connected to the rest of Himachal Pradesh and the rest of India through National Highway 5 and National Highway 705. Theog is 45 km from Shimla airport and 32 km from Shimla railway station.
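The coordinates above are given in both degrees - minutes and decimal form; the decimal figures follow from dividing the minutes by 60. A small worked check (the helper function is my own, for illustration only):

```python
# Convert Theog's coordinates from degrees and minutes to decimal degrees,
# matching the 31.12° N, 77.35° E values quoted above.
def dm_to_decimal(degrees, minutes):
    """Degrees plus minutes/60, rounded to two decimal places."""
    return round(degrees + minutes / 60.0, 2)

lat = dm_to_decimal(31, 7)   # 31° 07' N
lon = dm_to_decimal(77, 21)  # 77° 21' E
print(lat, lon)              # 31.12 77.35
```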
who become the family wizard in wizards of waverly
Wizards of Waverly Place (season 4) - Wikipedia The fourth and final season of Wizards of Waverly Place aired on Disney Channel from November 12, 2010 to January 6, 2012. The Russo children, Alex (Selena Gomez), Justin (David Henrie), and Max Russo (Jake T. Austin) continue to compete to become the leading wizard in their magical family and begin to make difficult decisions about their futures. Maria Canals Barrera and David DeLuise co-star as their parents and Jennifer Stone co-stars as Alex 's best friend, Harper Finkle. This is the second season of the series to be broadcast in high - definition. Season 4 of the series features a revamped opening sequence with clips from past seasons. The opening sequence takes place in the wizard lair at the Russo home. Footage of Alex (Selena Gomez) and Justin Russo (David Henrie) is shown in the Crystal replay ball from the season premiere "Alex Tells the World ''. Then, a spell book opens and footage of Max Russo (Jake T. Austin) and Harper Finkle (Jennifer Stone) appear. After that, footage of Theresa (Maria Canals Barrera) and Jerry Russo (David DeLuise) are shown in a cauldron. After the footage is played, Alex transports them to Times Square, with Alex waving her wand to reveal the title logo and the name of the creator. The theme song Everything Is Not What It Seems has also been remixed and sung by Selena Gomez for the fourth season of the show. Alex and Justin are sent back to level one in the wizard competition after "exposing '' wizardry in "Alex Tells the World '', leaving Max as the most expected to become the family wizard. In "Alex Gives Up '', Alex quits the competition, but eventually decides to stay in the competition in "Journey to the Center of Mason '' to be with Mason. Before then, Harper tries to get Alex to enjoy life without wizardry. Meanwhile, Justin tries to tutor a class of delinquent wizards. In the episode, "Three Maxes and a Little Lady '', Alex and Justin accidentally turn Max into a little girl, Maxine, portrayed by Bailee Madison. She ends up making Alex jealous when their father prefers Maxine over Alex. Maxine remains until "Back to Max ''. In the episode "Everything 's Rosie for Justin '', Justin falls in love with Rosie, a guardian angel, but at the end of "Dancing with Angels '' she becomes an Angel of Darkness. In "Zeke Finds Out '', the Russos finally reveal to Zeke that they are wizards. In "Wizard of the Year '', Alex is crowned Wizard of the Year for saving the world from the angels of darkness, and is reinstated in the Wizards Competition. In "Justin 's Back In '', Justin is reinstated in the Wizard Competition, after his class of delinquent wizards passes. Meanwhile, Alex 's relationship with Mason is jeopardized again when a beast tamer, Chase Riprock, falls in love with Alex and Mason gets jealous. This results in Alex breaking up with Mason. Alex, Justin and Max then save the world from being destroyed by an asteroid. Later, Alex and Harper move into an apartment building with a secret 13th floor for wizards and other creatures in the wizard world. However, it turns out to be a trap made by Gorog to capture the wizards who live in the same floor to make them join the dark side and rule the wizard world. He is defeated and destroyed by Alex, Justin and Max who were using the Power of Three. During this, Alex and Mason get back together and, after a long time, Justin finally gets back together with Juliet. 
Finally, in the hour - long series finale, Alex, Justin, and Max compete in the family wizard competition. Alex wins and gains full wizardry; Justin becomes a full wizard as well when Professor Crumbs reveals he is retiring as headmaster of WizTech and passes the position to Justin. Jerry also decides to pass down the Waverly Sub Station to Max one day since he is the only child who is not a wizard anymore. The series ends with hugging and Alex saying that they are all happy. Guest stars and recurring cast include: Gregg Sulkin as Mason Greybeck, Ian Abercrombie as Professor Crumbs, David Barrera as Carlos Cucuy, Samantha Boscarino as Lisa Cucuy, Bill Chott as Mr. Laritate, Frank Pacheco as Felix, Bridgit Mendler as Juliet van Heusen, Daniel Samonas as Dean Moriarty, Bailee Madison as Maxine Russo, Shane Harper as Fidel, Dan Benson as Zeke Beakerman, John Rubinstein as Gorog, Andy Kindler as Chancellor Tootietootie, Fred Stoller as Dexter, Leven Rambin as Rosie, China Anne McClain as Tina, Kari Wahlgren as Helen, Cameron Sanders as Nelvis, Josh Sussman as Hugh Normous, Jackie Evancho as Herself, McKaley Miller as Talia, Valente Rodriguez as Muy Macho, and Nick Roux as Chase Riprock. The Russos and Mason finally make it back home, but their wizard powers are still gone. While the family wants to stay quiet, Alex believes that they should take action, so she reveals that they are wizards to reporters and tells them that a group of wizards are being held captive by the government. As she urges them to spread the word, Professor Crumbs appears and tells her that everything that had happened was a test that she failed miserably and orders her to Wizard Court for exposing wizardry to the world. Justin is also ordered to attend because he exposed magic to Agent Lamwood, who was a creation from Professor Crumbs ' mind. The judges rule Alex and Justin guilty, and Professor Crumbs sentences them both to be demoted to level one in the Family Wizard Competition, making them behind Max by two levels. Meanwhile, Max sets up traps in the Sub Station for the "government '', but ends up falling for one of them. Special guest star: Gregg Sulkin as Mason Alex decides to give up the wizard competition to spend more time with Mason and thinks that it does not matter since Max is all the way in the lead, but Chancellor Tootietootie tells Alex that non-wizards and werewolves can not date, because the werewolf always eats the human. Max 's picture on the front of Future Wizards magazine attracts the daughter of a family of Cucuys (mythical Latino monster) and they invite him to a dinner party on their yacht. Alex comes up with a scheme to invite Chancellor Tootietootie to the Cucuy party in hopes of getting him to appeal the law against werewolves and mortals dating. Lisa Cucuy finds out that Mason is a werewolf and decides that she wants him instead of Max. Alex gets jealous when Lisa flirts with Mason, so Alex says embarrassing things about Mason to everyone on the yacht so that he will be mad enough to turn into a werewolf. Carlos and Julie Cucuy jump off the boat as Cucuys are scared of werewolves. Jerry jumps off after them. Alex tells Lisa that she can get off the boat the easy way or her way. Once Lisa jumps off the boat, Chancellor Tootietootie says that their case will not be appealed as by Mason turning into a werewolf proved his point that they are dangerous. 
In the end, Alex and Mason have to break up to be friends, and Harper finishes writing her first novel, based on the werewolf - wizard relationship. Jerry tries to get Harper to help him out of the water but accidentally grabs her just - finished novel, which causes him to fall back into the water. Meanwhile, Justin tutors a class of delinquent wizards in an attempt to gain a level to re-enter the Wizard Competition. Special guest star: Gregg Sulkin as Mason Harper tries teaching Alex how to live without magic, but the strain of taking responsibility for her actions instead of using magic causes a disagreement between the two best friends when Alex gets in big trouble for crashing Mr. Laritate 's car. As punishment, Alex has to clean the school 's garbage cans. Mr. Laritate forgives Alex after seeing how the crash has influenced her, and Alex realizes that she will be okay without magic. Wanting to thank Harper, Alex uses Mr. Laritate 's lucky life - saving hula girl figurine, Leilani, for luck, as Mr. Laritate does. During Harper 's driving test, Alex turns into the hula girl and guides Harper through the test, knowing that she might otherwise fail. In the end, Harper passes her test. Meanwhile, Jerry wants to pass on the wizard family robe to Max, but Justin has a hard time letting go of it. Guest stars: Bill Chott as Mr. Laritate, Julio Oscar Mechoso as DMV Instructor After she finds out that she will lose her powers, Alex can no longer date Mason, so they decide to try being best friends instead. After watching a film at the theater, Harper, Alex, and Mason go home, where Dean, Alex 's ex-boyfriend, turns up and asks Alex to get back together with him. Mason gets jealous. When Dean brings a gift for Alex, Mason asks what Alex 's relationship with Dean is, and Max tells him that Dean and Alex used to date, making Mason even more jealous. Mason then turns into a werewolf and eats Dean. Alex, Justin, and Max come up with a solution: they make Justin 's toy Captain Jim Bob submarine big and climb inside it, then shrink it and go into Mason 's mouth. While there, Alex, Max, and Justin see Mason 's thoughts, which are all of Alex. They get Dean out, and when Harper, Max, and Justin leave, Alex tells Dean that he was knocked out and that the doctor 's solution was for them all to dress up as cows. Dean then makes her his offer once again: he wants to get back together with her. Alex tells him ' no ', gets back together with Mason, and decides to return to the Wizard Competition. Special guest star: Gregg Sulkin as Mason Alex joins Justin 's wizard class to try to move up in the competition. Max joins a sophisticated wizard society, which suggests he move the wizard competition up, and he reschedules it to just a few days away. Max quits the society when he finds out that they are taking advantage of him. Alex and Justin transform into Max in an attempt to change the date of the family wizard competition back to its original date, and succeed. At the end of the episode they accidentally turn Max indefinitely into a little girl. Guest stars: Bailee Madison as Maxine, Frank Pacheco as Felix, Kari Wahlgren as Helen, Shane Harper as Fidel With Max still a little girl, their parents pay more attention to Max (Maxine), and Alex becomes jealous. Justin takes Maxine to her (Max 's) karate class and Maxine kicks Justin 's butt.
After the class, Alex decides to make Jerry 's favorite sandwich in order to win his love back, but Maxine comes along with Justin, announcing that she beat Justin in karate, and Jerry immediately falls for her cuteness again. More jealous than ever, Alex comes back with Justin to Maxine 's karate class with their parents, having a slight idea of knowing how to change Maxine back to Max. They cast spells on Maxine, Justin using the spell he used on Max that turned him into Maxine, and Alex using the reverse spell. Unfortunately, it only makes Maxine cuter, which makes her furious. Instead of going against the person she was supposed to, she demands that she kicks either Alex or Justin 's butt. Justin goes against her, but seeing that he would lose, Alex casts a spell on Justin to make him a karate master. Too fast for Maxine, Justin beats her, and Maxine is hurt (physically). Jerry and Theresa rush to her side, caring for Maxine more than they care that Justin beat Maxine in karate. Justin leaves, and Alex leaves after him, heartbroken. Back at home, Jerry tells Alex that he fell for Maxine because she reminded him of Alex when she was little and they bond again. Meanwhile, Zeke thinks the Russos hate him, but really it was just Harper 's alibi in order to keep him away from the Sub Station so as not to make him suspicious about Max 's disappearance and Maxine 's appearance. Guest stars: Bailee Madison as Maxine, Dan Benson as Zeke, Lanny Joon as Sensei Justin and his delinquent class have a wizard wand dance assessment to do which will get them back in Wiztech. A new student named Rosie joins, whom Justin falls madly in love with to later find out that she is an Angel. Meanwhile, Jerry, Teresa, and Harper try to think of ideas of how to get more people to come to their restaurant. Harper makes cards which customers get hole punched each time they buy a sandwich; when you get 9 hole punches, they get a free sandwich. Contrary to the original purpose, one man punches his own card, demands a free sandwich, and then brings dozens of people into the restaurant, who also have pre-punched cards. Maxine uses her charming cuteness to convince the customers to not exploit her parents. Meanwhile, Justin kicks Rosie out of his class so that it does not fail the wizard dance assessment. Sympathizing, Alex gets Rosie to rejoin the class and puts a copycat spell on her so that Rosie can mimic Alex 's move and pass the class. The assessment ends in disaster when a squirrel - frog attaches itself to Alex 's foot, and Rosie reveals herself to be an angel. The class fails, but Justin starts dating Rosie. Guest stars: Bailee Madison as Maxine, Frank Pacheco as Felix, Leven Rambin as Rosie, Diane Delano as Penny, Chris Coppola as Customer, Cameron Sanders as Nelvis Justin and Rosie start dating but when their date gets ruined, they want a second date. Alex suggests an Angel Club in Los Angeles, but Rosie says that only Angels are allowed. Maxine will not tell their parents as long as Alex brings her back an Ozzy Osbourne stone. After Justin, Harper and Alex sneak in along with Rosie, Zeedrik asks them if they are angels and they say they are. Back at the sub shop, Theresa and Jerry question Maxine to find out where Justin, Alex and Harper are, and Maxine does not say, so they force her to join a Pageant in order for her to tell them where they are. Things get bad at the angel club when Alex and Harper can not fly, until Justin secretly pulls out his wand and gives Harper flying powers. 
Zeedrik still questions them by giving them a test to sing while playing the harp. Though Alex, Justin, and Harper fail to do so, they are accepted as "angels '' until the key - chain of Ozzy Osbourne 's star reverts to normal. Zeedrik believes they are really angels of darkness, causing everyone in the club to run off, even Rosie. Rosie comes back at the end of the episode to comfort Justin, but when they both look over the city, Rosie 's wings are shown to be black, the color of an angel of darkness ' wings, but Justin does not see because he is looking forward. Meanwhile, Maxine ends up winning the beauty pageant, making Jerry and Theresa celebrate. Guest stars: Bailee Madison as Maxine, Leven Rambin as Rosie, Ransford Doherty as Zeedrik Rosie is an Angel of Darkness, and has been tricking Justin all along to make him evil. Alex finds out, with the help of Tina (a guardian angel in training trying to earn her wings) that Rosie is an Angel of Darkness. Justin crosses over to the dark side with Rosie, steals the "moral compass '', and gives it to Gorog, so Gorog can turn the compass from good to bad, so that the world will be covered in darkness and every human being will be corrupted. Rosie tries to get Justin to run away and save him, because Gorog wants to destroy him, but Justin refuses to leave, stating that he is no longer a wizard but an Angel of Darkness and breaks his wand in half. Alex tries to convince Justin that he is a wizard, a good wizard and not an Angel of Darkness. Meanwhile, Maxine is enrolled in Tribeca Prep, since she is participating in a "cousin exchange program. '' After her first day of school, Maxine laments that she has no friends, which leads Harper to insist Maxine have a slumber party (which is really what Harper wants because she has never had a slumber party), that eventually becomes a disaster. Tina reveals that Rosie was a good angel, and her teacher however, Gorog reached her before her transport arrived and corrupted her. Alex and Tina go to the Dark Realm to save Justin. Alex borrows a pair of wings from the guardian angels. Alex takes the Moral Compass and tries to turn it to good but Justin stops her and then Rosie used her good angel side and they fly up and when they land, Justin turns the compass to good. Tina and Alex take it and run. When Rosie and Justin kiss, Rosie turns into a good angel (with white wings), and Justin loses the dark wings. Guest stars: Bill Chott as Mr. Laritate, Bailee Madison as Maxine, Leven Rambin as Rosie, John Rubinstein as Gorog, China McClain as Tina, Amy Tolsky as Florence Note: This is a one - hour special. Justin invites Professor Crumbs to check on his delinquent wizards class, to see how their wizard training is progressing. During the conversation, Maxine interrupts, causing Alex and Justin to panic since they do not know how to explain "Maxine '', and they try to find a way to turn Maxine back to Max. Meanwhile, Harper takes a role in the school 's "Spirit of America '' play, only to be replaced by Maxine. Guest stars: Bill Chott as Mr. Laritate, Ian Abercrombie as Professor Crumbs, Bailee Madison as Maxine, Cameron Sanders as Nelvis, LJ Benet as Little Crumbs, Jackie Evancho as Choir Girl When Alex gets bored of Zeke 's failing magic tricks, she decides to use magic to make Zeke 's tricks come true. Everything goes downhill when Zeke tells Harper he believes he is a wizard and is now booking a magic act at the sub station for a kid 's birthday party. 
Since Justin and Harper are tired of Alex ruining things all the time, they decide to leave everything up to her. Harper, not wanting to let Zeke make a fool of himself, tells Alex to tell Zeke the truth about wizards, but she is reluctant to, as exposing wizardry to another mortal could get her kicked out of the wizard competition forever if anyone finds out. Zeke goes on with his belief that he is a "wizard '', but he finds out Harper is keeping a secret from him so he breaks up with her. Alex realizes what a mess she created and on the day of Zeke 's magic act, she makes Harper disappear before she can expose wizardry to Zeke, which freaks him out. The Russos really are in deep waters when Zeke is going to saw Max in half with a real saw for his next act, so Justin takes matters into his own hands and freezes everyone except for himself and Zeke. Justin tells Zeke that he is a wizard, not Zeke, and that he did everything. Still not believing him, he flashes Zeke into their lair and unveils the secret to him. They go back outside and tell Harper and Alex who are fighting, that Zeke now knows about the existence of wizards, but he promises he will never tell anyone. Zeke and Harper reconcile their relationship and Alex and Harper apologize and make up with each other. Guest stars: Dan Benson as Zeke, Maurice G. Smith as Big Mitch When a former legendary Luchador "Muy Macho '' visits the Sub Station, Alex finds out that she was the reason why he quit lucha libre (wrestling), so she tries to make it up to him by arranging a lucha libre match with him and Jerry to restore his honor. Meanwhile, Zeke asks too many questions about wizardry, so Justin offers to grant him one magical wish if he promised he would stop questioning him about magic. Unfortunately, Zeke 's wish is to defeat Muy Macho in the match. Max pretends to like Talia 's hobbies, but does not enjoy them. When Alex asks Mason if she can meet his parents, Mason shows his fake parents and she finds out soon after. She gets upset at Mason and demands him to take her to his real parents. Mason takes her to the Autumn Moon Feast (werewolf holiday) to meet his parents. Soon after that Alex finds out he told his parents that she is a werewolf and is frustrated. She uses a spell to change herself and Harper into werewolves. She tries to be the worst werewolf possible in front of Mason 's parents during the feast. In the end Mason 's parents like her as a werewolf and she says to him that they do not know the real her. She begins to leave and he says that he truly loves Alex to his parents and that she is a wizard. She changes back to a wizard and Mason turns back to human form. After the parents find out they try to eat her and Mason and Alex make a near escape. Meanwhile Max makes his parents ' food taste like kid food in order to make it taste good. Afterwards Jerry and Theresa eat it, and as a result they act like little kids. Justin thinks he will have to fix it but Max comments on how he does not need help, as he will be the future family wizard. Max uses another spell on cookies to make them older, but they turn into teenagers. Finally, Justin uses a spell on a pizza to turn them back into their regular selves. Special guest star: Gregg Sulkin as Mason When Alex, Justin and Max win four tickets to the "Beast Bowl '', the tickets are delivered by Beast Tamer Chase Riprock, who begins flirting with Alex and invites the group for a pre-bowl tour. Alex invites Mason to go with them but he says he can not. 
During the tour, despite Chase 's warnings, Justin attempts to tame the beast so that Max can have his photo taken with the beast, but both are caught and banned from the event. Chase tries to kiss Alex but she rejects him and returns to the mortal world where she finds Mason coming out a hardware store. Mason is evasive about his plans, which involve making an anniversary present for Alex, so Alex returns to the Beast Bowl. Harper tells Mason that Chase likes Alex and urges him to surprise her with the gift at the Beast Bowl. Meanwhile, Justin and Max return to the Beast Bowl disguised as beast taming clowns where they allow the beast to escape and snatch Alex who throws the whip to Chase. Chase tames the beast who drops Alex and Mason is able to reveal his gift, which is a sculpture of himself and Alex. Alex and Mason leave together, with the sculpture. Special guest star: Gregg Sulkin as Mason Guest star: Nick Roux as Chase Alex has been crowned Wizard of the Year after saving the world from dark angels, as well as being reinstated in the Wizard Competition, and a banquet honoring her achievement is hosted. With paparazzi in tow, Chase Riprock visits the lair to congratulate Alex. As he leaves, Keith Keith hounds them about being in a romantic relationship, to which Alex states she already has a boyfriend. Later, Mason and Alex watch his gossip segment, and to their horror, Keith Keith confirms that Chase and Alex are dating. For Alex winning Wizard of the Year, the Russo family must record a holographic video congratulating Alex. However, much to Justin 's displeasure, Justin 's video instead only contains him complaining about Alex. The night of the awards banquet, Mason stands Alex up, leaving her devastated. At the banquet, Chancellor TootieTootie tells Justin that his display of affection and pride for Alex may reinstate him into the competition, causing Justin to be frantic and do everything within his power to destroy the tape. Chase purposely invites himself to sit at the Russo 's table. However, Mason shows up late and sees Alex and Chase flirting together. A short battle ensues between Mason and Chase. Alex soon breaks up the fight, and is forced to choose either Mason or Chase. Appalled by both of their behaviors that evening, Alex chooses neither and breaks up with Mason. Special guest star: Gregg Sulkin as Mason The Russos head to the beach on an extremely hot day, and Harper hopes to enjoy reading her book while at the beach. The Russos come across Zelzar, a fortune - telling machine. Jerry warns the kids about Zelzar -- he is from the wizard world, and a wizard 's fortune really comes true. After Max and Justin receive good fortunes, Alex decides to get a fortune despite her father 's warning. However, her fortune says, "Say good - bye to your life. '' Alex thinks nothing of it and proceeds to enjoy the day at the beach, but comes close to life - threatening situations multiple times. Alex, Justin, Max, and Harper make a compromise with Zelzar -- if they can extract him from the machine and let him enjoy a day at the beach, he will take back Alex 's fortune. Max replaces Zelzar, and Justin is forced to drive the beach - goers away from the machine (as Max gives out silly fortunes about his new facial hair); meanwhile, the girls help Zelzar fancy his day at the beach. In the end, Zelzar reveals that while he can not take away Alex 's fortune, he can give it to the next person. The next person happens to be a little girl, who gets Alex 's fortune. 
However, a man from Random Prize Giveaway then shows up, saying: "Say good - bye to your life because we 're randomly giving you one million dollars! '' When it is announced that a huge asteroid is hurtling towards the earth, the Russos decide to hide in the wizard world, but Alex, Justin, and Max change their minds and go into space to destroy the asteroid. Meanwhile, Alex finds out that she is not graduating high school, but when a small part of the asteroid hits the school, she gets a chance to graduate. Justin 's class of delinquent wizards has completed their studies, which means they are eligible to rejoin WizTech if they all pass their final evaluation. When they all fail, Alex suspects something is not right. The truth is soon uncovered -- the class had actually passed, but the historian who checked the exams was revealed to be evil and had lied about the exam scores. He tries to attack Justin 's class, so Justin must stop the historian and save Alex, but one of the students in the class is able to stop him by pulling the most powerful wand in the wizard world out of a crystal ball and using it. This means that he is actually the descendant of a famous wizard who had previously been thought to have no living descendants. Justin 's class is revealed to have passed the final exam, so they are readmitted into WizTech and Justin is officially back in the family wizard competition. Meanwhile, Jerry and Theresa realize that they never saved any of Max 's childhood accomplishments, so they try to recreate them with the help of Harper. During the credits, it is revealed that Max had kept his favorite memories all along. Special guest star: Tim Conway as Cragmont Alex and Harper decide to do a puppet show to raise money for a new apartment, and Zeke and Justin help them, but Alex forgets to write the lines for the show. Harper gets mad and decides to do a show of her own, making her puppets act the way Alex treats her. Alex makes Justin and Zeke puppets for her show and steals Harper 's crowd, which makes Harper upset. Then Alex apologizes to Harper using the puppets. Meanwhile, Talia 's parents meet Theresa and Jerry for the first time, but it does not go well. Justin puts a spell on Talia 's parents to make them forget they ever met Theresa and Jerry. They meet again, but this time they end up watching a 16 - hour play together. Special guest star: Joely Fisher as Meg Robinson Harper and Zeke spend so much time together that Alex decides to make a Harper - clone. Justin comes up with lots of different ideas to save money in the sub shop, and Max starts a fast - food restaurant in the wizard world. Guest star: Dan Benson as Zeke Alex and Harper receive a flyer for a hotel. They meet the hotel 's owner, Dexter, who offers them Apartment 13B, which is on a floor for non-humans. They rent it, but Harper is forced to use a training wand that can only open things. Later at their apartment, they realize Mason lives on the floor. Mason will stop at nothing to win Alex back; Alex tells him she does not want a boyfriend, but Mason is relentless. That night, Alex and Harper throw a party, which the whole floor attends, including Felix. Mason goes to the party and uses Harper 's wand to open Alex 's heart. However, she flirts with an ugly giant. Mason tries again, but instead makes Alex open her heart to Felix. Meanwhile, Theresa and Jerry trick Justin into going to Alex 's apartment to drive him out of the house.
When Justin arrives, he fixes Mason 's mistakes by resetting Harper 's wand and it is revealed that despite his actions, he misses Alex. Alex, freed from Mason 's spell, is angry at Mason, and Mason, deflated, leaves. Outside their door, Dexter turns into Gorog when no one is looking. He wants revenge. Max appears in the apartment, and when Harper and Alex kick him out, he encounters Gorog, who gives him a flyer for a free camp. Max accepts. Special guest star: Gregg Sulkin as Mason Guest stars: John Rubinstein as Gorog, Frank Pacheco as Felix, Fred Stoller as Dexter, Christopher Douglas Reed as Ogre Alex and Harper have trouble paying the apartment bills, so Dexter recommends getting a roommate. Alex and Harper find a wealthy ghost named Lucy, who they let move in. Meanwhile, Justin orders a robot to help Zeke at the sub station, as Justin is busy studying for the wizard competition. The robot is lost in transit, but Dexter (Gorog in disguise) makes an evil robot to spy on Justin to pass information to Gorog. While Zeke and the robot are working, the robot asks him about Justin and the wizard competition, which confuses Zeke because no one had talked about it in front of the robot. Later that night, Lucy starts crying and explains that her boyfriend Donny had disappeared sixty years prior in a plane accident. Alex tries to set Lucy up with Justin, but he makes matters worse. Later, Lucy meets Mason and they go out on a long date, which makes Alex jealous. Meanwhile, the robot again asks Zeke about the wizard world and Zeke realizes that the robot might have a secret agenda. He tells Justin, who simply does not believe him and is ignorant of the situation because of his intensive studying. Because of her jealousy, Alex helps Lucy locate Donny, who had been stranded on an island in the Bermuda Triangle. They soon reconnect, but not after Alex realizes that she can not get back home, due to heavy magnetic currents interfering with her magic. Mason goes to save Alex after he finds out that she had left for the Bermuda Triangle. The robot attacks Justin and Zeke after they find him scanning the spell book. Mason and Alex manage to get off of the island using the Bermuda shorts given to them by Justin. The robot is shown giving Gorog the spell he needed that he got from the Russos. Special guest star: Gregg Sulkin as Mason Guest stars: Dan Benson as Zeke, Fred Stoller as Dexter, Josh Sussman as Hugh Normous, Linsey Godfrey as Lucy, Travis Caldwell as Donny, Michael Carbonaro as Robot Note: This is part 2 of a 4 - part arc. Alex and Harper are going to their apartment when they run into Mr. Laritate, who also lives in the hotel. Felix tells Justin he broke his wand and Justin informs him that it 's fake, because he saw batteries in it. He concludes that someone stole Felix 's wand. Dexter is showing a zombie to the hall for magical creatures while he introduces them to Alex and Harper. Mason, who will stubbornly not stop asking Alex on a date, meets them in the elevator. When Alex ignores him, Mason messes with the buttons so he can spend more time with her and they land on the second floor. Justin tells Jerry that Felix lost his wand and Jerry tells them to use the abracadoobler wand app and Felix realizes that his wand is around the area. Meanwhile, because Mason messed with the buttons, all the humans have discovered the mysterious thirteenth floor. Alex lies and tells the humans that they accidentally got into a secret open house floor for the people in the hotel. Mr. 
Laritate, after all the other humans have left, discovers the thirteenth floor too and sees Alex, Harper, and Mason. When Mr. Laritate sees the zombie, Alex explains that this is a haunted house, but Mr. Laritate does not believe them. The zombie bites Mr. Laritate and turns him into a zombie. They take him to their apartment while Alex looks for a spell to turn Mr. Laritate human again. Mr. Laritate runs away, and Alex, Harper, and Mason suspect he went to the substation. Professor Crumbs questions Felix, who reveals the 13th floor to them; Crumbs denies the existence of a "13th floor '', so they go to investigate. At Waverly Place a festival is being held while Alex tells Jerry what happened. Alex and Mason, trying to sneak the zombie into the substation, are forced to join a dance. Then, after Harper causes a distraction, they take Laritate to the substation and give him a medicine from the wizard world that turns him human. He remembers nothing and goes outside to dance. Alex kisses him on the cheek, and Mason suggests they just be friends. Alex gets back together with Mason. Crumbs tells Felix the floor should not exist. Harper, Mason, and Alex return to the thirteenth floor. Justin tells them that the floor does not exist. Crumbs thinks that there is evil going on. Dexter then appears with dark angel wings, and it is revealed that he stole Felix 's wand. Dexter tricks Felix into turning to the dark side in return for the wand. Felix casts a spell that traps them all on the floor. Back at Waverly Place, Mr. Laritate is dancing while Jerry is selling chili; Mr. Laritate gets some and has to drink water to get "the burn '' out of his mouth. Special guest star: Gregg Sulkin as Mason Guest stars: Bill Chott as Mr. Laritate, Ian Abercrombie as Professor Crumbs, Frank Pacheco as Felix, Fred Stoller as Dexter, Regan Burns as Everett, Jane Carr as Martha St. Claire Note: This is part 3 of a 4 - part arc. The wizards of the 13th floor come out and Alex explains that Dexter is evil. Dexter reveals himself as Gorog and announces that he is trying to rebuild his army of darkness. Felix uses a spell that takes all the wizards from the 13th floor with them so that they can all be turned evil -- all except Alex, Justin, and Harper, who are left behind. A trashcan tells them to go through it; however, Alex ends up back in the room. Justin realizes only mortals can go through it. Alex convinces Harper to go through it, and Harper then arrives in the wizard lair. Gorog takes the kidnapped wizards to his evil lair, where he plans to make them evil. He tricks Hugh Normous into becoming one of the evil wizards in his army. Professor Crumbs fakes becoming evil when Gorog plans on making him evil. Harper contacts Max and tells him about Gorog. Max arrives and suggests they look for spells in the wizard books. Gorog makes the wizard army dig a hole through the atmosphere to the wizard world so they can take over it. Gorog also tries to trick Mason into becoming a member of his army, but Mason refuses; however, Gorog eventually tricks him into joining. Gorog then makes Felix send a black hole to suck Alex and Justin in. Gorog gives the kidnapped wizards their wands so they can take over the wizard world, but Crumbs refuses to let them. Felix then gets rid of Crumbs on Gorog 's order. The black hole has nearly sucked in Alex and Justin when Crumbs arrives at the wizard lair. Max suggests they put a black hole in the wizard lair. Max goes in and rescues Alex and Justin. Crumbs tells them to use their magic to stop Gorog.
They arrive at Gorog 's lair and attempt to destroy him, but Mason tries to convince Alex to join them. Gorog also reveals Juliet (young again), Justin 's ex-girlfriend, and she convinces Justin to join. Alex also surrenders, and Max is forced to as well, the three then use their combined magic to get rid of Gorog, and all the supernatural beings are freed from his influence. Juliet and Justin get back together. Alex tricks Harper into moving in the Russos ' basement by making her think all the wizards and supernatural creatures, with no home, are going to stay. Special guest stars: Gregg Sulkin as Mason, Bridgit Mendler as Juliet van Heusen Guest stars: Ian Abercrombie as Professor Crumbs, John Rubinstein as Gorog, Frank Pacheco as Felix, Fred Stoller as Dexter, Josh Sussman as Hugh Normous, Christopher Douglas Reed as Ogre Note: This is part 4 of a 4 - part arc. The current landlord of the Russo 's building decides to sell the sub shop, evicting the Russos of the shop, as well as their apartment and lair, which contains their portal. Without the portal, the Russos will lose all contact with the Wizard world. So, the entire family (along with Harper) travel back to 1957 to stop Jerry 's father (the previous owner of the shop) from selling the shop to the landlord, so that they do not get evicted. Jerry 's father agrees not to sell the shop, and the Russos return to present day. However, they accidentally leave Harper behind when Harper is distracted by a woman with a poodle, and Harper suggests that the woman should put a picture of a poodle on her skirt, creating a poodle skirt, which were popular in the 1950s. Back in present day, the Russos (who have not yet realized Harper 's absence) find the sub shop obsolete, boarded up, and empty, with the subway car and the lair missing, too. The Russos realized that they have messed with the fabric of time, and something they did while time - traveling in 1957 must have created a ripple effect, and affected present day. While Alex suggests that they go back to 1957 to find out what they did wrong, Justin denies this, saying they should not solve the problem by doing exactly what caused the problem, however Theresa states that they will have to go back, when she realizes that they left Harper in 1957. The Russos return to 1957, where Jerry 's father reveals that he did not sell the restaurant to the landlord, however the restaurant failed to make enough business, and that he has been forced to close it. Justin realizes that when Jerry 's father lost the restaurant, he moved out of the building and since their family no longer lived in the building, the lair disappeared in present day. The Russos decide just to find Harper and go back to present day, and discover that Harper enrolled at Tribeca Prep as soon as they forgot her in 1957, as Harper did not want to ruin her perfect attendance record. Justin, Alex, and Max go to Tribeca Prep to retrieve Harper, while Jerry and Theresa wait at the sub shop, however Harper decides to stay in the past, as she finds new popularity in 1957 that she does not experience in present day. While at Tribeca, Max introduces the high - five to several students. Alex realizes that if they make the sub shop the 1957 high school hangout, the sub shop will get better business and will not have to close. However, no one has any fun at the sub shop that night as the jukebox is broken. 
Just as everyone is about to leave, Max plugs his MP3 player into the jukebox and Harper uses her popularity to get the students dancing and having fun at the shop, and business at the sub shop peaks. The Russos return to the present day, where the lair and the sub shop are back and Jerry owns the entire building, while the landlord is now the owner of a janitorial service instead of the building, and the high - five is now called a "Max ''. Alex then remembers that they forgot Harper back in 1957, and they go back and get her. (The Russos accidentally end up in 1977 twice in the episode, thanks to Justin.) The episode begins on Waverly Place, where Harper is reading a storybook to a group of children when Alex pushes her away. Alex explains that it 's a Wizard Storybook, a magical storybook that has the power to make whatever is read from it real. Then Harper is teleported into the loft, where Theresa, her evil (or "wicked '') stepmother, calls her Harperella (a combination of Harper and Cinderella) and forces her to do the chores. Then her evil stepbrothers, Justin and Max, come down and give her laundry to wash. Later Harper is washing the Waverly Substation when Alex appears as her fairy godmother. Alex explains to Harper that the fairy tale has to run as it is supposed to before things can go back to the way they were, and then vanishes. Then Max comes in and says that Prince Charming has decided to use the Substation as the ballroom for the royal party. Harper says that she wants to go to the party, but her stepfamily laughs at her. Back in the loft, Harper summons Alex, who uses a spell to transform Harper 's book dress into a princess gown and glass bowls into glass slippers. They go down (through the door) into the street, and Alex turns the hot dog stand into a carriage to take Harper to the party. There she dances with Prince Charming, Zeke. At the stroke of midnight she goes outside, leaving her glass slipper behind. Then, to both girls ' shock, Justin, Max and Zeke (with pig noses) rush inside the substation, chased by the big bad wolf, Mason. Alex realizes that some pages are missing and that the Cinderella and Three Little Pigs stories have been blended together. The pig - nosed boys and Harper rush into the loft, chased by Mason. While Alex searches for the pages, Harper has to keep Mason from eating Zeke (who built the straw house). Alex eventually finds the missing pages in a drawer, along with pages from other books. Alex tries a page from Max 's favourite magic trick book, and David Copperfield comes into the room and does some magic tricks, which impresses Harper. Another page that Alex tries is from Justin 's science book, and a caveman appears; the caveman has a thing for Harper, so when Harper is impressed by the magic tricks, he gets jealous and tries to attack David Copperfield. Alex finds the right pages and the story goes back to Cinderella. Zeke Charming comes in and Theresa tries on the slipper, but it does not fit. Then Harper tries it and it fits. She and Zeke go back to the royal party, where they dance; the story then ends and the world returns to normal. Harper is back in the real world, having lived happily ever after. Finally, Alex arranges for Harper and Zeke to dance together because they were so good when they danced in the fairy tale, but when they practise they are really bad, and the ball dance turns into a clogging dance.
Special guest stars: David Copperfield as himself, Gregg Sulkin as Mason Guest star: Dan Benson as Zeke During a dinner that Alex has made for the family, Professor Crumbs visits to reveal that, with this selfless act, the Russo children can finally have their family wizard competition. Round one of the competition is a trivia round, in which questions related to magic are asked. During this round, Harper and Zeke appear: Zeke has accidentally smelled a purple substance in the lair that causes him to turn purple, which leads a griffin to kidnap him and Harper. Alex, Justin and Max use their three time - outs to go and rescue their friends. They make it back, only to discover that they have been gone too long and have been disqualified from the competition. They all return home, where the lair disappears and they all lose their magic, thus becoming mortals. Justin and Max begin to resent Alex because her insistence on saving Zeke and Harper resulted in them losing. Convinced that the loss of their powers has ruined the family, Jerry decides to sell the sub shop. The kids go without magic, slowly regaining trust in each other as they reopen and run the shop. Suddenly, they are returned to the site of the Wizard Competition, where the host reveals that the griffin attack (which was actually meant for Professor Crumbs) and the weeks they went without magic were tests of their family bond. They enter the final round of the competition, a massive labyrinth; the first one out of the maze will win. Justin makes it out first and is declared the Family Wizard. However, he decides he can not accept this because Alex was about to get out first: Justin had been trapped by magical vines and Alex, near the exit, came back to help. Alex is thus declared the true family wizard. Professor Crumbs, proud of Justin 's integrity in telling the truth, declares that he is going to retire as Headmaster of Wiz - Tech and names Justin the new headmaster, granting him full wizard powers. As Max is the only Russo child who does not get powers or become a full wizard, Jerry decides to one day pass down the sub shop to Max, which he happily accepts. It ends with the Russo family hugging and gathering together, with Alex stating that they are all finally happy at the same time. Special guest stars: Gregg Sulkin as Mason, Bridgit Mendler as Juliet van Heusen Guest stars: Dan Benson as Zeke, Ian Abercrombie as Professor Crumbs, Andy Kindler as Chancellor Tootietootie
is there a hurricane with an l name
List of retired Atlantic Hurricane names - wikipedia This is a cumulative list of previously used tropical cyclone (tropical storm and hurricane) names which have been indefinitely removed from reuse in the North Atlantic region. The naming of North Atlantic tropical cyclones is currently under the oversight of the Hurricane Committee of the World Meteorological Organization. This group maintains six alphabetic lists of names, with one list used each year; this normally results in each name being reused every six years. However, in the case of a particularly deadly or damaging storm, that storm 's name is retired, and a replacement starting with the same letter is selected to take its place. The decision whether to remove a name is made at an annual session of the Hurricane Committee that occurs in the spring, after the conclusion of the North Atlantic hurricane season. The practice of retiring storm names was begun by the United States Weather Bureau in 1955, after major hurricanes Carol, Edna, and Hazel struck the Northeastern United States during the previous year. Initially their names were retired for 10 years, after which time they could be reintroduced; however, in 1969, the policy was changed to have the names retired indefinitely. In 1977, the United States National Oceanic and Atmospheric Administration (NOAA) transferred control of the naming lists to the Hurricane Committee. Since the formal start of naming during the 1947 Atlantic hurricane season, an average of one Atlantic storm name has been retired each year, though many seasons (most recently 2014) did not have any names retired. The deadliest storm to have its name retired was Hurricane Mitch, which caused over 10,000 fatalities when it struck Central America during October 1998, while the costliest was Hurricane Katrina, which caused over $108 billion in damage when it struck the U.S. Gulf Coast in August 2005. The most recently retired storm names are Hurricane Matthew and Hurricane Otto. By 1947, tropical cyclones developing in the North Atlantic Ocean were named by the United States Army Air Forces, using the phonetic alphabet, in private communications between weather centres and aircraft. This practice continued until September 1950, when the names started to be used publicly after three hurricanes (Baker, Dog, Easy) occurred simultaneously and caused confusion among the media and the public. Public use of the phonetic alphabet to name systems continued over the next two years, until it was decided at the 1953 Interdepartmental Hurricane Conference to start using a new list of female names that season, as a second phonetic alphabet had been developed. During the active but mild 1953 Atlantic hurricane season, the names were readily used in the press with few objections recorded; as a result, the same names were reused the next year with only one change: Gilda for Gail. Over the next six years a new list of names was developed ahead of each season, until in 1960 forecasters developed four alphabetical sets and repeated them every four years. These new sets followed the example of the typhoon names, excluded names beginning with the letters Q, U, X, Y, and Z, and kept to female names only. In 1955, it was decided to start retiring the names of significant tropical cyclones for 10 years, after which they might be reintroduced; the names Carol and Edna were reintroduced ahead of the 1965 and 1968 hurricane seasons respectively.
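The six - list rotation and the retire - and - replace rule described above can be modelled in a few lines. The sketch below is only an illustration of that bookkeeping, not the WMO 's actual tooling, and the sample lists, start year and replacement name are placeholders rather than the official ones.

```python
# Minimal sketch of the six-list rotation and retirement rule described above.
# The lists, start year and replacement name are illustrative placeholders,
# not the official WMO naming lists.

# Six alphabetic lists; the list used in year Y is reused in year Y + 6,
# so each name normally comes back around every six years.
name_lists = [
    ["Ana", "Bill", "Claudette"],       # list 0
    ["Alex", "Bonnie", "Colin"],        # list 1
    ["Arlene", "Bret", "Cindy"],        # list 2
    ["Alberto", "Beryl", "Chris"],      # list 3
    ["Andrea", "Barry", "Chantal"],     # list 4
    ["Arthur", "Bertha", "Cristobal"],  # list 5
]

def list_for_year(year, start_year=2019):
    """Return the naming list in use for a given season."""
    return name_lists[(year - start_year) % 6]

def retire(name, replacement):
    """Retire a name after a deadly or damaging storm and slot in a
    replacement starting with the same letter."""
    if replacement[0].upper() != name[0].upper():
        raise ValueError("replacement must start with the same letter")
    for names in name_lists:
        if name in names:
            names[names.index(name)] = replacement
            return
    raise KeyError(name + " is not on any active list")

# A hypothetical retirement: the replacement first appears the next time
# that particular list cycles around, six years after the retired storm.
retire("Bonnie", "Blanche")
print(list_for_year(2020))  # list 1, now containing "Blanche"
print(list_for_year(2026))  # the same list, reused six years later
```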
At the 1969 Interdepartmental Hurricane Conference the naming lists were revised after it was decided that the names Carol, Edna and Hazel would be permanently retired because of their importance to the research community. It was also decided that any significant hurricane in the future would also be permanently retired. Ahead of the 1971 Atlantic hurricane season, 10 lists of hurricane names were inaugurated by the National Oceanic and Atmospheric Administration. In 1977 it was decided that the World Meteorological Organization 's (WMO) Hurricane Committee would control the names used; the committee subsequently decided that six lists of names would be used in the Atlantic Ocean from 1979 onwards, with male names included. Since 1979 the same six lists have been used by the United States National Hurricane Center (NHC) to name systems, with the names of significant tropical cyclones retired from the lists permanently and replaced with new names as required at the following year 's Hurricane Committee meeting. At present, the name of any tropical cyclone may be retired or withdrawn from the list of names at the request of a member state if it acquires notoriety for various reasons, including the number of deaths, the amount of damage or other impacts. The committee subsequently discusses the proposal and, either through building consensus or a majority vote, decides if the name should be retired or withdrawn. In March 2017, members of the British Caribbean Territories proposed that a third retirement criterion be added: the tropical cyclone must have sustained winds of at least 96 mph (154 km / h). This came in light of the retirement of Tropical Storm Erika in 2015, which caused catastrophic flooding and mudslides in Dominica without producing sustained tropical storm - force winds on the island. No action has been taken on this proposal yet. Between 1954 and 1959, eight names were deemed significant enough to be retired for 10 years due to their impact, before being permanently retired after 1969. There were no names retired after the 1956, 1958, and 1959 seasons. Collectively, these storms resulted in at least 2,090 fatalities and over $2.06 billion in damage. The deadliest hurricane was Hurricane Hazel, which killed at least 701 people, while the costliest was Hurricane Diane, which caused US $856 million in damage. In 1960, four rotating lists of names were developed to avoid having to create new lists each year, while the practice of retiring any particularly damaging storm names for 10 years continued, with 11 names deemed significant enough to be retired during the decade. At the 1969 Hurricane Warning Conference, the National Hurricane Center requested that Carol, Edna, Hazel, and Inez be permanently retired due to their importance to the research community. This request was subsequently accepted and led to today 's practice of retiring the names of significant tropical cyclones permanently. There were no names retired after the 1962 and the 1968 seasons. Collectively, the 11 systems were responsible for over 9,000 fatalities and in excess of US $4.41 billion in damage. Starting in 1979, the World Meteorological Organization began assigning both male and female names to tropical cyclones. This decade featured hurricanes David and Frederic, the first male Atlantic hurricane names to be retired. During this decade, nine storms were deemed significant enough to have their names retired.
Together these nine systems caused at least $9.41 billion in damage, while more than 10,500 people lost their lives. No names were retired by the Hurricane Committee after the 1971, 1973, or 1976 seasons. After control of the naming scheme was turned over to the World Meteorological Organization 's Hurricane Committee during the mid-1970s, the 1980s marked the least prolific decade in terms of the number of retired storms, with seven names warranting removal. Between them the seven systems caused over $20.9 billion in damage, while over 893 people lost their lives. Hurricane Gilbert was the most intense tropical cyclone during the decade by pressure, with a minimum value of 888 hPa (26.22 inHg). This was the lowest recorded pressure in a North Atlantic hurricane until Hurricane Wilma surpassed it during 2005. In addition, Hurricane Allen was the most intense tropical cyclone during the decade by wind speed, with maximum 1 -- minute sustained winds of 190 mph (305 km / h). This remains the highest sustained wind speed of any Atlantic hurricane on record. No names were retired by the Hurricane Committee after the 1981, 1982, 1984, 1986, or 1987 seasons, the most of any decade since the introduction of the practice of retiring hurricane names. During the 1990s, the Atlantic Ocean moved into its active era, which led to more tropical cyclones forming during the hurricane seasons. The decade featured Hurricane Andrew, which at the time was the costliest hurricane on record, and also Hurricane Mitch, which is considered to be the deadliest tropical cyclone to have its name retired, killing over 11,000 people in Central America. A total of 15 names were retired in this decade, with seven of those during the 1995 and 1996 seasons. There were no names retired after the 1993, 1994 and 1997 seasons. After the Atlantic basin had moved into the warm phase of the Atlantic multidecadal oscillation during the mid-1990s, the 2000s marked the most prolific decade in terms of the number of retired storms, with 24 names warranting removal. The decade featured one of the costliest tropical cyclones on record, Hurricane Katrina, which inflicted roughly US $108 billion in damage across the Gulf Coast of the United States. Katrina was also the deadliest hurricane to strike the United States since the 1928 Okeechobee hurricane. After causing approximately US $9 billion in damage, Tropical Storm Allison became the first tropical storm in this basin to have its name retired, while subtropical storms started to be named during 2002. Hurricane Jeanne was the deadliest storm during the decade and was responsible for over 3,000 deaths when it impacted Haiti and other parts of the Caribbean as a tropical storm and minimal hurricane. During October 2005, Hurricane Wilma became the most intense tropical cyclone in the Atlantic basin on record, with a central pressure of 882 hPa (26.05 inHg). There were no retired names after the 2006 and 2009 hurricane seasons. Collectively, the 24 systems were responsible for nearly 7,900 fatalities and in excess of US $280 billion in damage. So far during the current decade, nine tropical cyclone names have been retired. Collectively, these systems killed at least 1,117 people and caused at least $110 billion worth of damage.
So far, Hurricane Igor is the most intense tropical cyclone by pressure to have its name retired during the decade, with a minimum value of 924 hPa (27.29 inHg). Hurricane Matthew is the deadliest and the most intense by wind speed during the decade, with over 600 deaths and maximum 1 -- minute sustained winds of 165 mph (270 km / h) respectively. Hurricane Sandy is currently the costliest Atlantic hurricane of the 2010s, as well as the third - costliest in history, with US $75 billion in damage attributed to it. 2014 saw no retired names.
who wrote all of elton john's songs
Bernie Taupin - wikipedia Bernard John Taupin (born 22 May 1950) is an English lyricist, poet, and singer, best known for his long - term collaboration with Elton John, writing the lyrics for the majority of the star 's songs. In 1967, Taupin answered an advertisement placed in the UK music paper New Musical Express by Liberty Records, a company that was seeking new songwriters. Around the same time, Elton John responded to the same advertisement, and the duo were brought together, collaborating on many projects since. In 1971, journalist Penny Valentine wrote that "Bernie Taupin 's lyrics were to become as important as Elton (John) himself, proved to have a mercurial brilliance. Not just in their atmospheric qualities and descriptive powers, but in the way he handled words to form them into straightforward poems that were easy to relate to. '' Taupin was born at Flatters Farmhouse, which is located between the village of Anwick and the town of Sleaford in the southern part of Lincolnshire, England. Of French ancestry, Taupin 's father was educated in Dijon, and was employed as a stockman by a large farm estate near the town of Market Rasen. Taupin 's mother worked as a nanny, having previously lived in Switzerland. The family later moved to Rowston Manor, a significant step up from Flatters Farmhouse, which had no electricity. Taupin 's father decided to try his hand at independent farming, and the family moved to the run - down Maltkiln Farm in the north - Lincolnshire village of Owmby - by - Spital. Taupin 's 11 - year younger brother, Kit, was born there. Unlike his older brother Tony who attended a grammar school (selective secondary school), Taupin was not a diligent student, although he showed an early flair for writing. At age 15, he left school and started work as a trainee in the print room of the local newspaper, The Lincolnshire Standard, with aspirations of becoming a journalist. Taupin soon left that job, and spent the rest of his teenage years hanging out with friends, hitchhiking the country roads to attend youth club dances in the surrounding villages, playing snooker in the Aston Arms Pub in Market Rasen and drinking. Taupin had worked at several part - time, dead - end jobs when, at age 17, he answered the advertisement that eventually led to his collaboration with Elton John. Taupin 's mother had studied French literature and his maternal grandfather Poppy was a classics teacher and graduate of the University of Cambridge. They taught him an appreciation for nature and for literature and narrative poetry, both of which influenced his early lyrics. In 1967, Taupin answered an advertisement for talent that was placed in the New Musical Express by Liberty records A&R man Ray Williams. Elton John answered the same advert. Neither Taupin nor John passed the audition for Liberty Records. Elton told the man behind the desk that he could not write lyrics, so the man handed Elton a sealed envelope from the pile of people submitting lyrics, which he opened on the London Underground ride home. The envelope contained poems by Taupin. The duo have collaborated on more than thirty albums to date. The team took some time off from each other for a while between 1977 and 1979, while Taupin worked with other songwriters, including Alice Cooper, and John worked with other lyricists, including Gary Osborne and Tom Robinson. 
(The 1978 single - only A side "Ego '' was their only collaboration of note during the period, although John / Taupin B - sides such as "Lovesick '' and "I Cry at Night '' were issued with the respective singles "Song for Guy '' and "Part - time Love '' from the album A Single Man.) John and Taupin resumed writing together on (at first) an occasional basis in 1980, with Taupin contributing lyrics to only three or four songs each on albums such as The Fox, 21 at 33 and Jump Up!. However, by 1983 's Too Low for Zero, the two renewed their partnership on a full - time basis and from that point forward Taupin was again John 's primary lyricist. (John often works with other lyricists on specific theatrical or film projects such as 1994 's The Lion King and 2000 's Aida, both of which featured lyrics by Tim Rice.) Taupin 's lyrics include such songs as "Rocket Man '', "Levon '', "Crocodile Rock '', "Honky Cat '', "Tiny Dancer '', "Candle in the Wind '', "Saturday Night 's Alright for Fighting '', "Bennie and the Jets '', "Goodbye Yellow Brick Road '', "Mona Lisas and Mad Hatters '', "Do n't Let the Sun Go Down on Me '', "The Bitch is Back '', "Daniel '', and 1970 's "Your Song '', their first hit. Hits in the 1980s include "I 'm Still Standing '', "I Guess That 's Why They Call It The Blues '', "Sad Songs '', and "Nikita. '' In the 1990s, Taupin and John had more hits, including "The One '', "Simple Life '', "The Last Song '', "Club at the End of the Street '' and "Believe. '' In September 1997, Taupin rewrote the lyrics of "Candle in the Wind '' for "Candle in the Wind 1997 '', a tribute to the late Diana, Princess of Wales. Bernie Taupin on writing the lyrics for "Candle in the Wind 1997 '' The 1991 film documentary, Two Rooms, described the John / Taupin writing style, which involves Taupin writing the lyrics on his own and John then putting them to music, with no further interaction between the two. The process is still fundamentally the same, with John composing to Taupin 's words, but the two interact on songs far more today, with Taupin joining John in the studio as the songs are written and occasionally during recording sessions. Taupin and John had their first Broadway musical open in March 2006 with Lestat: The Musical. Taupin wrote lyrics for 10 songs (and an 11th completed non-album track "Across the River Thames '') for John 's 2006 album The Captain & The Kid (sequel to Captain Fantastic and the Brown Dirt Cowboy) and appeared on the cover with him for the first time marking their 40th anniversary of working together. ("Across the River Thames '' was issued as an Internet - only download as a bonus with certain editions of The Captain & the Kid.) On 25 March 2007, Taupin made a surprise appearance at John 's 60th birthday celebration at Madison Square Garden, briefly discussing their 40 - year songwriting partnership. Of Taupin 's importance to their careers, as recorded on the Elton 60 - Live at Madison Square Garden DVD, John told the audience that without Taupin there probably would n't be an Elton John as the public has come to know him. Taupin and John also composed several songs for The Union, a collaboration album between Elton and his longtime hero Leon Russell released in October 2010. They also collaborated on five original songs for the 2011 Miramax movie Gnomeo and Juliet, including the Golden Globe - nominated "Hello Hello ''. The duo collaborated on their 31st studio album, "The Diving Board, '' which was released in September 2013. 
Their next studio album, "Wonderful Crazy Night, '' was released in February 2016. In addition to writing for Elton John, Taupin has also written lyrics for use by other composers, with notable successes including "We Built This City '', which was recorded by Starship, and "These Dreams, '' recorded by Heart (both of which were collaborations with English composer / musician Martin Page). In 1978, he co-wrote the album From the Inside with Alice Cooper. Taupin has also produced American Gothic for singer - songwriter David Ackles. Released in 1972, it did not enjoy big sales, but the album was highly acclaimed by music critics in the US and UK. The influential British music critic Derek Jewell of the UK Sunday Times described the album as being "the Sgt. Pepper of folk. '' Of Ackles ' four albums, it was the only one recorded in England rather than in the United States. Taupin and Ackles had become acquainted when Ackles was selected to be the co-headlining act for Elton John 's 1970 American debut at the Troubadour in Los Angeles. Taupin was mentioned specifically as being one of the reasons American Gothic was selected by the writers and editors for inclusion in the book, 1001 Albums You Must Hear Before You Die. He also collaborated on the book Burning Cold with photographer Gary Bernstein. In the late 1980s and early 1990s Taupin also collaborated with French American musician, Josquin Des Pres (Producer and Bassist) on at least 13 songs in his collection that have been performed and recorded by artists worldwide. In 2002, Willie Nelson and Kid Rock recorded "Last Stand in Open Country '' for Nelson 's album The Great Divide. The song was the title track of the first album from Taupin 's band Farm Dogs (see below). Nelson 's album included two other Taupin songs, "This Face '' and "Mendocino County Line ''. The latter song, a duet between Nelson and Lee Ann Womack, was made into a video and released as the album 's first single. The song won the 2003 Grammy for best vocal collaboration in country music. In 2004, he co-wrote Courtney Love 's song "Uncool '', from her 2004 debut solo album America 's Sweetheart. In 2005, he co-wrote the title track to What I Really Want For Christmas with Brian Wilson for his first seasonal album. In 2006, he won a Golden Globe Award for his lyrics to the song "A Love That Will Never Grow Old '' from the film Brokeback Mountain. The music of the song was composed by Argentine producer and songwriter Gustavo Santaolalla. In 1971, Taupin recorded a spoken - word album entitled Taupin, in which he recites some of his early poems against a background of impromptu, sitar - heavy music created by some members of Elton 's band, including Davey Johnstone and Caleb Quaye. Side One is entitled "Child '' and contains poems about his early childhood in southern Lincolnshire. The first poem, "The Greatest Discovery, '' which looks at his birth from the perspective of his older brother Tony, was also set to music by Elton John and included on Elton 's eponymous second album, Elton John. There are poems about Taupin 's first two childhood homes, Flatters and Rowston Manor, and others about his relationship with his brother and grandfather. Side Two includes a variety of poems of varying obscurity, from a marionette telling her own story to a rat catcher who falls victim to his prey. Taupin stated in interviews that he was n't pleased with the album. In 1980, Taupin recorded his first album as a singer, He Who Rides the Tiger. 
The album failed to make a dent in the charts. Taupin later suggested in interviews that he did n't have the creative control he would have liked over the album. In 1987, he recorded another album entitled Tribe. The songs were co-written with Martin Page. "Citizen Jane '' and "Friend of the Flag '' were released as singles. Videos of both singles featured Rene Russo, the sister of Toni, his wife at that time. In 1996, Taupin formed a band called Farm Dogs, whose two albums were conscious (and successful) throwbacks to the grittier, earthier sound of Tumbleweed Connection. While Taupin wrote the lyrics, the music was a collaborative effort among the band members. Their first album, 1996 's Last Stand in Open Country, received critical praise but little airplay. The title track was later recorded by Willie Nelson and Kid Rock for Nelson 's 2002 album The Great Divide. In 1998, Farm Dogs released its second and final album, Immigrant Sons. The album was unsuccessful despite a tour of small clubs across America. In 1973, Taupin collected all his lyrics up through the Goodbye Yellow Brick Road album into a book entitled Bernie Taupin: The One Who Writes the Words for Elton John. In addition to the lyrics from the albums, this book contained the lyrics to all the single B - sides, various rarities, and Taupin 's 1970 spoken - word album. The songs are illustrated by various artists, friends, and celebrity guests such as John Lennon and Joni Mitchell. The book is in black & white except for the cover. In 1977, Taupin collaborated with rock photographer David Nutter on "It 's A Little Bit Funny '', adding text and helping chronicle Elton John 's year - long, "Louder Than Concorde, But Not Quite As Pretty '' world concert tour. The now - collectible book was published in hard and soft cover editions by Penguin Books. It collects the better part of one year 's worth of personal adventures and memories of Elton and the band, aboard his private plane, on the beaches of Barbados, at backstage gatherings and in some quieter off - stage moments with friends (including some famous faces that Elton and Bernie met and palled around with in their travels). In 1978, Taupin also appeared in an episode of The Hardy Boys / Nancy Drew Mysteries, "The Hardy Boys & Nancy Drew Meet Dracula, '' singing backup to Shaun Cassidy. In 1988, Taupin published an autobiography of his childhood entitled A Cradle of Haloes: Sketches of a Childhood. The book was released only in the UK. It tells the tale of a childhood fuelled by fantasy in rural Lincolnshire in the 1950s and 1960s, ending in 1969 as Taupin gets on the train to seek his fortune in London. In 1991, Taupin self - published a book of poems called The Devil at High Noon. In 1994, Taupin 's lyrics up through the Made in England album were collected into a hardcover book entitled Elton John & Bernie Taupin: The Complete Lyrics, published by Hyperion. However, it does n't appear that Taupin was intimately involved in this project, as it contains multiple misspellings and outright misrenderings of the lyrics. It is also missing some of the rarities and B - sides found in the earlier collection. As with the 1973 collection, the songs are illustrated by various artists, this time in full colour throughout. In 1992, Taupin was asked to produce a benefit for AIDS Project Los Angeles. 
The event featured no songs written by the writer, instead opening with an acoustic set of performances of material chosen by the performers, followed by selections from the musical West Side Story, chosen for its "timeless message of tolerance that is relevant to every decade. '' In addition to his music, much of Taupin 's time is spent creating visual art; he began displaying and selling his original artwork in 2010. Consisting of large, mixed media, contemporary works, the art has been shown in select galleries and art fairs across the United States. Taupin has been married four times and divorced three: Maxine Feibelman (1971 -- 76); Toni Lynn Russo (1979 -- 91), sister of actress Rene Russo; Stephanie Haymes (1993 -- 98), daughter of entertainers Dick Haymes and Fran Jeffries; and Heather Kidd (March 2004 -- present), with whom he has two daughters, Charley Indiana and Georgey Devon. Taupin moved to Southern California from England in the mid-1970s. Since the 1980s, he has been living on a ranch north of Los Angeles near Santa Ynez, California.
who plays cruella in once upon a time
Victoria Smurfit - wikipedia Victoria Smurfit (born 31 March 1974) is an Irish actress. She is known for playing Orla O'Connell in the BBC television series Ballykissangel and Detective Chief Inspector Roisin Connor in the ITV police procedural Trial & Retribution. Victoria Smurfit is part of the Smurfit family, one of the richest in Ireland, through Smurfit Kappa. The family, headed by Victoria 's uncle Michael Smurfit, sponsors a number of sporting events including the Smurfit European Open and the Champion Hurdle. The family is also associated with Smurfit Business School in University College Dublin (UCD). She was educated at two Anglican schools, Saint Columba 's College, Dublin and St. George 's School, Ascot, England. She did an A level in theatre studies and then went to the Bristol Old Vic theatre school. Smurfit gained fame for her role as Orla O'Connell in the BBC television series Ballykissangel from 1998 -- 99. She played Nina in the 2003 movie Bulletproof Monk. From 2003 to 2009, Smurfit portrayed the lead role of Detective Chief Inspector Roisin Connor in the ITV police procedural Trial & Retribution. She also guest starred in the BBC Radio 4 series Baldi. In 2011, Smurfit appeared in the Agatha Christie 's Marple television episode "The Mirror Crack 'd from Side to Side ''. In 2013, Smurfit costarred as Lady Jane Wetherby in the NBC television period drama Dracula. In 2014, she began playing the recurring guest role of villainess Cruella de Vil on ABC 's Once Upon a Time. She is currently shooting for her role in Homecoming, a film she has described as a "mean girls for grownups. '' She plays Nikki, the "head mean girl. '' Smurfit married advertising executive Douglas Baxter on 29 July 2000 in Surrey, England. She gave birth to their first child, daughter Evie Dorothy Baxter in Dublin, Ireland on 2 November 2004. A second daughter, Ridley Belle Baxter was born in May 2007. Their third child, a boy, was born in November 2008 and named Flynn Alexander Baxter. In 2012, the family relocated to Santa Monica, California. In February 2015 it was announced that Smurfit and her husband had filed for divorce. She writes an opinion blog for The Dubliner, which often features anecdotes from her personal life, and is a patron of the children 's charity World Vision Ireland.
where did the idea for jumanji come from
Jumanji - wikipedia Jumanji is a 1995 American fantasy adventure film directed by Joe Johnston. It is an adaptation of the 1981 children 's book of the same name by Chris Van Allsburg. The film was written by Van Allsburg, Greg Taylor, Jonathan Hensleigh, and Jim Strain and stars Robin Williams, Bonnie Hunt, Kirsten Dunst, Bradley Pierce, Jonathan Hyde, Bebe Neuwirth, and David Alan Grier. The story centers on 12 - year - old Alan Parrish, who becomes trapped in a board game while playing with his best friend Sarah Whittle in 1969. Twenty - six years later, in 1995, siblings Judy and Peter Shepherd find the game, begin playing and then unwittingly release the now - adult Alan. After tracking down Sarah, the quartet resolve to finish the game in order to reverse all of the destruction it has caused. The film was released on December 15, 1995. Despite the film receiving generally unfavorable reviews from critics, it was a box office success, earning $263 million worldwide on a budget of approximately $65 million and it became the 10th highest - grossing film of 1995. A similar film, marketed as a spiritual successor to Jumanji, titled Zathura: A Space Adventure, was released in 2005 and was also adapted from a Van Allsburg book that was more directly connected to the Jumanji book. It is part of the Jumanji franchise and spawned a direct sequel, Jumanji: Welcome to the Jungle (2017), as well as an animated television series, which aired from 1996 to 1999. In 1869, near Brantford, New Hampshire, two brothers bury a chest and hope that no one will ever find it. A century later in 1969, Alan Parrish escapes a gang of bullies and retreats to a shoe factory owned by his father, Sam. An employee, Carl Bentley, shows him a new shoe prototype he made by himself. Alan misplaces the shoe and damages a machine, but Carl takes responsibility and loses his job. After the bullies attack Alan and steal his bicycle, he follows the sound of tribal drumbeats to a construction site. There he finds the chest, which contains a board game called "Jumanji '', which he brings home. Sam and Alan argue about Alan attending boarding school, causing Alan to plan to run away. His friend Sarah Whittle arrives to return his bicycle, and Alan shows her Jumanji and invites her to play. With each roll of the dice, the game piece moves by itself and a cryptic message describing the roll 's outcome appears in the crystal ball at the center of the board. Sarah reads the first message on the board and hears an eerie sound. Alan then unintentionally rolls the dice after being startled by the chiming clock; a message tells him to wait in a jungle until someone rolls a 5 or 8. Alan is sucked into the game, and a colony of bats chases Sarah out of the mansion. Twenty - six years later in 1995, Judy and Peter Shepherd move into the vacant Parrish mansion with their aunt Nora, their parents having recently died in a car accident in Canada. Soon after, Judy and Peter find Jumanji in the attic and begin playing it. Their rolls release a swarm of giant mosquitoes and a troop of monkeys. The game rules state that everything will be restored when the game ends, so they continue playing. Peter 's next roll releases both a lion and an adult Alan. Alan leaves the house, where he meets Carl, who has been working as a police officer. 
Alan, Judy and Peter go to the now derelict shoe factory, where a vagrant tells Alan that after his disappearance, Sam and his wife abandoned the business and searched for Alan until their deaths just four years ago. Realizing they need Sarah to finish the game, the three locate her, now suffering from posttraumatic stress disorder due to Alan 's disappearance, and persuade her to join them. Sarah 's roll releases fast - growing carnivorous plants, and Alan 's next roll releases a big - game hunter, Van Pelt. Judy 's next roll releases a stampede of various animals, and a pelican snatches the game. Peter retrieves it, but Alan is arrested by Carl. Later, Van Pelt catches up to Alan 's friends and steals the game. Peter, Sarah, and Judy follow Van Pelt to a department store, where they fight him, retrieve the game, and reunite with Alan. When the four return to the mansion, it is now completely overrun by jungle wildlife. As the game releases further calamities and Van Pelt returns, Alan finally rolls a winning turn, causing everything that had happened as a result of the game to be reversed. Alan and Sarah return to 1969 as children, but have full memories of the future events. Alan reconciles with his father and, to prevent Carl from being fired, admits that he was responsible for the shoe that damaged the factory 's machine; Sam tells his son that he does not have to attend boarding school. Alan and Sarah throw Jumanji into a river and then share a kiss. Twenty - six years later, Alan and Sarah are married and expecting their first child. Alan runs the factory, his parents having retired. He and Sarah reunite with Judy and Peter (who have no memories of the game), and meet their parents Jim and Martha during a Christmas party. The Parrishes offer Jim a job, convince the Shepherds to cancel the upcoming ski trip that would have led to their fatal car accident, and begin a friendship with them. On a beach, two French - speaking girls hear drumbeats as Jumanji lies partially buried in the sand. While Peter Guber was visiting Boston, he invited author Chris Van Allsburg, who lives in Providence, Rhode Island, to option his book. Van Allsburg wrote one of the screenplay 's drafts, which he described as "sort of trying to imbue the story with a quality of mystery and surrealism ''. Tristar Pictures agreed to finance the film on the condition that Robin Williams play the starring role. However, Williams turned down the role based on the first script he was given. Only after director Joe Johnston and screenwriters Jonathan Hensleigh, Greg Taylor and Jim Strain undertook extensive rewrites did Williams accept. Johnston had reservations over casting Williams because of the actor 's reputation for improvisation, fearing that he would n't adhere to the script. However, Williams understood that it was "a tightly structured story '' and filmed the scenes as outlined in the script, often filming duplicate scenes afterwards where he was allowed to improvise with Bonnie Hunt. Shooting took place in various New England locales, mainly Keene, New Hampshire, which represented the story 's fictional town of Brantford, New Hampshire, and North Berwick, Maine, where the Olde Woolen Mill stood in for the Parrish Shoe Factory. Additional filming took place in Vancouver, British Columbia, where a mock - up of the Parrish house was built.
Special effects were a combination of more traditional techniques like puppetry and animatronics (provided by Amalgamated Dynamics) with state - of - the - art digital effects overseen by Industrial Light & Magic. ILM developed two new software programs specially for Jumanji, one called iSculpt, which allowed the illustrators to create realistic facial expressions on the computer - generated animals in the film, and another that for the first time created realistic digital hair, used on the monkeys and the lion. Actor Bradley Pierce (Peter) underwent three and a half hours of prosthetic makeup application daily for a period of two and a half months to film the scenes where he transformed into a monkey. The film was dedicated to visual effects supervisor Stephen L. Price, who died before the film 's release. Jumanji was released in theatres on December 15, 1995. Jumanji was first released on VHS on May 14, 1996, and re-released as a Collector 's Series DVD on January 25, 2000. This was followed by an initial Blu - ray release on June 28, 2011. The Blu - ray was re-released as a 20th Anniversary Edition on September 14, 2015 (with the same transfer found on the 2011 release). A restored version was released on December 5, 2017 on Blu - ray and 4K UHD to coincide with the premiere of the sequel. Commercial songs from film, but not on soundtrack Jumanji did well at the box office, earning $100.5 million in the United States and Canada and an additional $162.3 million overseas, bringing the worldwide gross to $262.8 million. On Rotten Tomatoes, the film has an approval rating of 53 % from 36 reviews, with an average rating of 5.7 / 10. On Metacritic the film has a weighted average score of 39 out of 100, based on 18 critics, indicating "generally unfavorable reviews ''. Audiences polled by CinemaScore gave the film an average grade of "A − '' on an A+ to F scale. Van Allsburg of the Los Angeles Times approved of the film despite the changes from the book and its not being as "idiosyncratic and peculiar '', declaring that "(t) he film is faithful in reproducing the chaos level that comes with having a jungle animal in the house. It 's a good movie. '' Zathura: A Space Adventure, the spiritual successor that was marketed as being from the same continuity with varied uses of the tagline, "From the world of Jumanji '' was released as a feature film in 2005. Unlike the book Zathura, the film makes no references to the previous film outside of the marketing statement. Both films are based on books written by Chris Van Allsburg. With the films being based on books that take place in the same series, the films vaguely make reference to that concept from the novels by having a similar concept and themes. In July 2012, rumors emerged that a remake of the film was already in development. In a conversation with The Hollywood Reporter, Columbia Pictures president Doug Belgrad said: "We 're going to try and reimagine Jumanji and update it for the present. '' On August 1, 2012, it was confirmed that Matthew Tolmach would be producing the new version alongside William Teitler, who produced the original film. On August 5, 2015, Sony Pictures Entertainment announced their plans to film a remake and set the release date as December 25, 2016. Internet reception to this announcement was negative, with some posters remarking that this announcement came too soon after the death of Williams. The news was also heavily criticized by Bradley Pierce and E! 
News, the latter of which stated that they felt that the remake was "unnecessary and kind of insulting ''. On January 14, 2016, it was announced that Jake Kasdan will direct the remake. On January 20, 2016, it was announced that the remake would be pushed back to July 28, 2017. In April 2016, Dwayne Johnson signed on to produce and star in the remake, while Variety, TheWrap and Deadline.com reported that Kevin Hart, Jack Black, and Nick Jonas were in early talks to co-star. In August 2016, Dwayne Johnson confirmed that the film would not be a remake, rather a continuation of the 1995 film and that it would be filmed in Hawaii. In August, Johnson announced on Instagram that Karen Gillan has been cast in the film. In September 2016, Johnson released a concept art of his character "The Smoldering '' Dr. Bravestone. The film, officially titled Jumanji: Welcome to the Jungle, was released on December 20, 2017. Jack Black, Dwayne Johnson, and Nick Jonas have discussed in interviews what a third installment would be about. Karen Gillan has also said that an alternate ending for Welcome to the Jungle would have left the door open for another film. An animated television series was produced between 1996 and 1999. While it borrowed heavily from the film -- incorporating various characters, locations and props, and modeling Alan 's house and the board game on the way they appeared in the film -- the series retcons rather than using the film 's storyline. In the series version, on each turn the players are given a "game clue '' and then sucked into the jungle until they solve it. Alan is stuck in Jumanji because he has not seen his clue. Judy and Peter try to help him leave the game, providing their motivation during the series. Sarah is absent from the series. Jumanji is a board game originally published by Milton Bradley in the US and MB Spiele in Germany in 1995. Jumanji is a North American - exclusive game for Microsoft Windows that was released on 1996 and based on the film. It was developed by Studio Interactive and published by Philips Media. It contains five different action - arcade - based mini-games that are based on popular scenes from the film. Clips of cutscenes from the film can also be viewed. There are five different mini-games that the player can choose from, with different rules and objectives. Animals from the film provide instructions to the player for each mini-game, except for the Treasure Maze mini-game, where the Jumanji board game spirit provides instructions instead. Notably, players can not play the actual Jumanji board game from the film. All of these mini-games contain rounds (or levels) and when players reach a goal, that level is cleared and the player advances to a more difficult version of the mini-game. The player must try to score as many points as possible, and set the best high score. A video game based on the film was released in Europe for the PlayStation 2 in 2006. In 2007, Fujishoji released a Pachinko game, using clips from the film and also used 3D rendered CGI anime character designs for the game as part of the screen interaction. In 2005 Jumanji was listed 48 in Channel 4 's 100 Greatest Family Films documentary just behind Dumbo, The Lion King and Spider - Man. In 2011, Robin Williams recorded an audiobook for Van Allsburg 's book 's 30th edition to coincide its release. In 2014, a game board prop from the movie was auctioned on eBay and sold for US $60,800.
which of the following would not increase german exports to the united states
Foreign trade of the United States - wikipedia Foreign trade of the United States comprises the international imports and exports of the United States, one of the world 's most significant economic markets. The country is among the top three global importers and exporters. The regulation of trade is constitutionally vested in the United States Congress. After the Great Depression, the country emerged as among the most significant global trade policy - makers, and it is now a partner to a number of international trade agreements, including the General Agreement on Tariffs and Trade (GATT) and the World Trade Organization (WTO). Gross U.S. assets held by foreigners were $16.3 trillion as of the end of 2006 (over 100 % of GDP). The country has trade relations with many other countries; within these, trade with Europe and Asia is predominant. To fulfill the demands of the industrial sector, the country has to import mineral oil and iron ore on a large scale. Machinery, cotton yarn, toys, mineral oil, lubricants, steel, tea, sugar, coffee, and many more items are traded. The country 's export list includes food grains such as wheat, corn, and soybeans, as well as aircraft, cars, computers, paper, and machine tools required for different industries. In 2016, the United States ran a current account deficit of $469.4 billion. The Constitution gives Congress express power over the imposition of tariffs and the regulation of international trade. As a result, Congress can enact laws including those that: establish tariff rates; implement trade agreements; provide remedies against unfairly traded imports; control exports of sensitive technology; and extend tariff preferences to imports from developing countries. Over time, and under carefully prescribed circumstances, Congress has delegated some of its trade authority to the Executive Branch. Congress, however, has, in some cases, kept tight reins on the use of this authority by requiring that certain trade laws and programs be renewed; and by requiring the Executive Branch to issue reports to Congress to monitor the implementation of the trade laws and programs. The authority of Congress to regulate international trade is set out in Article I, Section 8, Paragraph 1 of the United States Constitution: The Congress shall have power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and to promote the general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States; The Embargo Act of 1807 was designed to force Britain to rescind its restrictions on American trade, but failed, and was repealed in early 1809. During the Civil War period, leaders of the Confederacy were confident that Britain would come to their aid because of British reliance on Southern cotton. The Union was able to avoid this through skillful use of diplomacy and threats to other aspects of European - U.S. trade relations. According to Paul Bairoch, since the end of the 18th century, the United States has been "the homeland and bastion of modern protectionism ''. In fact, the United States never adhered to free trade until 1945. A very protectionist policy was adopted as early as the presidency of George Washington by Alexander Hamilton, Secretary of the Treasury from 1789 to 1795 and author of the "Report on Manufactures '' (1792), which called for customs barriers to allow American industrial development.
This text was one of the references of the German economist Friedrich List (1789 - 1846). This policy remained throughout the 19th century and the overall level of tariffs was very high (close to 50 % in 1830). The victory of the protectionist North states against the free trade states of the South at the end of the Civil War (1861 - 1865) perpetuated this trend, even during periods of free trade in Europe (1860 - 1880). While the United States has always participated in international trade, it did not take a leading role in global trade policy - making until the Great Depression. Congress and The Executive Branch came into conflict in deciding the mix of trade promotion and protectionism. In order to stimulate employment, Congress passed the Reciprocal Trade Agreements Act of 1934, allowing the executive branch to negotiate bilateral trade agreements for a fixed period of time. During the 1930s the amount of bilateral negotiation under this act was fairly limited, and consequently did little to expand global trade. Near the end of the Second World War U.S. policy makers began to experiment on a broader level. In the 1940s, working with the British government, the United States developed two innovations to expand and govern trade among nations: the General Agreement on Tariffs and Trade (GATT) and the International Trade Organization (ITO). GATT was a temporary multilateral agreement designed to provide a framework of rules and a forum to negotiate trade barrier reductions among nations. The growing importance of international trade led to the establishment of the Office of the U.S. Trade Representative in 1963 by Executive Order 11075, originally called The Office of the Special Representative for Trade Negotiations. United States trade policy has varied widely through various American historical and industrial periods. As a major developed nation, the U.S. has relied heavily on the import of raw materials and the export of finished goods. Because of the significance for American economy and industry, much weight has been placed on trade policy by elected officials and business leaders. The 1920s marked a decade of economic growth in the United States following a Classical supply side policy. U.S. President Warren Harding signed the Emergency Tariff of 1921 and the Fordney -- McCumber Tariff of 1922. Harding 's policies reduced taxes and protected U.S. business and agriculture. Following the Great Depression and World War II, the United Nations Monetary and Financial Conference brought the Bretton Woods currency agreement followed by the economy of the 1950s and 1960s. In 1971, President Richard Nixon ended U.S. ties to Bretton Woods, leaving the U.S. with a floating fiat currency. The stagflation of the 1970s saw a U.S. economy characterized by slower GDP growth. In 1988, the United States ranked first in the world in the Economist Intelligence Unit "quality of life index '' and third in the Economic Freedom of the World Index. Over the long run, nations with trade surpluses tend also to have a savings surplus. The U.S. generally has developed lower savings rates than its trading partners, which have tended to have trade surpluses. Germany, France, Japan, and Canada have maintained higher savings rates than the U.S. over the long run. Some economists believe that GDP and employment can be dragged down by an over-large deficit over the long run. Others believe that trade deficits are good for the economy. 
The opportunity cost of a forgone tax base may outweigh perceived gains, especially where artificial currency pegs and manipulations are present to distort trade. In 2006, the primary economic concerns focused on: high national debt ($9 trillion), high non-bank corporate debt ($9 trillion), high mortgage debt ($9 trillion), high financial institution debt ($12 trillion), high unfunded Medicare liability ($30 trillion), high unfunded Social Security liability ($12 trillion), high external debt (amount owed to foreign lenders) and a serious deterioration in the United States net international investment position (NIIP) (- 24 % of GDP), high trade deficits, and a rise in illegal immigration. These issues have raised concerns among economists, and unfunded liabilities were mentioned as a serious problem facing the United States in the President 's 2006 State of the Union address. On June 26, 2009, Jeff Immelt, the CEO of General Electric, called for the U.S. to increase its manufacturing base employment to 20 % of the workforce, commenting that the U.S. had outsourced too much in some areas and could no longer rely on the financial sector and consumer spending to drive demand. In 1985, the U.S. had just begun to run a growing trade deficit with China. During the 1990s, the U.S. trade deficit widened into a persistent long - run deficit, mostly with Asia. By 2012, the U.S. trade deficit, fiscal budget deficit, and federal debt had increased to record or near - record levels following the implementation of broad unconditional or unilateral U.S. free trade policies and formal trade agreements in the preceding decades. The US last had a trade surplus in 1975. However, recessions may cause short - run anomalies in otherwise rising trade deficits. The balance of trade in the United States has been a concern among economists and business people. Warren Buffett, founder of Berkshire Hathaway, was quoted in the Associated Press (January 20, 2006) as saying "The U.S. trade deficit is a bigger threat to the domestic economy than either the federal budget deficit or consumer debt and could lead to political turmoil... Right now, the rest of the world owns $3 trillion more of us than we own of them. '' In both a 1987 guest editorial to the Omaha World - Herald and a more detailed 2003 Fortune article, Buffett proposed a tool called Import Certificates as a solution to the United States ' problem and a means to ensure balanced trade: "The rest of the world owns a staggering $2.5 trillion more of the U.S. than we own of other countries. Some of this $2.5 trillion is invested in claim checks -- U.S. bonds, both governmental and private -- and some in such assets as property and equity securities. '' Today the United States ' largest trading partner is Canada. China has seen substantial economic growth in the past 50 years, and through a nuclear - security summit that took place in early 2010, President Obama hoped to ensure another 50 years of growth between the two countries. On April 19, 2010, President Barack Obama met with China 's President Hu Jintao to discuss trade policies between the two countries. Though the US trade deficit has been stubborn and tends to be the largest of any nation by dollar volume, even in its most extreme months as measured by percent of GDP there are nations that are far more noteworthy. A case in point: after the 2015 Nepal earthquake, Nepal 's trade gap (in goods and services) reached 33.3 % of GDP, although heavy remittances considerably offset that number.
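The cross-country comparison in the preceding paragraph is simply the trade gap expressed as a ratio to GDP. The short Python sketch below illustrates the arithmetic; the input figures are round, hypothetical numbers chosen only to show why a modest deficit can loom large relative to a small GDP, and are not official statistics.

```python
# Minimal sketch of the trade-gap-to-GDP comparison discussed above.
# All figures are hypothetical round numbers (in billions of USD), chosen only
# to illustrate the arithmetic; they are not official statistics.

def trade_gap_share(exports: float, imports: float, gdp: float) -> float:
    """Return the trade gap (imports minus exports) as a share of GDP."""
    return (imports - exports) / gdp

# A large economy: a big deficit in dollar terms, but modest relative to GDP.
large = trade_gap_share(exports=2_300.0, imports=2_800.0, gdp=18_700.0)

# A small economy: a small deficit in dollar terms, but large relative to GDP.
small = trade_gap_share(exports=1.0, imports=8.0, gdp=21.0)

print(f"Large economy: {large:.1%} of GDP")  # about 2.7% of GDP
print(f"Small economy: {small:.1%} of GDP")  # about 33.3% of GDP
```

The dollar gap of the large economy here is roughly seventy times bigger, yet as a share of GDP it is more than ten times smaller, which is the point the paragraph makes about Nepal.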
According to the US Department of Commerce Bureau of Economic Analysis (BEA), January 27, 2017 report, the GDP "increased 4.0 percent, or $185.5 billion, in the fourth quarter of 2016 to a level of $18,860.8 billion. '' The main customs territory of the United States includes the 50 states, the District of Columbia, and the territory of Puerto Rico, with the exception of over 200 foreign trade zones designated to encourage economic activity. People and goods entering this territory are subject to inspection by U.S. Customs and Border Protection. The remaining insular areas are separate customs territories administered largely by local authorities: Transportation of certain living things or agricultural products may be prohibited even within a customs territory. This is enforced by U.S. Customs and Border Protection, the federal Animal and Plant Health Inspection Service, and even state authorities such as the California Department of Food and Agriculture. Gross U.S. assets held by foreigners were $16.3 trillion as of the end of 2006 (over 100 % of GDP). The U.S. net international investment position (NIIP) became a negative $2.5 trillion at the end of 2006, or about minus 19 % of GDP. This figure rises as long as the US maintains an imbalance in trade, when the value of imports substantially outweighs the value of exports. This external debt does not result mostly from loans to Americans or the American government, nor is it consumer debt owed to non-US creditors. It is an accounting entry that largely represents US domestic assets purchased with trade dollars and owned overseas, largely by US trading partners. For countries like the United States, a large net external debt is created when the value of foreign assets (debt and equity) held by domestic residents is less than the value of domestic assets held by foreigners. In simple terms, as foreigners buy property in the US, this adds to the external debt. When this occurs in greater amounts than Americans buying property overseas, nations like the United States are said to be debtor nations, but this is not conventional debt like a loan obtained from a bank. If the external debt represents foreign ownership of domestic assets, the result is that rental income, stock dividends, capital gains and other investment income is received by foreign investors, rather than by U.S. residents. On the other hand, when American debt is held by overseas investors, they receive interest and principal repayments. As the trade imbalance puts extra dollars in hands outside of the U.S., these dollars may be used to invest in new assets (foreign direct investment, such as new plants) or be used to buy existing American assets such as stocks, real estate and bonds. With a mounting trade deficit, the income from these assets increasingly transfers overseas. Of major concern is the magnitude of the NIIP (or net external debt), which is larger than those of most national economies. Fueled by the sizable trade deficit, the external debt is so large that economists are concerned over whether the current account deficit is unsustainable. A complicating factor is that trading partners such as China, depend for much of their economy on exports, especially to America. There are many controversies about the current trade and external debt situation, and it is arguable whether anyone understands how these dynamics will play out in a historically unprecedented floating exchange rate system. While various aspects of the U.S. 
economic profile have precedents in the situations of other countries (notably government debt as a percentage of GDP), the sheer size of the U.S. and the integral role of the US economy in the overall global economic environment create considerable uncertainty about the future. According to economists such as Larry Summers and Paul Krugman, the enormous inflow of capital from China is one of the causes of the global financial crisis of 2008 -- 2009. China had been buying huge quantities of dollar assets to keep its currency value low and its export economy humming, which caused American interest rates and saving rates to remain artificially low. These low interest rates, in turn, contributed to the United States housing bubble because when mortgages are cheap, house prices are inflated as people can afford to borrow more. The United States is a partner to many trade agreements, shown in the chart below and the map to the right. The United States has also negotiated many Trade and Investment Framework Agreements, which are often precursors to free trade agreements. It has also negotiated many bilateral investment treaties, which concern the movement of capital rather than goods. The U.S. is a member of several international trade organizations. The purpose of joining these organizations is to come to agreement with other nations on trade issues, although there is domestic political controversy as to whether or not the U.S. government should be making these trade agreements in the first place. These organizations include: American foreign trade is regulated internally by: (Tables and charts in the original article list exports and imports for 2008 and 2009, the proportion of US exports to imports from 1960 to 2004, U.S. exports and imports of goods and services from 1960 to 2004, and US exports and imports of goods by country in 2004, excluding services. The United States exported $1.44 trillion to other countries in 2014 and imported $2.13 trillion.)
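Stepping back to the external-debt accounting described earlier in this article, the net international investment position is a simple identity: assets abroad held by U.S. residents minus U.S. assets held by foreigners. Below is a back-of-the-envelope Python sketch using the year-end 2006 figures quoted in the article (gross foreign-held U.S. assets of $16.3 trillion and an NIIP of roughly -$2.5 trillion); the implied U.S.-owned foreign assets and the GDP value are assumptions of this sketch, derived or estimated for illustration rather than quoted in the text.

```python
# Minimal sketch of the net international investment position (NIIP) identity
# discussed in this article:
#   NIIP = assets abroad held by U.S. residents - U.S. assets held by foreigners
# The 16.3 and -2.5 trillion figures are quoted in the article for year-end
# 2006; the derived asset total and the GDP value are assumptions used only to
# illustrate the arithmetic.

foreign_held_us_assets = 16.3   # trillions of USD (quoted in the article)
niip = -2.5                     # trillions of USD (quoted in the article)

# Implied U.S.-owned assets abroad (derived here, not quoted in the article):
us_held_foreign_assets = niip + foreign_held_us_assets
print(f"Implied U.S.-owned foreign assets: ~${us_held_foreign_assets:.1f} trillion")

# NIIP as a share of GDP, taking 2006 GDP as roughly $13.5 trillion
# (an assumption of this sketch):
gdp = 13.5
print(f"NIIP as a share of GDP: {niip / gdp:.0%}")  # about -19%, as in the article
```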
when were bold 2 in 1 pearls released
Costume jewelry - wikipedia Costume jewelry, trinkets, fashion jewelry, junk jewelry, fake jewelry, or fallalery is jewelry manufactured as ornamentation to complement a particular fashionable costume or garment as opposed to "real '' (fine) jewelry which may be regarded primarily as collectibles, keepsakes, or investments. The term costume jewelry dates back to the early 20th century. It reflects the use of the word "costume '' to refer to what is now called an "outfit ''. Originally, costume or fashion jewelry was made of inexpensive simulated gemstones, such as rhinestones or lucite, set in pewter, silver, nickel, or brass. During the depression years, rhinestones were even down - graded by some manufacturers to meet the cost of production. During the World War II era, sterling silver was often incorporated into costume jewelry designs primarily because: This resulted in a number of years during which sterling silver costume jewelry was produced and some can still be found in today 's vintage jewelry marketplace. Modern costume jewelry incorporates a wide range of materials. High end crystals, cubic zirconia simulated diamonds, and some semi-precious stones are used in place of precious stones. Metals include gold - or silver - plated brass, and sometimes vermeil or sterling silver. Lower - priced jewelry may still use gold plating over pewter, nickel or other metals; items made in countries outside the United States may contain lead. Some pieces incorporate plastic, acrylic, leather, or wood. Costume jewelry can be characterized by the period in history in which it was made. The Art Deco movement was an attempt to combine the harshness of mass production with the sensitivity of art and design. It was during this period that Coco Chanel introduced costume jewelry to complete the costume. The Art Deco movement died with the onset of the Great Depression and the outbreak of World War II. According to Schiffer, some of the characteristics of the costume jewelry in the Art Deco period were: In the Retro period, designers struggled with the art versus mass production dilemma. Natural materials merged with plastics. The retro period primarily included American - made jewelry, which has a distinct American look. With the war in Europe, many European jewelry firms were forced to shut down. Many European designers emigrated to the U.S. since the economy was recovering. According to Schiffer, some of the characteristics of the costume jewelry in the Retro period were: In the Art Modern period following World War II, jewelry designs became more traditional and understated. The big, bold styles of the Retro period went out of style and were replaced by the more tailored styles of the 1950s and 1960s. According to Schiffer, some of the characteristics of the costume jewelry in the Art Modern period were: With the advent of the Mod period came "Body Jewelry ''. Carl Schimel of Kim Craftsmen Jewelry was at the forefront of this style. While Kim Craftsmen closed in the early 1990s, many collectors still forage for their items at antique shows and flea markets. The Boston Museum of Fine Art recently displayed Carl Schimel 's "Chastity Belt '' created in 1969 in their "When High Fashion Inhaled The ' 60s -- ' Hippie Chic ' '' at MFA. 
This piece and exhibit were reviewed by Greg Cook of Boston NPR station WBUR, who wrote: "Carl Schimel 's circa 1969 base metal ' Chastity Belt ' -- displayed here atop a black bodystocking -- imitates medieval designs in its erotic chains and medallion (' a container meant to hold birth control pills, ' according to the MFA). '' The review is available at http://artery.wbur.org/2013/07/20/hippie-chic-mfa, and a photo of the piece can be seen at http://media.wbur.org/wordpress/18/files/2013/07/picHippieChicCook_0176.jpg. Costume jewelry has been part of culture for almost 300 years. During the 18th century, jewelers began making pieces with inexpensive glass. In the 19th century, costume jewelry made of semi-precious material came into the market. Jewels made of semi-precious material were more affordable, and this affordability gave common people the chance to own costume jewelry. But the real golden era for costume jewelry began in the middle of the 20th century. The new middle class wanted beautiful, but affordable jewelry. The demand for jewelry of this type coincided with the machine age and the industrial revolution. The revolution made the production of carefully executed replicas of admired heirloom pieces possible. As the class structure in America changed, so did measures of real wealth. Women in all social stations, even the working - class woman, could own a small piece of costume jewelry. The average town and country woman could acquire and wear a considerable amount of this mass - produced jewelry that was both affordable and stylish. Costume jewelry was also made popular by various designers in the mid-20th century. Some of the most remembered names in costume jewelry include both high - priced and low - priced brands: Crown Trifari, Dior, Chanel, Miriam Haskell, Monet, Napier, Corocraft, Coventry, and Kim Craftsmen. A significant factor in the popularization of costume jewelry was the Hollywood movie. The leading female stars of the 1940s and 1950s often wore and then endorsed the pieces produced by a range of designers. If you admired a necklace worn by Bette Davis in The Private Lives of Elizabeth and Essex, you could buy a copy from Joseff of Hollywood, who made the original. Stars such as Vivien Leigh, Elizabeth Taylor, and Jane Russell appeared in adverts for the pieces, and the availability of the collections in shops such as Woolworth made it possible for ordinary women to own and wear such jewelry. Coco Chanel greatly popularized the use of faux jewelry in her years as a fashion designer, bringing costume jewelry to life with gold and faux pearls. Kenneth Jay Lane has since the 1960s been known for creating unique pieces for Jackie Onassis, Elizabeth Taylor, Diana Vreeland, and Audrey Hepburn. He is probably best known for his three - strand faux pearl necklace worn by Barbara Bush to her husband 's inaugural ball. In many instances, high - end fashion jewelry has achieved a "collectible '' status, and increases in value over time. Today, there is a substantial secondary market for vintage fashion jewelry. The main collecting market is for ' signed pieces ', that is, pieces which have the maker 's mark, usually stamped on the reverse. Amongst the most sought after are Miriam Haskell, Coro, Butler and Wilson, Crown Trifari, and Sphinx. However, there is also demand for good quality ' unsigned ' pieces, especially if they are of an unusual design. Costume jewelry is considered a discrete category of fashion accessory, and displays many characteristics of a self - contained industry.
Costume jewelry manufacturers are located throughout the world, with a particular concentration in parts of China and India, where entire citywide and region - wide economies are dominated by the trade of these goods. There has been considerable controversy in the United States and elsewhere about the lack of regulations in the manufacture of such jewelry -- these range from human rights issues surrounding the treatment of labor, to the use of manufacturing processes in which small, but potentially harmful, amounts of toxic metals are added during production. In 2010, the Associated Press released a story reporting that toxic levels of the metal cadmium had been found in children 's jewelry. An AP investigation found that some pieces contained more than 80 percent cadmium... The wider issues surrounding imports, exports, trade laws, and globalization also apply to the costume jewelry trade. As part of the supply chain, wholesalers in the United States and other nations purchase costume jewelry from manufacturers and typically import or export it to wholesale distributors and suppliers who deal directly with retailers. Wholesale costume jewelry merchants would traditionally seek out new suppliers at trade shows. As the Internet has become increasingly important in global trade, the trade - show model has changed. Retailers can now select from a large number of wholesalers with sites on the World Wide Web. Some of these sites also market directly to consumers, who can purchase costume jewelry at greatly reduced prices. Some of these sites include fashion jewelry as a separate category, while some use this term in favor of costume jewelry. The trend of jewelry - making at home by hobbyists for personal enjoyment or for sale on sites like Etsy has resulted in the common practice of buying wholesale costume jewelry in bulk and using it for parts.
is mexicali in baja california norte o sur
Baja California - wikipedia (Time zone note, 2010 and later: Baja California is the only state to use the US DST schedule statewide, while the rest of Mexico (except for small portions of other northern states) starts DST 3 -- 4 weeks later and ends DST one week earlier.) Baja California (Spanish pronunciation: (ˈbaxa kaliˈfoɾnja)), (English: Lower California), officially the Free and Sovereign State of Baja California (Spanish: Estado Libre y Soberano de Baja California), is a state in Mexico. It is the northernmost and westernmost of the 32 Federal Entities of Mexico. Before becoming a state in 1952, the area was known as the North Territory of Baja California (El Territorio Norte de Baja California). It has an area of 70,113 km2 (27,071 sq mi), or 3.57 % of the land mass of Mexico, and comprises the northern half of the Baja California Peninsula, north of the 28th parallel, plus oceanic Guadalupe Island. The mainland portion of the state is bordered on the west by the Pacific Ocean, on the east by Sonora, the U.S. state of Arizona, and the Gulf of California (also known as the "Sea of Cortez ''), and on the south by Baja California Sur. Its northern limit is the U.S. state of California. The state has an estimated population of 3,315,766 (2015), much more than the sparsely populated Baja California Sur to the south and similar to that of San Diego County, California, to its north. Over 75 % of the population lives in the capital city, Mexicali, in Ensenada, or in Tijuana. Other important cities include San Felipe, Rosarito and Tecate. The population of the state is composed of Mestizos, mostly immigrants from other parts of Mexico, and, as with most northern Mexican states, a large population of Mexicans of Spanish ancestry, and also a large minority group of East Asian, Middle Eastern and indigenous descent. Additionally, there is a large immigrant population from the United States due to its proximity to San Diego and a lower cost of living than that city. There is also a significant population from Central America. Many immigrants moved to Baja California for a better quality of life and the number of higher paying jobs in comparison to the rest of Mexico and Latin America. Baja California is the twelfth largest state by area in Mexico. Its geography ranges from beaches to forests and deserts. The backbone of the state is the Sierra de Baja California, where the Picacho del Diablo, the highest point of the peninsula, is located. This mountain range effectively divides the weather patterns in the state. In the northwest, the weather is semi-dry and Mediterranean. In the narrow center, the weather changes to be more humid due to altitude. It is in this area where a few valleys can be found, such as the Valle de Guadalupe, the major wine producing area in Mexico. To the east of the mountain range, the Sonoran Desert dominates the landscape. In the south, the weather becomes drier and gives way to the Vizcaino Desert. The state is also home to numerous islands off both of its shores. In fact, the westernmost point in Mexico, the Guadalupe Island, is part of Baja California. The Coronado, Todos Santos and Cedros Islands are also on the Pacific Shore. On the Gulf of California, the biggest island is the Angel de la Guarda, separated from the peninsula by the deep and narrow Canal de Ballenas. The first people came to the peninsula at least 11,000 years ago. At that time two main native groups are thought to have been present on the peninsula. In the south were the Cochimí.
In the north were several groups belonging to the Yuman language family, including the Kiliwa, Paipai, Kumeyaay, Cocopa, and Quechan. These peoples were diverse in their adaptations to the region. The Cochimí of the peninsula 's Central Desert were generalized hunter - gatherers who moved frequently; however, the Cochimí on Cedros Island off the west coast had developed a strongly maritime economy. The Kiliwa, Paipai, and Kumeyaay in the better - watered northwest were also hunter - gatherers, but that region supported denser populations and a more sedentary lifestyle. The Cocopa and Quechan of northeastern Baja California practiced agriculture in the floodplain of the lower Colorado River. Another group of people were the Guachimis, who came from the north and created much of the Sierra de Guadalupe cave paintings. Not much is known about them except that they lived in the area between 100 BC and the coming of the Europeans and created World Heritage rock art. Europeans reached the present state of Baja California in 1539, when Francisco de Ulloa reconnoitered its east coast on the Gulf of California and explored the peninsula 's west coast at least as far north as Cedros Island. Hernando de Alarcón returned to the east coast and ascended the lower Colorado River in 1540, and Juan Rodríguez Cabrillo (or João Rodrigues Cabrilho (in Portuguese)) completed the reconnaissance of the west coast in 1542. Sebastián Vizcaíno again surveyed the west coast in 1602, but outside visitors during the following century were few. The Jesuits founded a permanent mission colony on the peninsula at Loreto in 1697. During the following decades, they gradually extended their sway throughout the present state of Baja California Sur. In 1751 -- 1753, the Croatian Jesuit mission - explorer Ferdinand Konščak made overland explorations northward into the state of Baja California. Jesuit missions were subsequently established among the Cochimí at Santa Gertrudis (1752), San Borja (1762), and Santa María (1767). After the expulsion of the Jesuits in 1768, the short - lived Franciscan administration (1768 -- 1773) resulted in one new mission at San Fernando Velicatá. More importantly, the 1769 expedition to settle Alta California under Gaspar de Portolà and Junípero Serra resulted in the first overland exploration of the northwestern portion of the state. The Dominicans took over management of the Baja California missions from the Franciscans in 1773. They established a chain of new missions among the northern Cochimí and western Yumans, first on the coast and subsequently inland, extending from El Rosario (1774) to Descanso (1817), just south of Tijuana. Baja California encompasses a territory, within The Californias region of North America, which exhibits diverse geography for a relatively small area. The Peninsular ranges of the California cordillera run down the geographic center of the state. The most notable ranges of these mountains are the Sierra de Juarez and the Sierra de San Pedro Martir. These ranges are the location of forests reminiscent of Southern California 's San Gabriel Mountains. Picacho del Diablo is the highest peak in the whole peninsula. Valleys between the mountain ranges are located within a climate zone that are suitable for agriculture. Such valleys included the Valle de Guadalupe and the Valle de Ojos Negros, areas that produce citrus fruits and grapes. 
The mineral - rich mountain range extends southwards to the Gulf of California, where the western slope becomes wider, forming the Llanos del Berrendo on the border with Baja California Sur. The mountain ranges located in the center and southern part of the state include the Sierra de La Asamblea, Sierra de Calamajué, Sierra de San Luis and the Sierra de San Borja. Temperate winds from the Pacific Ocean and the cold California Current make the climate along the northwestern coast pleasant year - round. As a result of the state 's location on the California Current, rains from the north barely reach the peninsula, thus leaving southern areas drier. South of El Rosario River the state changes from a Mediterranean landscape to a desert one. This desert exhibits diversity in succulent flora species that flourish in part due to the coastal fog. To the east, the Sonoran Desert enters the state from both California and Sonora. Some of the highest temperatures in Mexico are recorded in or near the Mexicali Valley. However, with irrigation from the Colorado River, this area has become a true agricultural center. The Cerro Prieto geothermal province is near Mexicali as well (this area is geologically part of a large pull - apart basin); it produces about 80 % of the electricity consumed in the state, with a surplus exported to California. Laguna Salada, a saline lake below sea level lying between the rugged Sierra de Juarez and the Sierra de los Cucapah, is also in the vicinity of Mexicali. The state government has recently been considering plans to revive Laguna Salada. The highest mountain in the Sierra de los Cucapah is the Cerro del Centinela or Mount Signal. The Cucapah are the primary indigenous people of that area, with territory extending up into the Yuma, Arizona area. There are numerous islands on the Pacific shore. Guadalupe Island is located in the extreme west of the state 's boundaries and is the site of large colonies of sea lions. Cedros Island lies in the southwest of the state 's maritime region. The Todos Santos Islands and Coronado Islands are located off the coast of Ensenada and Tijuana respectively. All of the islands in the Gulf of California, on the Baja California side, belong to the municipality of Mexicali. Baja California obtains much of its water from the Colorado River. Historically, the river drained into the Colorado River Delta, which flowed into the Gulf of California, but due to large demands for water in the American Southwest, less water now reaches the Gulf. The Tijuana metropolitan area also relies on the Tijuana River as a source of water. Much of rural Baja California depends predominantly on wells, a few dams and even oases. Tijuana also purchases water from San Diego County 's Otay Water District. Potable water is the largest natural resource issue of the state. Baja California 's climate varies from Mediterranean to arid. The Mediterranean climate is observed in the northwestern corner of the state, where the summers are dry and mild and the winters cool and rainy. This climate is observed in areas from Tijuana to San Quintin and nearby interior valleys. The cold oceanic California Current often creates a low - level marine fog near the coast. The fog occurs along any part of the Pacific Coast of the state. The change of altitude towards the Sierra de Baja California creates an alpine climate in this region. Summers are cool while winters can be cold with below - freezing temperatures at night.
It is common to see snow in the Sierra de Juarez and Sierra de San Pedro Martir (and in the valleys in between) from December to April. Due to orographic effects, precipitation is much higher in the mountains of northern Baja California than on the western coastal plain or eastern desert plain. Pine, cedar and fir forests are found in the mountains. The east side of the mountains produces a rain shadow, creating an extremely arid environment. The Sonoran Desert region of Baja California experiences hot summers and nearly frostless mild winters. The Mexicali Valley (which is below sea level) experiences the highest temperatures in Mexico, which frequently surpass 47 ° C (116.6 ° F) in mid-summer and have exceeded 50 ° C (122 ° F) on some occasions. Further south along the Pacific coast, the Mediterranean climate transitions into a desert climate, but it is milder and not as hot as along the gulf coast. Transition climates, from Mediterranean to desert, can be found from San Quintin to El Rosario. Further inland and along the Gulf of California the vegetation is scarce and temperatures are very high during the summer months. The islands in the Gulf of California also belong to the desert climate. Some oases can be found in the desert, in which a few towns are located -- for instance, Catavina, San Borja and Santa Gertrudis. Common trees are the Jeffrey pine, sugar pine and pinyon pine. Understory species include manzanita. Fauna include a variety of reptiles including the western fence lizard, which is at the southern extent of its range. The name of the fish genus Bajacalifornia is derived from the Baja California Peninsula. The main terrestrial wildlife refuges on the peninsula of Baja California, Constitution 1857 National Park and Sierra de San Pedro Mártir National Park, contain several coniferous species; the most abundant are Pinus jeffreyi, Pinus ponderosa, Pinus cembroides, Pinus quadrifolia, Pinus monophylla, Juniperus, Arctostaphylos drupacea, Artemisia ludoviciana, and Adenostoma esparcifolium. The flora share many species with the Laguna Mountains and San Jacinto Mountains in southwest California. The lower elevations of the Sierra Juárez are characterized by chaparral and desert shrub. The fauna in the parks include a large number of mammals, primarily mule deer, bighorn sheep, cougar, bobcat, ringtail cat, coyote, rabbit, squirrel and more than 30 species of bats. The parks are also home to many avian species such as the bald eagle, golden eagle, falcon, woodpecker, black vulture, crow, several species of Sittidae, and duck. At 3: 40: 41 pm PDT on Easter Sunday, 4 April 2010, a 7.2 M_w (moment magnitude) northwest - trending strike - slip earthquake hit the Mexicali Valley, with its epicenter 26 km southwest of the city of Guadalupe Victoria, Baja California, Mexico. The main shock was felt as far as the Los Angeles, Las Vegas, Phoenix and Tucson metropolitan areas, and in Yuma. At least a half - dozen aftershocks with magnitudes between 5.0 and 5.4 were reported, including a 5.1 - magnitude aftershock at 4: 14 am that was centered near El Centro. As of 6: 31 am PDT, 5 April 2010, two people were confirmed dead. Baja California is subdivided into five municipios (municipalities). These are Ensenada, Mexicali, Tecate, Tijuana and Rosarito. The majority of the population of Baja California is Mestizo; however, the state has one of the larger percentages of White (European) Mexicans (about 40 %). There are small indigenous communities as well.
Historically, the state has had sizable East Asian immigration. Mexicali has a large Chinese community, as well as many Filipinos from the Philippines who arrived in the state during the eras of Spanish and American rule (1898 -- 1946), through much of the 19th and 20th centuries. Tijuana and Ensenada have been major ports of entry for East Asians entering the U.S. ever since the first Asian - Americans were present in California. A significant number of Middle Eastern immigrants, such as Lebanese, Syrians and Armenians, also settled near the U.S. border, and small waves of settlers in the early 20th century, usually members of the Molokan sect of the Russian Orthodox church fleeing the Russian Revolution of 1917 when the Soviet Union took power, established a few villages along the Pacific coast south of Ensenada. Since 1960, large numbers of migrants from southern Mexican states have arrived to work in agriculture (especially the Mexicali Valley and nearby Imperial Valley, California, US) and manufacturing. The cities of Ensenada, Tijuana and Mexicali grew as a result of migrants, primarily those who sought US citizenship; temporary residents awaiting their entry into the United States are called "flotillas '', a term derived from the Spanish word "flota '', meaning "fleet ''. There is also a sizable immigrant community from Central and South America, and from the United States and Canada. An estimated 200,000 + American expatriates live in the state, especially in coastal resort towns such as Ensenada, known for affordable homes purchased by retirees who continue to hold US citizenship. San Felipe, Rosarito and Tijuana also have large American populations (the second largest in Mexico after Mexico City), largely because of their cheaper housing and proximity to San Diego. Some 60,000 Oaxacans live in Baja California, the vast majority being indigenous. Some 40 % of them lack proper birth certificates. According to a Conacyt investigator, a bit under a million people were classified as "poor '' in the state, up from 2008 when there were roughly 810,000. Exactly who these people are, whether locals, interstate or international migrants, was not explained. Baja California offers one of the best educational programs in the country, with high rankings in schooling and achievement. The state government provides education and qualification courses to raise workforce standards, such as school - enterprise linkage programs, which help develop the labor force according to the needs of industry. 91.60 % of the population from six to fourteen years of age attend elementary school. 61.95 % of the population over fifteen years of age attend or have already graduated from high school. Public schooling is available at all levels, from kindergarten to university. The state has 32 universities offering 103 professional degrees. These universities have 19 research and development centers for basic and applied investigation in advanced projects of biotechnology, physics, oceanography, computer science, digital geothermal technology, astronomy, aerospace, electrical engineering and clean energy, among others. At this educational level, supply is steadily growing. Baja California has developed a need to be self - sufficient in matters of technological and scientific innovation and to be less dependent on foreign countries. Current businesses demand new production processes as well as technology for the incubation of companies. The number of graduate degrees offered, including Ph. D. programs, is 121.
The state has 53 graduate schools. As of 2005, Baja California 's economy represents 3.3 % of Mexico 's gross domestic product, or 21,996 million USD. Baja California 's economy has a strong focus on tariff - free, export - oriented manufacturing (maquiladora). As of 2005, 284,255 people are employed in the manufacturing sector. There are more than 900 companies operating under the federal Prosec program in Baja California. The Foreign Investment Law of 1973 allows foreigners to purchase land within the borders and coasts of Mexico by way of a trust, handled through a Mexican bank (Fideicomiso). This trust assures the buyer all the rights and privileges of ownership, and it can be sold, inherited, leased, or transferred at any time. Since 1994, the Foreign Investment Law stipulates that the Fideicomiso must have a 50 - year term, with the option to petition for a 50 - year renewal at any time. Any Mexican citizen buying a bank trust property has the option to either remain within the Trust or opt out of it and request the title in "Escritura ''. Mexico 's early history involved foreign invasions and the loss of vast amounts of land; in fear of history being repeated, the Mexican constitution established the concept of the "Restricted Zone ''. In 1973, in order to bring in more foreign tourist investment, the Bank Trust of Fideicomiso was created, thus allowing non-Mexicans to own land without the need for any constitutional amendment. Since the law went into effect, it has undergone many modifications in order to make purchasing land in Mexico a safer investment.
who owns the rides at the ohio state fair
Ohio State Fair - wikipedia The Ohio State Fair is one of the largest state fairs in the United States. The event is held in Columbus, Ohio, from late July through early August. As estimated in a 2011 economic impact study conducted by Saperstein & Associates, the State Fair contributes approximately $68.5 million to the state 's economy. In 2015, attendance was 982,305, the Fair 's highest 12 - day attendance on record. From the very first three - day Fair in 1850 in Cincinnati to the 12 - day exposition of today (from 1981 to 2003, the Fair lasted 17 days), the Ohio State Fair has celebrated Ohio 's products, its people, and their accomplishments for more than 160 years. In 1846, the Ohio Legislature created the 53 - member Ohio State Board of Agriculture. One of the Board 's first acts was to establish a District Fair. The resulting 1847 District Fair at Wilmington, Ohio and the 1848 District Fair at Xenia, Ohio were both successful, leading to the planning of a State Fair. The first Ohio State Fair was planned for September 1849, but an outbreak of Asiatic cholera forced cancellation of those plans. The following year, autumn dates were chosen to lessen the risk of cholera. Even so, the superintendent of grounds, Darius Lapham, died of the disease just a few weeks before the opening date. Camp Washington, Cincinnati (two miles north of downtown Cincinnati, Ohio) was the site for the first Ohio State Fair, October 2 -- 4, 1850. The site was described as 8 -- 10 acres with grassy slopes, shade trees, and numerous tents. The grounds were enclosed by a 10 - foot (3.0 m) - high board fence. Cattle were tethered to a railing along the carriage road. The railroads offered strong support to the early State Fairs. Special rates were offered whereby exhibits were transported without charge, and the exhibitor rode for half fare. Several Central Ohioans contributed to the support of the first Fair, including Alfred Kelley, owner of the Columbus and Xenia Railroad. Cash premiums at the first Fair did not exceed $20, with the exception of an award of $50 given to essayists on the topic "Improving the Soil. '' During the early State Fairs, winners received medals, not ribbons, as awards. In 1850, the silver medal was valued at $3. The public was admitted only on the second and third day of the first Fair. Day one was devoted to setting up and judging. Admission was twenty cents, but exhibitors could buy a $1 badge for admission of their families. A visitor could also buy a $1 badge, which admitted one gentleman and two ladies. The two - day attendance was estimated at 25,000 to 30,000 people. Transportation around Ohio was difficult; therefore, the majority of exhibitors came from close proximity to the Fair. Officials reasoned that moving the Fair ought to increase interest and attendance. Over the next 22 years, the Fair was held in a number of different cities around the state. From 1874 until 1885, the site of Columbus ' Franklin Park served as home to the Ohio State Fair. Finally, in 1886, the Fair moved to its current location, now called the Ohio Expo Center and State Fairgrounds. The main entrance to the site was at the southwest corner of the grounds along Woodard Avenue. It is now along 11th Avenue. On July 26, 2017, the opening day of the 2017 Fair, a swinging ride known as Fire Ball broke apart mid-swing, flinging passengers out of the ride. One rider, 18 - year - old Tyler Jarrell, was flung 50 feet (15 m) and killed on impact.
Seven other riders were seriously injured, two of whom were reported to be in critical condition. The ride had been inspected earlier in the day and cleared for operation. The Ohio Department of Agriculture and Ohio State Highway Patrol have begun an investigation into the incident, and the Fair announced that it would not operate any rides on July 27 until they are all re-inspected by state authorities. Governor John Kasich considered it to be the "worst tragedy '' in the Fair 's history. In response to the incident, the ride 's manufacturer, KMG, as well as Chance Morgan -- which produces a similar ride, requested that other rides of the same type be shut down pending an investigation into the failure. Preliminary findings by KMG found that the failure was caused by "excessive corrosion on the interior of the gondola support beam '' over the life of the 18 year - old unit. 1853 -- Entertainment crept into the Fair programming with the first pony rides for children and monkeys dressed in hats that danced to minstrel tunes. 1860 -- Fair premiums rose to $200 (up from $20 in 1850). 1884 -- In July, prior to the Fair 's opening, a racing mare kicked over a lantern resulting in 100 stalls being burned. Loss to buildings was set at $1,100. The dead mare was valued at $5,000. 1886 -- The current Ohio State Fair site was dedicated on Tuesday, August 31 during the 37th Ohio State Fair. Governor Foraker accepted the grounds in front of a crowd of 6,000. 1890s -- At least 16 railroad companies served Columbus and the Fair. 1894 -- A college football tournament was held in this year, with Denison University, Miami University, Wittenberg University, Buchtel and Ohio State University participating. 1896 -- The Ohio State Fair became the first fair with an electric lighting system. This made it possible to offer night - time racing. Also this year, horseless vehicles made their first appearance at the Ohio State Fair. 1903 -- The first Butter Cow and Calf were featured at the Fair this year. They were made by A.T. Shelton & Company, distributors of Sunbury Creamery Butter. Additional sculptures were added in the 1960s, the subjects of which change every year. 1916 -- On the eve of World War I, the largest American flag, measuring 136 ft × 65 ft (41 m × 20 m), was displayed at the 11th Avenue entrance. 1922 -- Just days before the scheduled opening of the Fair, fire raced across the grounds. Six buildings were destroyed including the central group, the Horticulture Building and the East and West Buildings. Loss was estimated at more than $800,000. 1924 -- Earliest records of the Ohio State Fair Queen contest date back to this year. 1925 -- The Diamond Jubilee Spectacle this year saluted the 75th anniversary of the Fair. More than 2,000 participants enacted the evolution of the Fair in three, 25 - year periods on three stages and with fireworks. 1925 -- The All - Ohio Boys Band was first mentioned in historical accounts this year. It is now called the All - Ohio State Fair Band and includes both boys and girls. 1928 -- The renowned John Philip Sousa Band performed twice daily at the 1928 Fair. 1929 -- The Junior Fair was formed this year. Today, Ohio is proud to host the nation 's largest Junior Fair with more than 17,000 youth participating. In the same year, the Ohio State Fair Junior Fair Board was formed. 
The Junior Fair Board is made up of outstanding individuals from various youth organizations including 4 - H, Future Farmers of America, Girl Scouts of the USA, Boy Scouts of America, Farm Bureau Youth, and others. 1941 -- A sign of the times found 150 female Fair ticket takers were hired in place of men for the first time. 1942 -- 45 -- The Board of Agriculture canceled the Ohio State Fair and allowed the War Department to use the grounds and buildings for handling airplane parts and equipment. The Army Air Corps rented the facility for $1 per year. A similar fate befell fairs in Indiana, Illinois and Pennsylvania. When the Army vacated the fairgrounds, they left the grounds and buildings in a shambles. 1957 -- The first female livestock judge appeared this year. Mrs. Maurice Neville judged the Yorkshire Swine Show. 1963 -- The All - Ohio State Fair Youth Choir was established. It was directed by Glenville Thomas of Zanesville. 1966 -- At the 11th Avenue gate, the new OHIO entrance was built at a cost of $40,000. 1968 -- The first Sale of Champions livestock auction was held with sales amounting to $22,674. The Bee Gees, Bob Hope, James Brown, Johnny Carson, and Sly and the Family Stone performed. 1969 -- The first portion of the sky ride was built this year. It was extended to 11th Avenue in 1984. Bob Hope and Johnny Cash performed. 1972 -- Fire struck the Ohio Expo Center in October during the American Dairy Show. Three connected barns were burned, killing three head of cattle and destroying virtually all the exhibitors ' belongings. These buildings have since been replaced by the Gilligan Complex (1972 and 1978) and the O'Neill Swine Arena (1973). Bob Hope, Kenny Rogers, Mac Davis, Roberta Flack, The Osmonds, and Ike & Tina Turner performed. 1976 -- In celebration of the United States Bicentennial, a time capsule was buried in the gardens near the 11th Avenue gate. It holds treasures of the times from the Ohio State Fair, Ohio sports teams, coins and stamps, a T - shirt, Levi 's jeans and tennis shoes. It will be opened in 2026. Bob Hope, Mac Davis, the Osmonds, Pat Boone, Tanya Tucker, Johnny Cash perform. 1981 -- The fair was stretched to 17 days, running from Friday, August 14 -- Sunday, August 30. Entertainment was held at the outdoor grandstand on the infield of the race track. Two shows were performed usually at 3: 30 and 7: 30 pm. Entertainment was free and seating was on a first come, first served basis. Wooden folding chairs were available for seating on the race track. There was also a VIP section in front of the stage on the racetrack. Tickets were required to get into this section. 1983 -- Air Supply performed. Wheel of Fortune taped several episodes at the Ohio State Fairgrounds during the running of the fair. 1989 -- New Kids on the Block performed. Their performance was marred by several people being injured in a crush. 1990 -- The condemned Ohio State Fair Grandstand was demolished. The Celeste Center replaced it as the site for the Fair 's big - name entertainment, as well as a venue for many Expo events each year. 1997 -- Wheel of Fortune premiered their 15th season with two weeks of taped shows. 2000 -- The Ohio State Fair celebrated its 150th anniversary with a new exhibit, "History in the Making. '' Alabama performed. 2002 -- The 11th Avenue OHIO gate, built in 1966, was torn down and replaced with a redesigned, contemporary OHIO gate to take the facility into the future. Willie Nelson, Vince Gill, Travis Tritt, Rascal Flatts, Michael W. 
Smith, and Lifehouse performed. 2003 -- The Ohio State Fair celebrated its 150th Fair. The first Fair was in 1850 and there had been one every year since, excluding 1942 - 1945. Celebration activities were held all over the fair. Bow Wow, Alan Jackson, Uncle Kracker, Terri Clark, The Oak Ridge Boys, and Diamond Rio performed. Coordinates: 40 ° 00 ′ 00 '' N, 82 ° 59 ′ 35 '' W
who is the best player in cricket 2018
ICC player rankings - wikipedia The International Cricket Council Player Rankings is a widely followed system of rankings for international cricketers based on their recent performances. The current sponsor is MRF Tyres, which signed a 4 - year deal with the ICC that will last until 2020. The ratings were developed at the suggestion of Ted Dexter in 1987. The intention was to produce a better indication of players ' current standing in the sport than is provided by comparing their averages. Career averages are based on a player 's entire career and do not make any allowance for match conditions or the strength of the opposition, whereas the ratings are weighted towards recent form and account for match conditions and the quality of the opponent using statistical algorithms. Initially, the rankings were for Test cricket only, but separate One Day International rankings were introduced in 1998. Both sets of rankings have now been calculated back to the start of those forms of the game. The rankings include the top 10 Test, ODI and T20I batsmen, bowlers and all - rounders based on the rating of each player.
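The exact ICC rating formulas are not published in full, but the idea of weighting recent performances more heavily than older ones, and adjusting for the quality of the opposition, can be illustrated with a small sketch. The decay factor, opponent-quality multiplier and sample scores below are illustrative assumptions, not the ICC's actual algorithm.

```python
# Illustrative sketch only: an exponentially weighted batting rating.
# The decay factor, opponent-quality adjustment and sample data are
# assumptions for demonstration; the real ICC algorithm is more complex
# and is not fully public.

def weighted_rating(innings, decay=0.9):
    """innings: list of (runs, opponent_quality) from oldest to newest.
    opponent_quality is a multiplier > 1 for strong bowling attacks."""
    weighted_sum = 0.0
    weight_total = 0.0
    weight = 1.0
    # Walk from the most recent innings backwards, shrinking the weight
    # so that older performances count progressively less.
    for runs, opponent_quality in reversed(innings):
        weighted_sum += weight * runs * opponent_quality
        weight_total += weight
        weight *= decay
    return weighted_sum / weight_total if weight_total else 0.0

if __name__ == "__main__":
    sample = [(12, 1.0), (85, 1.2), (40, 0.9), (110, 1.1)]  # hypothetical scores
    print(round(weighted_rating(sample), 1))
```

A real implementation would also normalize the result onto the ICC's 0 -- 1000 scale and handle factors such as not-outs and match results, but the recency weighting shown here is the core idea the article describes.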
luckiest day of the week to get married
Auspicious wedding dates - wikipedia Auspicious wedding dates refer to auspicious, or lucky, times to get married, and is a common superstition among many cultures. "A January bride will be a prudent housekeeper, and very good tempered. A February bride will be an affectionate wife, And a tender mother. A March bride will be a frivolous chatterbox, Somewhat given to quarreling. An April bride will be inconsistent, or forceful, But well - meaning. A May bride will be handsome, agreeable, And practical. A June bride will be impetuous, And generous. A July bride will be handsome, But a trifle quick - tempered. An August bride will be agreeable, And practical as well. A September bride will be discreet, affable, And much liked. An October bride will be pretty, coquettish, Loving but jealous. A November bride will be liberal and kind, But sometimes cold. A December bride will be fond of novelty, Entertaining but extravagant. '' "Married in January 's hoar and rime, Widowed you 'll be before your prime. Married in February 's sleepy weather, Life you 'll tread in time together. Married when March winds shrill and roar, Your home will lie on a distant shore. Married ' neath April 's changeful skies, A chequered path before you lies. Married when bees o'er May - blossoms flit, Strangers around your board will sit. Married in month of roses - June - Life will be one long honeymoon. Married in July with flowers ablaze, Bitter - sweet memories in after days. Married in August 's heat and drowse, Lover and friend in your chosen spouse. Married in September 's golden glow, Smooth and serene your life will go. Married when leaves in October thin, Toil and hardships for you begin. Married in veils of November mist, Fortune your wedding - ring has kissed. Married in days of December 's cheer, Love 's star shines brighter from year to year. '' "Married when the year is new, he 'll be loving, kind and true. When February birds do mate, You wed nor dread your fate. If you wed when March winds blow, joy and sorrow both you 'll know. Marry in April when you can, Joy for Maiden and for Man. Marry in the month of May, and you 'll surely rue the day. Marry when June roses grow, over land and sea you 'll go. Those who in July do wed, must labour for their daily bred. Whoever wed in August be, many a change is sure to see Marry in September 's shrine, your living will be rich and fine. If in October you do marry, love will come but riches tarry. If you wed in bleak November, only joys will come, remember. When December snows fall fast, marry and true love will last. '' "Monday for health Tuesday for wealth Wednesday best of all Thursday for losses Friday for crosses Saturday for no luck at all. '' Although there are a few periods, such as the month of May, which they agree on, a number of cultures, including Hindu, Chinese, Catholic, Scottish, Irish, Old English, Ancient Roman and Moroccan culture, favor and avoid particular months and dates for weddings. A number of cultures, including the Chinese and Hindu cultures, favor particular auspicious dates for weddings. Auspicious days may also be chosen for the dates of betrothals. Dates for a particular couple 's wedding may often be determined with the help of a traditional fortune - teller. Lucky days of the week to get married, according to an old popular poem are: "Monday for wealth / Tuesday for health / Wednesday the best day of all; '' In many Churches the end of April was a busy time for weddings as couples wanted to avoid being married in May. 
The month of May is considered the most inauspicious month in which to get married. This is because in Pagan times, the start of summer was when the Festival of Beltane was celebrated with outdoor orgies. This was therefore thought to be an unsuitable time to start married life. In Roman times the Feast of the Dead occurred in May. The advice was taken more seriously in Victorian times than it is today. According to an old poem, the unlucky days of the week to have a wedding are: "Thursday for crosses, / Friday for losses, / Saturday, no luck at all -- '' In Hindu culture, Akshaya Tritiya is viewed as one of the foremost auspicious wedding dates, of which there are many. In Hindu Vedic astrology, a couple should first determine each other 's zodiac signs. In addition to aiding them in the search for an auspicious wedding date, it will help them to further understand each other, as each sign has its own meaning and character. An aiding astrologer will first determine the groom 's astrological position in relation to the moon, then he will do the same for the bride in relation to the sun. In light of that data, he is able to give the couple lucky times and dates for their wedding. Inauspicious dates are determined in light of certain circumstances, such as getting married in a court. Birthdays can be times of trial, so they are recommended to be avoided. In Chinese culture, auspicious wedding dates are typically found by numerological analysis of the date in the Chinese calendar. Some modern sources also apply numerological analysis to the date as given by the Gregorian calendar. Another way to determine an auspicious wedding date in Chinese culture is to start with the espoused 's Zodiac animal sign, distinguished by their respective years of birth. To start, the couple must not get married in a year of the animal with which theirs conflicts (see the sketch at the end of this article). The couple may not be of the same Zodiac animal but will likely be similar enough to distinguish the years in which they can marry. Some couples eliminate months that will clash with their Zodiac animals. The next step is to set a time period in which the couple might like to marry and eliminate all inauspicious days within this period. These days include Year Breaker days (the branch of the year clashes with the branch of the day), Month Breaker days (the branch of the month clashes with the branch of the day), and Personal Clash days (the branch of the couple 's years of birth clashes with the branch of the day). There are other inauspicious days, such as the "Four Extinct and Four Departure Days '' and the Impoverish or No Wealth days; the couple is allowed to decide for themselves if they are to eliminate these days as well. From that point, the couple may eliminate further dates with the 12 Day Officers method. Advanced couples even eliminate dates by the stars (or energy) that influence the day. Although the Catholic Church does not have particular auspicious dates, because of numerous feast days and penitential periods, restrictions on marriage during certain spans of time during the year used to be in place. April was favored because of the prohibition during Lent and the promise of a holiday brought by Easter. It was additionally favored because the following month, May, was avoided because of its continual dedication to the Virgin Mary.
A 1678 almanac summarizes the prohibited dates for marriage as such: "Marriage comes in on the 13th day of January, and at Septuagesima Sunday it is out again until Low Sunday, at which time it comes in again, and goes not out until Rogation Sunday; from whence it is unforbidden till Advent Sunday, but then it goes out, and comes not in again till the 13th January next following. '' Like nearly all of the other cultures, Scottish culture considers May an unlucky month. May 3rd is a particularly unlucky date for a wedding. January 1, on the other hand, is the luckiest day of the year for any novel experience because of the introduction of the New Year. Unlike Scottish culture, the Irish believe the year 's final day is particularly auspicious for weddings, although the Feast of the Holy Innocents (December 28) is unlucky for any occasion. In addition to St. Joseph 's Day and the 17th of December being considered unlucky, "English common law forbids marriages between Rogation Sunday and Trinity Sunday. '' Marriages that were scheduled between those two dates required dispensations. Among the ancient Romans, the month of June was particularly auspicious, due to its affiliation with the goddess Juno; the month 's name was supposedly derived from, and sacred to, the queen of the gods. A full or new moon was especially lucky during this month. As many cultures agree, the Romans believed May to be an unfavorable and even illegal month to marry, because it was during this month that the festival of Bona Dea, the goddess of chastity, took place. May was also the time of the Feasts of the Dead, also named Lemuralia, when specters called lemures, or larvae, were believed to haunt the living, particularly the young. Their affinity for tormenting weddings led to marriage being forbidden during that time. Moroccan culture does not have any specific dates that are lucky. Autumn is favored as the lucky season to get married; it allows the bride to participate in the augmentation of the abundant crops by blessing them with her baraka, or wedding blessing. The lucky days of the week in Moroccan culture are Thursday and Sunday only; all other days are unlucky. The consummation of the marriage is best done on Thursday evening. However, there are seven days of hesoum - February 24 to March 4 - during which there is a ban on marriage. Besides those dates, marriages may take place any time during the year.
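To make the zodiac-clash elimination step from the Chinese-culture section above concrete, here is a minimal sketch. It assumes the common simplification that the zodiac animal can be read off the Gregorian year (ignoring the lunar new year boundary), and it uses the standard "six clashes" pairing of animals six positions apart; the birth years and candidate years are made up for illustration.

```python
# Minimal sketch of the first elimination step described above:
# avoid wedding years whose zodiac animal clashes with either partner's sign.
# Simplification: the animal is taken from the Gregorian year, ignoring the
# fact that the zodiac year actually begins at the lunar new year.

ANIMALS = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
           "Horse", "Goat", "Monkey", "Rooster", "Dog", "Pig"]

def animal_of(year: int) -> str:
    # 1984 was a Rat year, so (year - 4) % 12 maps onto the list above.
    return ANIMALS[(year - 4) % 12]

def clashing_animal(animal: str) -> str:
    # In the traditional "six clashes", each sign conflicts with the sign
    # six positions away (e.g. Rat vs Horse).
    return ANIMALS[(ANIMALS.index(animal) + 6) % 12]

def acceptable_years(birth_year_a: int, birth_year_b: int, candidates):
    avoid = {clashing_animal(animal_of(birth_year_a)),
             clashing_animal(animal_of(birth_year_b))}
    return [y for y in candidates if animal_of(y) not in avoid]

if __name__ == "__main__":
    # Hypothetical couple born in 1990 (Horse) and 1992 (Monkey):
    # Rat years (e.g. 2032) and Tiger years (e.g. 2034) are eliminated.
    print(acceptable_years(1990, 1992, range(2030, 2037)))
```

The fuller procedure in the article (Year Breaker and Month Breaker days, the 12 Day Officers) requires Chinese calendar stem-branch data and is not attempted here.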
which of these is true of the nile river
Nile - wikipedia The Nile (Arabic: النيل ‎) is a major north - flowing river in northeastern Africa, and is commonly regarded as the longest river in the world, though some sources cite the Amazon River as the longest. The Nile, which is 6,853 km (4,258 miles) long, is an "international '' river as its drainage basin covers eleven countries, namely, Tanzania, Uganda, Rwanda, Burundi, the Democratic Republic of the Congo, Kenya, Ethiopia, Eritrea, South Sudan, Republic of the Sudan and Egypt. In particular, the Nile is the primary water source of Egypt and Sudan. The river Nile has two major tributaries, the White Nile and Blue Nile. The White Nile is considered to be the headwaters and primary stream of the Nile itself. The Blue Nile, however, is the source of most of the water and silt. The White Nile is longer and rises in the Great Lakes region of central Africa, with the most distant source still undetermined but located in either Rwanda or Burundi. It flows north through Tanzania, Lake Victoria, Uganda and South Sudan. The Blue Nile begins at Lake Tana in Ethiopia and flows into Sudan from the southeast. The two rivers meet just north of the Sudanese capital of Khartoum. The northern section of the river flows north almost entirely through the Sudanese desert to Egypt, then ends in a large delta and flows into the Mediterranean Sea. Egyptian civilization and Sudanese kingdoms have depended on the river since ancient times. Most of the population and cities of Egypt lie along those parts of the Nile valley north of Aswan, and nearly all the cultural and historical sites of Ancient Egypt are found along riverbanks. In the ancient Egyptian language, the Nile is called Ḥ'pī or Iteru (Hapy), meaning "river ''. In Coptic, the word ⲫⲓⲁⲣⲱ, pronounced piaro (Sahidic) or phiaro (Bohairic), means "the river '' (lit. p (h). iar - o "the. canal - great ''), and comes from the same ancient name. In Egyptian Arabic, the Nile is called en - Nīl while in Standard Arabic it is called an - Nīl. The river is also called in Coptic: ⲫⲓⲁⲣⲱ, P (h) iaro; in Ancient Egyptian: Ḥ'pī and Jtrw; and in Biblical Hebrew: הַיְאוֹר ‬, Ha - Ye'or or הַשִׁיחוֹר ‬, Ha - Shiḥor. The English name Nile and the Arabic names en - Nîl and an - Nîl both derive from the Latin Nilus and the Ancient Greek Νεῖλος. Beyond that, however, the etymology is disputed. Hesiod at his Theogony refers that Nilus (Νεῖλος) was one of the Potamoi (river gods), son of Oceanus and Tethys. Another derivation of Nile might be related to the term Nil (Sanskrit: नील, translit. nila; Egyptian Arabic: نيلة ‎), which refers to Indigofera tinctoria, one of the original sources of indigo dye; or Nymphaea caerulea, known as "The Sacred Blue Lily of the Nile '', which was found scattered over Tutankhamen 's corpse when it was located in 1922. Another possible etymology derives it from a Semitic Nahal, meaning "river ''. The standard English names "White Nile '' and "Blue Nile '', to refer to the river 's source, derive from Arabic names formerly applied only to the Sudanese stretches which meet at Khartoum. With a total length of 6,853 km (4,258 mi) between the region of Lake Victoria and the Mediterranean Sea, the Nile is the longest river on the African continent. The drainage basin of the Nile covers 3,254,555 square kilometers (1,256,591 sq mi), about 10 % of the area of Africa. 
The Nile basin is complex, and because of this, the discharge at any given point along the mainstem depends on many factors including weather, diversions, evaporation and evapotranspiration, and groundwater flow. Above Khartoum, the Nile is also known as the White Nile, a term also used in a limited sense to describe the section between Lake No and Khartoum. At Khartoum the river is joined by the Blue Nile. The White Nile starts in equatorial East Africa, and the Blue Nile begins in Ethiopia. Both branches are on the western flanks of the East African Rift. The source of the Nile is sometimes considered to be Lake Victoria, but the lake has feeder rivers of considerable size. The Kagera River, which flows into Lake Victoria near the Tanzanian town of Bukoba, is the longest feeder, although sources do not agree on which is the longest tributary of the Kagera and hence the most distant source of the Nile itself. It is either the Ruvyironza, which emerges in Bururi Province, Burundi, or the Nyabarongo, which flows from Nyungwe Forest in Rwanda. The two feeder rivers meet near Rusumo Falls on the Rwanda - Tanzania border. In 2010, an exploration party went to a place described as the source of the Rukarara tributary, and by hacking a path up steep jungle - choked mountain slopes in the Nyungwe forest found (in the dry season) an appreciable incoming surface flow for many kilometres upstream, establishing a new source and giving the Nile a length of 6,758 km (4,199 mi). Gish Abay is reportedly the place where the "holy water '' of the first drops of the Blue Nile develop. The Nile leaves Lake Nyanza (Victoria) at Ripon Falls near Jinja, Uganda, as the Victoria Nile. It flows north for some 130 kilometers (81 mi), to Lake Kyoga. The last part of the approximately 200 kilometers (120 mi) river section starts from the western shores of the lake and flows at first to the west until just south of Masindi Port, where the river turns north, then makes a great half circle to the east and north until Karuma Falls. For the remaining part it flows westerly through the Murchison Falls until it reaches the very northern shores of Lake Albert, where it forms a significant river delta. The lake itself is on the border of DR Congo, but the Nile is not a border river at this point. After leaving Lake Albert, the river continues north through Uganda and is known as the Albert Nile. The Nile flows into South Sudan just south of Nimule, where it is known as the Bahr al Jabal ("Mountain River ''). Just south of the town it is joined by the Achwa River. The Bahr al Ghazal, itself 716 kilometers (445 mi) long, joins the Bahr al Jabal at a small lagoon called Lake No, after which the Nile becomes known as the Bahr al Abyad, or the White Nile, from the whitish clay suspended in its waters. When the Nile floods it leaves a rich silty deposit which fertilizes the soil. The Nile has not flooded in Egypt since the completion of the Aswan High Dam in 1970. An anabranch river, the Bahr el Zeraf, flows out of the Nile 's Bahr al Jabal section and rejoins the White Nile. The flow rate of the Bahr al Jabal at Mongalla, South Sudan is almost constant throughout the year and averages 1,048 m³ / s (37,000 cu ft / s). After Mongalla, the Bahr Al Jabal enters the enormous swamps of the Sudd region of South Sudan. More than half of the Nile 's water is lost in this swamp to evaporation and transpiration. The average flow rate of the White Nile at the tails of the swamps is about 510 m³ / s (18,000 cu ft / s).
From here it soon meets the Sobat River at Malakal. On an annual basis, the White Nile upstream of Malakal contributes about fifteen percent of the total outflow of the Nile. The average flow of the White Nile at Lake Kawaki Malakal, just below the Sobat River, is 924 m³ / s (32,600 cu ft / s); the peak flow is approximately 1,218 m³ / s (43,000 cu ft / s) in October and the minimum flow is about 609 m³ / s (21,500 cu ft / s) in April. This fluctuation is due to the substantial variation in the flow of the Sobat, which has a minimum flow of about 99 m³ / s (3,500 cu ft / s) in March and a peak flow of over 680 m³ / s (24,000 cu ft / s) in October. During the dry season (January to June) the White Nile contributes between 70 percent and 90 percent of the total discharge from the Nile. Below Renk the White Nile enters Sudan; it flows north to Khartoum, where it meets the Blue Nile. The course of the Nile in Sudan is distinctive. It flows over six groups of cataracts, from the sixth at Sabaloka just north of Khartoum northward to Abu Hamed. Due to the tectonic uplift of the Nubian Swell, the river is then diverted to flow for over 300 km south - west, following the structure of the Central African Shear Zone embracing the Bayuda Desert. At Al Dabbah it resumes its northward course towards the first cataract at Aswan, forming the ' S ' - shaped Great Bend of the Nile already mentioned by Eratosthenes. In the north of Sudan the river enters Lake Nasser (known in Sudan as Lake Nubia), the larger part of which is in Egypt. Below the Aswan High Dam, at the northern limit of Lake Nasser, the Nile resumes its historic course. North of Cairo, the Nile splits into two branches (or distributaries) that feed the Mediterranean: the Rosetta Branch to the west and the Damietta to the east, forming the Nile Delta. Below the confluence with the Blue Nile the only major tributary is the Atbara River, roughly halfway to the sea, which originates in Ethiopia north of Lake Tana and is around 800 kilometers (500 mi) long. The Atbara flows only while there is rain in Ethiopia and dries very rapidly. During the dry period of January to June, it typically dries up; it joins the Nile north of Khartoum. The Blue Nile (Amharic: ዓባይ, ʿĀbay) springs from Lake Tana in the Ethiopian Highlands. The Blue Nile flows about 1,400 kilometres to Khartoum, where the Blue Nile and White Nile join to form the Nile. Ninety percent of the water and ninety - six percent of the transported sediment carried by the Nile originate in Ethiopia, with fifty - nine percent of the water from the Blue Nile (the rest being from the Tekezé, Atbarah, Sobat, and small tributaries). The erosion and transportation of silt only occurs during the Ethiopian rainy season in the summer, however, when rainfall is especially high on the Ethiopian Plateau; the rest of the year, the great rivers draining Ethiopia into the Nile (Sobat, Blue Nile, Tekezé, and Atbarah) have a weaker flow. In harsh and arid seasons and droughts the Blue Nile dries out completely. The flow of the Blue Nile varies considerably over its yearly cycle and is the main contribution to the large natural variation of the Nile flow. During the dry season the natural discharge of the Blue Nile can be as low as 113 m³ / s (4,000 cu ft / s), although upstream dams regulate the flow of the river. During the wet season the peak flow of the Blue Nile often exceeds 5,663 m³ / s (200,000 cu ft / s) in late August (a difference of a factor of 50).
Before the placement of dams on the river the yearly discharge varied by a factor of 15 at Aswan. Peak flows of over 8,212 m³ / s (290,000 cu ft / s) occurred during late August and early September, and minimum flows of about 552 m³ / s (19,500 cu ft / s) occurred during late April and early May. The Bahr al Ghazal and the Sobat River are the two most important tributaries of the White Nile in terms of discharge. The Bahr al Ghazal 's drainage basin is the largest of any of the Nile 's sub-basins, measuring 520,000 square kilometers (200,000 sq mi) in size, but it contributes a relatively small amount of water, about 2 m³ / s (71 cu ft / s) annually, due to tremendous volumes of water being lost in the Sudd wetlands. The Sobat River, which joins the Nile a short distance below Lake No, drains about half as much land, 225,000 km² (86,900 sq mi), but contributes 412 cubic meters per second (14,500 cu ft / s) annually to the Nile. When in flood the Sobat carries a large amount of sediment, adding greatly to the White Nile 's color. The Yellow Nile is a former tributary that connected the Ouaddaï Highlands of eastern Chad to the Nile River Valley c. 8000 to c. 1000 BC. Its remains are known as the Wadi Howar. The wadi passes through Gharb Darfur near the northern border with Chad and meets up with the Nile near the southern point of the Great Bend. The Nile (iteru in Ancient Egyptian) has been the lifeline of civilization in Egypt since the Stone Age, with most of the population and all of the cities of Egypt resting along those parts of the Nile valley lying north of Aswan. However, the Nile used to run much more westerly through what is now Wadi Hamim and Wadi al Maqar in Libya and flow into the Gulf of Sidra. As sea level rose at the end of the most recent ice age, the stream which is now the northern Nile pirated the ancestral Nile near Asyut; this change in climate also led to the creation of the current Sahara desert, around 3400 BC. The present Nile is at least the fifth river that has flowed north from the Ethiopian Highlands. Satellite imagery was used to identify dry watercourses in the desert to the west of the Nile. An Eonile canyon, now filled by surface drift, represents an ancestral Nile called the Eonile that flowed during the later Miocene (23 -- 5.3 million years before present). The Eonile transported clastic sediments to the Mediterranean; several natural gas fields have been discovered within these sediments. During the late - Miocene Messinian salinity crisis, when the Mediterranean Sea was a closed basin and evaporated to the point of being empty or nearly so, the Nile cut its course down to the new base level until it was several hundred metres below world ocean level at Aswan and 2,400 m (7,900 ft) below Cairo. This created a very long and deep canyon which was filled with sediment when the Mediterranean was recreated. At some point the sediments raised the riverbed sufficiently for the river to overflow westward into a depression to create Lake Moeris. Lake Tanganyika drained northwards into the Nile until the Virunga Volcanoes blocked its course in Rwanda. The Nile was much longer at that time, with its furthest headwaters in northern Zambia. There are two theories about the age of the integrated Nile. One is that the integrated drainage of the Nile is of young age, and that the Nile basin was formerly broken into a series of separate basins, only the most northerly of which fed a river following the present course of the Nile in Egypt and Sudan.
Rushdi Said postulated that Egypt itself supplied most of the waters of the Nile during the early part of its history. The other theory is that the drainage from Ethiopia via rivers equivalent to the Blue Nile, the Atbara and the Takazze flowed to the Mediterranean via the Egyptian Nile since well back into Tertiary times. Salama suggested that during the Paleogene and Neogene Periods (66 million to 2.588 million years ago) a series of separate closed continental basins each occupied one of the major parts of the Sudanese Rift System: Mellut rift, White Nile rift, Blue Nile rift, Atbara rift and Sag El Naam rift. The Mellut Rift Basin is nearly 12 kilometers (7.5 mi) deep at its central part. This rift is possibly still active, with reported tectonic activity in its northern and southern boundaries. The Sudd swamps which form the central part of the basin may still be subsiding. The White Nile Rift System, although shallower than the Bahr el Arab rift, is about 9 kilometers (5.6 mi) deep. Geophysical exploration of the Blue Nile Rift System estimated the depth of the sediments to be 5 -- 9 kilometers (3.1 -- 5.6 mi). These basins were not interconnected until their subsidence ceased, and the rate of sediment deposition was enough to fill and connect them. The Egyptian Nile connected to the Sudanese Nile, which captures the Ethiopian and Equatorial headwaters during the current stages of tectonic activity in the Eastern, Central and Sudanese Rift Systems. The connection of the different Niles occurred during cyclic wet periods. The River Atbara overflowed its closed basin during the wet periods that occurred about 100,000 to 120,000 years ago. The Blue Nile connected to the main Nile during the 70,000 -- 80,000 years B.P. wet period. The White Nile system in Bahr El Arab and White Nile Rifts remained a closed lake until the connection of the Victoria Nile to the main system some 12,500 years ago. The Greek historian Herodotus wrote that "Egypt was the gift of the Nile ''. An unending source of sustenance, it provided a crucial role in the development of Egyptians civilization. Silt deposits from the Nile made the surrounding land fertile because the river overflowed its banks annually. The Ancient Egyptians cultivated and traded wheat, flax, papyrus and other crops around the Nile. Wheat was a crucial crop in the famine - plagued Middle East. This trading system secured Egypt 's diplomatic relationships with other countries, and contributed to economic stability. Far - reaching trade has been carried on along the Nile since ancient times. A tune, Hymn to the Nile, was created and sung by the ancient Egyptian peoples about the flooding of the Nile River and all of the miracles it brought to Ancient Egyptian civilization. Water buffalo were introduced from Asia, and Assyrians introduced camels in the 7th century BC. These animals were killed for meat, and were domesticated and used for ploughing -- or in the camels ' case, carriage. Water was vital to both people and livestock. The Nile was also a convenient and efficient means of transportation for people and goods. The Nile was an important part of ancient Egyptian spiritual life. Hapi was the god of the annual floods, and both he and the pharaoh were thought to control the flooding. The Nile was considered to be a causeway from life to death and the afterlife. 
The east was thought of as a place of birth and growth, and the west was considered the place of death, as the god Ra, the Sun, underwent birth, death, and resurrection each day as he crossed the sky. Thus, all tombs were west of the Nile, because the Egyptians believed that in order to enter the afterlife, they had to be buried on the side that symbolized death. As the Nile was such an important factor in Egyptian life, the ancient calendar was even based on the 3 cycles of the Nile. These seasons, each consisting of four months of thirty days each, were called Akhet, Peret, and Shemu. Akhet, which means inundation, was the time of the year when the Nile flooded, leaving several layers of fertile soil behind, aiding in agricultural growth. Peret was the growing season, and Shemu, the last season, was the harvest season when there were no rains. Owing to their failure to penetrate the sudd wetlands of South Sudan, the upper reaches of the Nile remained largely unknown to the ancient Greeks and Romans. Various expeditions failed to determine the river 's source. Agatharcides records that in the time of Ptolemy II Philadelphus, a military expedition had penetrated far enough along the course of the Blue Nile to determine that the summer floods were caused by heavy seasonal rainstorms in the Ethiopian Highlands, but no European of antiquity is known to have reached Lake Tana. The Tabula Rogeriana depicted the source as three lakes in 1154. Europeans began to learn about the origins of the Nile in the 15th and 16th centuries, when travelers to Ethiopia visited Lake Tana and the source of the Blue Nile in the mountains south of the lake. Although James Bruce claimed to be the first European to have visited the headwaters, modern writers give the credit to the Jesuit Pedro Páez. Páez 's account of the source of the Nile is a long and vivid account of Ethiopia. It was published in full only in the early 20th century, although it was featured in works of Páez 's contemporaries, including Baltazar Téllez, Athanasius Kircher and by Johann Michael Vansleb. Europeans had been resident in Ethiopia since the late 15th century, and one of them may have visited the headwaters even earlier without leaving a written trace. The Portuguese João Bermudes published the first description of the Tis Issat Falls in his 1565 memoirs, compared them to the Nile Falls alluded to in Cicero 's De Republica. Jerónimo Lobo describes the source of the Blue Nile, visiting shortly after Pedro Páez. Telles also used his account. The White Nile was even less understood. The ancients mistakenly believed that the Niger River represented the upper reaches of the White Nile. For example, Pliny the Elder wrote that the Nile had its origins "in a mountain of lower Mauretania '', flowed above ground for "many days '' distance, then went underground, reappeared as a large lake in the territories of the Masaesyli, then sank again below the desert to flow underground "for a distance of 20 days ' journey till it reaches the nearest Ethiopians. '' A merchant named Diogenes reported that the Nile 's water attracted game such as buffalo. Lake Victoria was first sighted by Europeans in 1858 when the British explorer John Hanning Speke reached its southern shore while traveling with Richard Francis Burton to explore central Africa and locate the great lakes. Believing he had found the source of the Nile on seeing this "vast expanse of open water '' for the first time, Speke named the lake after the then Queen of the United Kingdom. 
Burton, recovering from illness and resting further south on the shores of Lake Tanganyika, was outraged that Speke claimed to have proved his discovery to be the true source of the Nile when Burton regarded this as still unsettled. A very public quarrel ensued, which sparked a great deal of intense debate within the scientific community and interest by other explorers keen to either confirm or refute Speke 's discovery. British explorer and missionary David Livingstone pushed too far west and entered the Congo River system instead. It was ultimately Welsh - American explorer Henry Morton Stanley who confirmed Speke 's discovery, circumnavigating Lake Victoria and reporting the great outflow at Ripon Falls on the Lake 's northern shore. European involvement in Egypt goes back to the time of Napoleon. Laird Shipyard of Liverpool sent an iron steamer to the Nile in the 1830s. With the completion of the Suez Canal and the British takeover of Egypt in the 1870s, more British river steamers followed. The Nile is the area 's natural navigation channel, giving access to Khartoum and Sudan by steamer. The Siege of Khartoum was broken with purpose - built sternwheelers shipped from England and steamed up the river to retake the city. After this came regular steam navigation of the river. With British Forces in Egypt in the First World War and the inter-war years, river steamers provided both security and sightseeing to the Pyramids and Thebes. Steam navigation remained integral to the two countries as late as 1962. Sudan steamer traffic was a lifeline, as few railways or roads were built in that country. Most paddle steamers have been retired to shorefront service, but modern diesel tourist boats remain on the river. The Nile has long been used to transport goods along its length. Winter winds blow south, up river, so ships could sail up river under sail and down river using the flow of the river. While most Egyptians still live in the Nile valley, the 1970 completion of the Aswan High Dam ended the summer floods and their renewal of the fertile soil, fundamentally changing farming practices. The Nile supports much of the population living along its banks, enabling Egyptians to live in otherwise inhospitable regions of the Sahara. The river 's flow is disturbed at several points by the Cataracts of the Nile, which are sections of faster - flowing water with many small islands, shallow water, and rocks that form an obstacle to navigation by boats. The Sudd wetlands in Sudan also form a formidable navigation obstacle and impede water flow, to the extent that Sudan once attempted to canalize them (the Jonglei Canal) to bypass the swamps. Nile cities include Khartoum, Aswan, Luxor (Thebes), and the Giza -- Cairo conurbation. The first cataract, the closest to the mouth of the river, is at Aswan, north of the Aswan Dam. This part of the river is a regular tourist route, with cruise ships and traditional wooden sailing boats known as feluccas. Many cruise ships ply the route between Luxor and Aswan, stopping at Edfu and Kom Ombo along the way. Security concerns have limited cruising on the northernmost portion for many years. A computer simulation study to plan the economic development of the Nile was directed by H.A.W. Morrice and W.N. Allan, for the Ministry of Hydro - power of the Republic of the Sudan, during 1955 -- 1957. Morrice was their Hydrological Adviser, and Allan his predecessor. M.P. Barnett directed the software development and computer operations.
The calculations were enabled by accurate monthly inflow data collected for 50 years. The underlying principle was the use of over-year storage, to conserve water from rainy years for use in dry years. Irrigation, navigation and other needs were considered. Each computer run postulated a set of reservoirs and operating equations for the release of water as a function of the month and the levels upstream. The behavior that would have resulted given the inflow data was modeled. Over 600 models were run. Recommendations were made to the Sudanese authorities. The calculations were run on an IBM 650 computer. Simulation studies to design water resources are discussed further in the article on hydrology transport models, that have been used since the 1980s to analyze water quality. Despite the development of many reservoirs, drought during the 1980s led to widespread starvation in Ethiopia and Sudan, but Egypt was nourished by water impounded in Lake Nasser. Drought has proven to be a major cause of fatality in the Nile river basin. According to a report by the Strategic Foresight Group around 170 million people have been affected by droughts in the last century with half a million lives lost. From the 70 incidents of drought which took place between 1900 and 2012, 55 incidents took place in Ethiopia, Sudan, South Sudan, Kenya and Tanzania. The Nile 's water has affected the politics of East Africa and the Horn of Africa for many decades. Countries including Uganda, Sudan, Ethiopia and Kenya have complained about Egyptian domination of its water resources. The Nile Basin Initiative promotes a peaceful cooperation among those states. Several attempts have been made to establish agreements between the countries sharing the Nile waters. It is very difficult to have all these countries agree with each other given the self - interest of each country and their political, strategic, and social differences. On 14 May 2010 at Entebbe, Ethiopia, Rwanda, Tanzania and Uganda signed a new agreement on sharing the Nile water even though this agreement raised strong opposition from Egypt and Sudan. Ideally, such international agreements should promote equitable and efficient usage of the Nile basin 's water resources. Without a better understanding about the availability of the future water resources of the Nile, it is possible that conflicts could arise between these countries relying on the Nile for their water supply, economic and social developments. In 1951, the American John Goddard together with two French explorers became the first to successfully navigate the entire Nile river from its source in Burundi at the potential headsprings of the Kagera River in Burundi to its mouth on the Mediterranean Sea, a journey of approximately 6800 kilometers. Their 9 - month journey is described in the book ' Kayaks down the Nile '. The White Nile Expedition, led by South African national Hendrik Coetzee, navigated the White Nile 's entire length of approximately 3,700 kilometres (2,300 mi). The expedition began at the White Nile 's beginning at Lake Victoria in Uganda, on 17 January 2004 and arrived safely at the Mediterranean in Rosetta, four and a half months later. On the 30th of April 2005 a team led by South Africans Peter Meredith and Hendrik Coetzee, following again in the footsteps of John Goddard, navigated the major remote source of the White Nile, the Akagera river that starts as the Ruvyironza in Bururi Province, Burundi, and ends at Lake Victoria, Uganda. 
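The over-year storage principle behind the Morrice and Allan simulation study described above can be illustrated with a short sketch: water banked in wet years is carried forward to cover demand in dry years, and each run scores how well a postulated reservoir and release rule perform against a historical inflow record. The code below is a minimal illustration of that idea only, not a reconstruction of the original 1955 -- 1957 model; the inflow series, capacity and demand figures are made-up assumptions.

```python
# Minimal sketch of the over-year storage idea behind the Morrice-Allan Nile study:
# carry water stored in wet years forward to meet demand in dry years.
# The inflow series, capacity and demand figures below are illustrative assumptions,
# not data from the original 1955-1957 runs.

def simulate_reservoir(monthly_inflows, capacity, monthly_demand, initial_storage=0.0):
    """Step a single reservoir through a monthly inflow record.

    monthly_inflows: list of inflow volumes (e.g. billion m^3 per month)
    capacity:        maximum storage volume
    monthly_demand:  target release per month (irrigation, navigation, etc.)
    Returns (releases, shortfalls, spills), one entry per month.
    """
    storage = initial_storage
    releases, shortfalls, spills = [], [], []
    for inflow in monthly_inflows:
        storage += inflow
        release = min(monthly_demand, storage)  # meet demand if water is available
        storage -= release
        spill = max(0.0, storage - capacity)    # excess above capacity is spilled downstream
        storage -= spill
        releases.append(release)
        shortfalls.append(monthly_demand - release)
        spills.append(spill)
    return releases, shortfalls, spills


if __name__ == "__main__":
    # Two synthetic years: a wet year followed by a dry year.
    wet_year = [8, 7, 5, 3, 2, 2, 3, 10, 14, 12, 10, 9]
    dry_year = [4, 3, 2, 1, 1, 1, 2, 5, 7, 6, 5, 4]
    _, shortfalls, _ = simulate_reservoir(wet_year + dry_year, capacity=40.0, monthly_demand=5.0)
    print("total shortfall over the two years:", sum(shortfalls))
```

In the historical study, many such runs with different postulated reservoirs and release rules were compared against the recorded inflows; the sketch above only shows the bookkeeping a single run involves.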
In April 2006, the Ascend the Nile Expedition, including two explorers from Britain and one from New Zealand, ascended the river from its mouth at Rosetta to one of its sources in Rwanda 's Nyungwe Forest. The team, including Cam McLeay, Neil McGrigor and Garth MacIntyre, spent 70 days travelling to the Rwandese source of the Nile, covering approximately 6,800 kilometres. During the expedition they were ambushed by the LRA (Lord 's Resistance Army) led by Joseph Kony, but they returned six months after the attack to complete the journey. They measured the length of the river with the help of GPS and claimed to have found the furthest source. Controversy has ensued, however, owing to the unscientific approach of their expedition, their reluctance to release the GPS data, and their failure to measure the other contender for the true source of the Nile, in Burundi. The Blue Nile Expedition, led by geologist Pasquale Scaturro and his partner, kayaker and documentary filmmaker Gordon Brown, became the first to descend the entire Blue Nile, from Lake Tana in Ethiopia to the beaches of Alexandria on the Mediterranean. Their journey of approximately 5,230 kilometres (3,250 mi) took 114 days: from 25 December 2003 to 28 April 2004. Though their expedition included others, Brown and Scaturro were the only ones to complete the entire journey. Although they descended whitewater manually, the team used outboard motors for much of their journey. On 29 January 2005, Canadian Les Jickling and New Zealander Mark Tanner completed the first human - powered transit of Ethiopia 's Blue Nile. Their journey of over 5,000 kilometres (3,100 mi) took five months. They recount that they paddled through two war zones, regions notorious for bandits, and were arrested at gunpoint. Bridges cross the Blue Nile between Khartoum and Khartoum North and the White Nile between Khartoum and Omdurman, with further crossings from Omdurman to Khartoum North and from Tuti Island to the three cities of Khartoum state.
when does the dragon ball super special come out
List of Dragon Ball Super episodes - wikipedia Dragon Ball Super is a Japanese anime television series produced by Toei Animation that began airing on July 5, 2015 on Fuji TV. It is the first Dragon Ball television series featuring a new storyline in 18 years. Storywise, the series retells the events of the last two Dragon Ball Z films, Battle of Gods and Resurrection ' F ', which themselves follow the events of Dragon Ball Z. Afterwards, the series proceeds to tell an original story about the exploration of other universes, the reemergence of Future Trunks, and a new threat to his Earth known as Goku Black and a Supreme Kai from Universe 10 named Zamasu. Later, the Z Fighters participate in a universal tournament held by Zeno - sama to decide the fate of multiple universes. If they lose in the universal tournament, then their entire universe will be erased. Thirteen pieces of theme music are used: two opening themes and eleven ending themes. For the first 76 episodes, the opening theme is "Chōzetsu ☆ Dynamic! '' (超絶 ☆ ダイナミック!, Chōzetsu Dainamikku, "Excellent Dynamic! '') performed by Kazuya Yoshii of The Yellow Monkey. The lyrics were penned by Yukinojo Mori who has written numerous songs for the Dragon Ball series. Beginning with episode 77, the second opening theme is "Limit - Break x Survivor '' (限界 突破 × サバイバー, Genkai Toppa x Sabaibā) by Kiyoshi Hikawa. Mori wrote the lyrics for the rock number "Genkai Toppa x Survivor '' and Takafumi Iwasaki composed the music. For the first 12 episodes, the ending theme is "Hello Hello Hello '' (ハロー ハロー ハロー, Harō Harō Harō) by Japanese rock band Good Morning America. The second ending theme song for episodes 13 to 25 is "Starring Star '' (スターリング スター, Sutāringu Sutā) by Key Talk. The third ending theme song for episodes 26 to 36 is "Usubeni '' (薄 紅, "Light Pink '') by Lacco Tower. The fourth ending theme song for episodes 37 to 49 is "Forever Dreaming '' (フォーエバー ドリーミング, Fōebā Dorīmingu) by Czecho No Republic. The fifth ending theme song for episodes 50 to 59 is "Yokayoka Dance '' (よかよか ダンス, Yokayoka Dansu, "It 's Fine Dance '') by idol group Batten Showjo Tai. The sixth ending theme for episodes 60 to 72 is "Chao Han Music '' (炒飯 MUSIC, Chāhan Myūjikku) by Arukara. The seventh ending theme from episodes 73 to 83 is "Aku no Tenshi to Seigi no Akuma '' (悪 の 天使 と 正義 の 悪魔, An Evil Angel and the Righteous Devil) by THE COLLECTORS. The eighth ending theme from episodes 84 to 96 is "Boogie Back '' by Miyu Innoue. The ninth ending theme from episodes 97 to 108 is "Haruka '' by Lacco Tower. Beginning with episode 109, the tenth ending theme is "By A 70cm Square Window '' by the rock band RottenGraffty. The eleventh ending theme is "Lagrima '' by OnePixecel. The anime episodes are being released on Japanese Blu - ray and DVD sets that contain twelve episodes each. The first set was released on December 2, 2015. The second set was released on March 2, 2016. The third set was released on July 2, 2016. The fourth set was released on October 10, 2016. Dragon Ball Super received an English - language dub that premiered on the Toonami channel in Southeast Asia and India in January 2017. The series has been aired in Israel on Nickelodeon and in Portugal on SIC. Toei Animation Europe announced that Dragon Ball Super would be broadcast in France, Italy, Spain, and English - speaking Africa in Fall 2016. An official English sub of the series would be simulcasted legally on Crunchyroll, Daisuki.net, and Anime Lab beginning October 22, 2016. 
Funimation announced the company acquired the rights to the series and will be producing an English dub. As well as officially announcing the dub, it was also announced they will be simulcasting the series on their streaming platform, FunimationNow. Funimation 's English dub of Dragon Ball Super started airing on Adult Swim 's Toonami block starting January 7, 2017. The Supreme Kais are surprised that the universe is still intact following the battle, but they fear the worst is still to come. On Earth, Vegeta, Whis, and the others are still standing by while watching the battle. Whis is surprised by the Super Saiyan God 's power and its ability to keep up with his trainee. In space, Goku struggles to keep up with Beerus ' attack, which ultimately ends up in a massive explosion that blinds everyone on Earth. Shortly after, the light clears out, which reveals everything to be as it was prior to the explosion. Beerus explains that he used his full power to negate the explosion, which saved the universe. Seeing it as a perfect opportunity to boast, Mr. Satan arranges to have himself be falsely credited with saving the planet yet again. Despite being at his limit, Goku remains calm, which annoys the God of Destruction. Beerus thinks Goku might have a strategy that he has been hiding, which Goku promptly denies. Goku says that everything he had been doing was improvised as they fought. The Gods quickly power up and continue fighting, but this time both are at their limit. As soon as they start, Goku loses his Super Saiyan God aura and reverts to the ordinary Super Saiyan form. Upon noticing that, Beerus decides to quit. He thinks it is pointless to fight an ordinary Super Saiyan. However, Goku does not notice and keeps going at it. Whis is able to sense Goku 's mortal energy. Whis assumes that the battle has concluded and that Goku has lost. However, Piccolo begs to differ. Surprised that Goku is still able to hit him even after losing his Super Saiyan God form, Beerus surmises that Goku 's body has adjusted to the Super Saiyan God power. This made him stronger in his ordinary form. With or without the Super Saiyan God form, Goku proclaims that it is still him that Beerus is up against. Beerus and Goku resume their battle of Gods. To counter Goku 's increase in power, Kefla powers up to Super Saiyan 2, and the two of them face off. Goku still easily dodges Kefla 's attacks, but his own attacks are not enough to take her down. When Goku launches his attacks, it interferes with his concentration and prevents him from using Ultra Instinct to its full potential. Jiren senses the energy from their battle, which prompts him to awaken from his meditation and rejoin Toppo and Dyspo. Vegeta realizes that Ultra Instinct is the level of skill that Whis was training him and Goku to attain. Vegeta decides that he must reach it too. Goku begins running low on stamina. He declares that he will end the fight with his next attack. Kefla panics and unleashes a multitude of deadly energy beams. Her ultimate attack devastates the ring, but Goku easily dodges her blasts while charging a Kamehameha. Goku jumps into the air. Kefla focuses all of her power into a single blast and launches it at him. She takes advantage of his apparent inability to dodge. However, he back flips and uses the charge up energy to slide over her attack and launches his Kamehameha at point - blank range. Goku blasts Kefla out of the ring and eliminates her. Her Potara earrings shatter, and she splits back into Kale and Caulifla. 
With both of them eliminated, Saonel and Pirina are the only remaining warriors from Team Universe 6.
who came in second in the belmont stakes
Belmont Stakes top three finishers - wikipedia This is a listing of the horses that finished in either first, second, or third place and the number of starters in the Belmont Stakes, the third leg of the United States Triple Crown of Thoroughbred Racing run at 1 1 / 2 miles on dirt for three - year - olds at Belmont Park in Elmont, New York. A † designates a Triple Crown Winner. A ‡ designates a Filly. Note: D. Wayne Lukas swept the 1995 Triple Crown with two different horses.
nazareth love hurts other recordings of this song
Love Hurts - wikipedia "Love Hurts '' is a song written and composed by the American songwriter Boudleaux Bryant. First recorded by The Everly Brothers in July 1960, the song is also well known from a 1975 international hit version by the Scottish hard rock band Nazareth and in the UK by a top five hit in 1975 by the English singer Jim Capaldi. The song was introduced in December 1960 as an album track on A Date with The Everly Brothers, but was never released as a single (A-side or B - side) by the Everlys. The first hit version of the song was by Roy Orbison, who earned Australian radio play, hitting the Top Five of that country 's singles charts in 1961. A recording by Emmylou Harris and Gram Parsons was included on Parsons ' posthumously released Grievous Angel album. After Parsons ' 1973 death, Harris made the song a staple of her repertoire, and has included it in her concert set lists from the 1970s to the present. Harris has since re-recorded the song twice. The most successful recording of the song was by hard rock band Nazareth, who took the song to the U.S. Top 10 in 1975 and hit number one in Norway and the Netherlands. In the UK the most successful version of the song was by former Traffic member Jim Capaldi, who took it to number four in the charts in November 1975 during an 11 - week run. The song was also covered by Cher in 1975 for her album Stars. Cher re-recorded the song in 1991 for her album of the same name. Rod Stewart recorded the song in 2006 for his album Still the Same... Great Rock Classics of Our Time which was No. 1 on the Billboard 200 chart. Roy Orbison covered "Love Hurts '' in 1961 and issued it as the B - side to "Running Scared ''. While "Running Scared '' was an international hit, the B - side only picked up significant airplay in Australia. Consequently, chart figures for Australia show "Running Scared '' / "Love Hurts '' as a double A-Side, both sides peaking at No. 5. This makes Orbison 's recording of "Love Hurts '' the first version to be a hit. Jim Capaldi reached number 4 in the UK charts with his interpretation of "Love Hurts '' in November 1975, which was to prove his highest charting UK single. Described by Rolling Stone as having "a sense of pain very different from Roy Orbison 's. '' the single also charted in the US, Germany, and Sweden. Performed as a power ballad, the Nazareth version was the most popular version of the song and the only rendition of "Love Hurts '' to become a hit single in the United States, reaching No. 8 on the Billboard Hot 100 in early 1976. Billboard ranked it as the No. 23 song for 1976. As part of the "Hot Tracks (EP) '' it also reached No. 15 in the UK in 1977. Nazareth 's version was an international hit, peaking at No. 1 in Canada, the Netherlands, Belgium, South Africa and Norway, and remains the best - known recording of the song. The Nazareth single was so successful in Norway that it charted for 61 weeks on the Norwegian charts (VG - lista Top 10), including 14 weeks at No. 1, making it the top single of all time in that country. A later recording by Nazareth, featuring the Munich Philharmonic Orchestra, peaked at No. 89 in Germany. The lyrics of the song remained unchanged on all versions up until Nazareth 's 1975 recording, where the original line "love is like a stove / it burns you when it 's hot '' was changed to "love is like a flame / it burns you when it 's hot ''. Cher used this Nazareth version for her 1991 re-recording of the song for her album of the same name. 
The Nazareth version has been featured in the movies Dazed and Confused, Detroit Rock City, Together, Click, and Halloween, and season 3 of Gotham, among others. It is included in the soundtrack to the 2000 Swedish film Together and can be heard in several scenes. It was edited for use in a late - ' 90s Gatorade TV commercial. Other companies to have used the song in advertising include Southwest Airlines, Molson, Nissan (for the Altima), Zurich (worldwide ' True Love ' advertising), and Toyota in Australia. A cover of this version was sung by Courtney Taylor - Taylor, lead singer of The Dandy Warhols, in "Cheatty Cheatty Bang Bang '', a second - season episode of Veronica Mars. A cover was sung by Nan Vernon for the film Halloween II. A very similar version was performed in the "That ' 70s Musical '' episode of That ' 70s Show. Cher also recorded the song in 1975 but did not have a hit with it at the time. She recorded a second version in 1991 for the album of the same name. The single became a minor hit in the UK in December 1991. Cher has also performed the song on several of her concert tours.
the long term source of energy that powers the sun is
Sun - wikipedia The Sun is the star at the center of the Solar System. It is a nearly perfect sphere of hot plasma, with internal convective motion that generates a magnetic field via a dynamo process. It is by far the most important source of energy for life on Earth. Its diameter is about 1.39 million kilometers, i.e. 109 times that of Earth, and its mass is about 330,000 times that of Earth, accounting for about 99.86 % of the total mass of the Solar System. About three quarters of the Sun 's mass consists of hydrogen (~ 73 %); the rest is mostly helium (~ 25 %), with much smaller quantities of heavier elements, including oxygen, carbon, neon, and iron. The Sun is a G - type main - sequence star (G2V) based on its spectral class. As such, it is informally referred to as a yellow dwarf. It formed approximately 4.6 billion years ago from the gravitational collapse of matter within a region of a large molecular cloud. Most of this matter gathered in the center, whereas the rest flattened into an orbiting disk that became the Solar System. The central mass became so hot and dense that it eventually initiated nuclear fusion in its core. It is thought that almost all stars form by this process. The Sun is roughly middle - aged; it has not changed dramatically for more than four billion years, and will remain fairly stable for more than another five billion years. After hydrogen fusion in its core has diminished to the point at which it is no longer in hydrostatic equilibrium, the core of the Sun will experience a marked increase in density and temperature while its outer layers expand to eventually become a red giant. It is calculated that the Sun will become sufficiently large to engulf the current orbits of Mercury and Venus, and render Earth uninhabitable. The enormous effect of the Sun on Earth has been recognized since prehistoric times, and the Sun has been regarded by some cultures as a deity. The synodic rotation of Earth and its orbit around the Sun are the basis of solar calendars, one of which is the predominant calendar in use today. The English proper name Sun developed from Old English sunne and may be related to south. Cognates to English sun appear in other Germanic languages, including Old Frisian sunne, sonne, Old Saxon sunna, Middle Dutch sonne, modern Dutch zon, Old High German sunna, modern German Sonne, Old Norse sunna, and Gothic sunnō. All Germanic terms for the Sun stem from Proto - Germanic * sunnōn. The English weekday name Sunday stems from Old English (Sunnandæg; "Sun 's day '', from before 700) and is ultimately a result of a Germanic interpretation of Latin dies solis, itself a translation of the Greek ἡμέρα ἡλίου (hēméra hēlíou). The Latin name for the Sun, Sol, is not common in general English language use; the adjectival form is the related word solar. The term sol is also used by planetary astronomers to refer to the duration of a solar day on another planet, such as Mars. A mean Earth solar day is approximately 24 hours, whereas a mean Martian ' sol ' is 24 hours, 39 minutes, and 35.244 seconds. Solar deities play a major role in many world religions and mythologies. The ancient Sumerians believed that the sun was Utu, the god of justice and twin brother of Inanna, the Queen of Heaven, who was identified as the planet Venus. Later, Utu was identified with the East Semitic god Shamash. 
Utu was regarded as a helper - deity, who aided those in distress, and, in iconography, he is usually portrayed with a long beard and clutching a saw, which represented his role as the dispenser of justice. From at least the 4th Dynasty of Ancient Egypt, the Sun was worshipped as the god Ra, portrayed as a falcon - headed divinity surmounted by the solar disk, and surrounded by a serpent. In the New Empire period, the Sun became identified with the dung beetle, whose spherical ball of dung was identified with the Sun. In the form of the Sun disc Aten, the Sun had a brief resurgence during the Amarna Period when it again became the preeminent, if not only, divinity for the Pharaoh Akhenaton. In Proto - Indo - European religion, the sun was personified as the goddess * Seh ul. Derivatives of this goddess in Indo - European languages include the Old Norse Sól, Sanskrit Surya, Gaulish Sulis, Lithuanian Saulė, and Slavic Solntse. In ancient Greek religion, the sun deity was the male god Helios, but traces of an earlier female solar deity are preserved in Helen of Troy. In later times, Helios was syncretized with Apollo. In the Bible, Malachi 4: 2 mentions the "Sun of Righteousness '' (sometimes translated as the "Sun of Justice ''), which some Christians have interpreted as a reference to the Messiah (Christ). In ancient Roman culture, Sunday was the day of the Sun god. It was adopted as the Sabbath day by Christians who did not have a Jewish background. The symbol of light was a pagan device adopted by Christians, and perhaps the most important one that did not come from Jewish traditions. In paganism, the Sun was a source of life, giving warmth and illumination to mankind. It was the center of a popular cult among Romans, who would stand at dawn to catch the first rays of sunshine as they prayed. The celebration of the winter solstice (which influenced Christmas) was part of the Roman cult of the unconquered Sun (Sol Invictus). Christian churches were built with an orientation so that the congregation faced toward the sunrise in the East. Tonatiuh, the Aztec god of the sun, was usually depicted holding arrows and a shield and was closely associated with the practice of human sacrifice. The sun goddess Amaterasu is the most important deity in the Shinto religion, and she is believed to be the direct ancestor of all Japanese emperors. The Sun is a G - type main - sequence star that comprises about 99.86 % of the mass of the Solar System. The Sun has an absolute magnitude of + 4.83, estimated to be brighter than about 85 % of the stars in the Milky Way, most of which are red dwarfs. The Sun is a Population I, or heavy - element - rich, star. The formation of the Sun may have been triggered by shockwaves from one or more nearby supernovae. This is suggested by a high abundance of heavy elements in the Solar System, such as gold and uranium, relative to the abundances of these elements in so - called Population II, heavy - element - poor, stars. The heavy elements could most plausibly have been produced by endothermic nuclear reactions during a supernova, or by transmutation through neutron absorption within a massive second - generation star. The Sun is by far the brightest object in the Earth 's sky, with an apparent magnitude of − 26.74. This is about 13 billion times brighter than the next brightest star, Sirius, which has an apparent magnitude of − 1.46. 
The mean distance of the Sun 's center to Earth 's center is approximately 1 astronomical unit (about 150,000,000 km; 93,000,000 mi), though the distance varies as Earth moves from perihelion in January to aphelion in July. At this average distance, light travels from the Sun 's horizon to Earth 's horizon in about 8 minutes and 19 seconds, while light from the closest points of the Sun and Earth takes about two seconds less. The energy of this sunlight supports almost all life on Earth by photosynthesis, and drives Earth 's climate and weather. The Sun does not have a definite boundary, but its density decreases exponentially with increasing height above the photosphere. For the purpose of measurement, however, the Sun 's radius is considered to be the distance from its center to the edge of the photosphere, the apparent visible surface of the Sun. By this measure, the Sun is a near - perfect sphere with an oblateness estimated at about 9 millionths, which means that its polar diameter differs from its equatorial diameter by only 10 kilometres (6.2 mi). The tidal effect of the planets is weak and does not significantly affect the shape of the Sun. The Sun rotates faster at its equator than at its poles. This differential rotation is caused by convective motion due to heat transport and the Coriolis force due to the Sun 's rotation. In a frame of reference defined by the stars, the rotational period is approximately 25.6 days at the equator and 33.5 days at the poles. Viewed from Earth as it orbits the Sun, the apparent rotational period of the Sun at its equator is about 28 days. The solar constant is the amount of power that the Sun deposits per unit area that is directly exposed to sunlight. The solar constant is equal to approximately 1,368 W / m² (watts per square meter) at a distance of one astronomical unit (AU) from the Sun (that is, on or near Earth). Sunlight on the surface of Earth is attenuated by Earth 's atmosphere, so that less power arrives at the surface (closer to 1,000 W / m²) in clear conditions when the Sun is near the zenith. Sunlight at the top of Earth 's atmosphere is composed (by total energy) of about 50 % infrared light, 40 % visible light, and 10 % ultraviolet light. The atmosphere in particular filters out over 70 % of solar ultraviolet, especially at the shorter wavelengths. Solar ultraviolet radiation ionizes Earth 's dayside upper atmosphere, creating the electrically conducting ionosphere. The Sun 's color is white, with a CIE color - space index near (0.3, 0.3), when viewed from space or when the Sun is high in the sky. When measuring all the photons emitted, the Sun is actually emitting more photons in the green portion of the spectrum than any other. When the Sun is low in the sky, atmospheric scattering renders the Sun yellow, red, orange, or magenta. Despite its typical whiteness, most people mentally picture the Sun as yellow; the reasons for this are the subject of debate. The Sun is a G2V star, with G2 indicating its surface temperature of approximately 5,778 K (5,505 ° C, 9,941 ° F), and V that it, like most stars, is a main - sequence star. The average luminance of the Sun is about 1.88 giga candela per square metre, but as viewed through Earth 's atmosphere, this is lowered to about 1.44 Gcd / m². However, the luminance is not constant across the disk of the Sun (limb darkening).
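The light - travel time and solar constant quoted above follow from simple arithmetic. The short sketch below checks both figures using standard textbook values for the astronomical unit, the speed of light and the solar luminosity; those constants are assumptions of the sketch, not values taken from this article.

```python
# Back-of-the-envelope check of two figures quoted above: the ~8 min 19 s
# light travel time over 1 AU, and the ~1,368 W/m^2 solar constant.
import math

AU = 1.496e11      # mean Sun-Earth distance, m
C = 2.998e8        # speed of light, m/s
L_SUN = 3.846e26   # solar luminosity, W

travel_time_s = AU / C
minutes, seconds = divmod(travel_time_s, 60)
print(f"light travel time over 1 AU: {int(minutes)} min {seconds:.0f} s")  # ~8 min 19 s

# Spreading the luminosity over a sphere of radius 1 AU gives the flux at Earth.
solar_constant = L_SUN / (4 * math.pi * AU**2)
print(f"solar constant: {solar_constant:.0f} W/m^2")                        # ~1,368 W/m^2
```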
The Sun is composed primarily of the chemical elements hydrogen and helium; they account for 74.9 % and 23.8 % of the mass of the Sun in the photosphere, respectively. All heavier elements, called metals in astronomy, account for less than 2 % of the mass, with oxygen (roughly 1 % of the Sun 's mass), carbon (0.3 %), neon (0.2 %), and iron (0.2 %) being the most abundant. The Sun inherited its chemical composition from the interstellar medium out of which it formed. The hydrogen and helium in the Sun were produced by Big Bang nucleosynthesis, and the heavier elements were produced by stellar nucleosynthesis in generations of stars that completed their stellar evolution and returned their material to the interstellar medium before the formation of the Sun. The chemical composition of the photosphere is normally considered representative of the composition of the primordial Solar System. However, since the Sun formed, some of the helium and heavy elements have gravitationally settled from the photosphere. Therefore, in today 's photosphere the helium fraction is reduced, and the metallicity is only 84 % of what it was in the protostellar phase (before nuclear fusion in the core started). The protostellar Sun 's composition is believed to have been 71.1 % hydrogen, 27.4 % helium, and 1.5 % heavier elements. Today, nuclear fusion in the Sun 's core has modified the composition by converting hydrogen into helium, so the innermost portion of the Sun is now roughly 60 % helium, with the abundance of heavier elements unchanged. Because heat is transferred from the Sun 's core by radiation rather than by convection (see Radiative zone below), none of the fusion products from the core have risen to the photosphere. The reactive core zone of "hydrogen burning '', where hydrogen is converted into helium, is starting to surround an inner core of "helium ash ''. This development will continue and will eventually cause the Sun to leave the main sequence, to become a red giant. The solar heavy - element abundances described above are typically measured both using spectroscopy of the Sun 's photosphere and by measuring abundances in meteorites that have never been heated to melting temperatures. These meteorites are thought to retain the composition of the protostellar Sun and are thus not affected by settling of heavy elements. The two methods generally agree well. In the 1970s, much research focused on the abundances of iron - group elements in the Sun. Although significant research was done, until 1978 it was difficult to determine the abundances of some iron - group elements (e.g. cobalt and manganese) via spectrography because of their hyperfine structures. The first largely complete set of oscillator strengths of singly ionized iron - group elements were made available in the 1960s, and these were subsequently improved. In 1978, the abundances of singly ionized elements of the iron group were derived. Various authors have considered the existence of a gradient in the isotopic compositions of solar and planetary noble gases, e.g. correlations between isotopic compositions of neon and xenon in the Sun and on the planets. Prior to 1983, it was thought that the whole Sun has the same composition as the solar atmosphere. In 1983, it was claimed that it was fractionation in the Sun itself that caused the isotopic - composition relationship between the planetary and solar - wind - implanted noble gases. The core of the Sun extends from the center to about 20 -- 25 % of the solar radius. 
It has a density of up to 150 g / cm³ (about 150 times the density of water) and a temperature of close to 15.7 million kelvins (K). By contrast, the Sun 's surface temperature is approximately 5,800 K. Recent analysis of SOHO mission data favors a faster rotation rate in the core than in the radiative zone above. Through most of the Sun 's life, energy has been produced by nuclear fusion in the core region through a series of steps called the p -- p (proton -- proton) chain; this process converts hydrogen into helium. Only 0.8 % of the energy generated in the Sun comes from the CNO cycle, though this proportion is expected to increase as the Sun becomes older. The core is the only region in the Sun that produces an appreciable amount of thermal energy through fusion; 99 % of the power is generated within 24 % of the Sun 's radius, and by 30 % of the radius, fusion has stopped nearly entirely. The remainder of the Sun is heated by this energy as it is transferred outwards through many successive layers, finally to the solar photosphere where it escapes into space as sunlight or the kinetic energy of particles. The proton -- proton chain occurs around 9.2 × 10³⁷ times each second in the core, converting about 3.7 × 10³⁸ protons into alpha particles (helium nuclei) every second (out of a total of ~ 8.9 × 10⁵⁶ free protons in the Sun), or about 6.2 × 10¹¹ kg / s. Fusing four free protons (hydrogen nuclei) into a single alpha particle (helium nucleus) releases around 0.7 % of the fused mass as energy, so the Sun releases energy at the mass -- energy conversion rate of 4.26 million metric tons per second (which requires 600 metric megatons of hydrogen), for 384.6 yottawatts (3.846 × 10²⁶ W), or 9.192 × 10¹⁰ megatons of TNT per second. Theoretical models of the Sun 's interior indicate a power density of approximately 276.5 W / m³, a value that more nearly approximates that of reptile metabolism or a compost pile than of a thermonuclear bomb. The fusion rate in the core is in a self - correcting equilibrium: a slightly higher rate of fusion would cause the core to heat up more and expand slightly against the weight of the outer layers, reducing the density and hence the fusion rate and correcting the perturbation; and a slightly lower rate would cause the core to cool and shrink slightly, increasing the density and increasing the fusion rate and again reverting it to its present rate. From the core out to about 0.7 solar radii, thermal radiation is the primary means of energy transfer. The temperature drops from approximately 7 million to 2 million kelvins with increasing distance from the core. This temperature gradient is less than the value of the adiabatic lapse rate and hence can not drive convection, which explains why the transfer of energy through this zone is by radiation instead of thermal convection. Ions of hydrogen and helium emit photons, which travel only a brief distance before being reabsorbed by other ions. The density drops a hundredfold (from 20 g / cm³ to 0.2 g / cm³) from 0.25 solar radii to 0.7 radii, the top of the radiative zone. The radiative zone and the convective zone are separated by a transition layer, the tachocline. This is a region where the sharp regime change between the uniform rotation of the radiative zone and the differential rotation of the convection zone results in a large shear between the two -- a condition where successive horizontal layers slide past one another.
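The mass -- energy conversion rate quoted above can be checked from E = mc²: dividing the solar luminosity by c² gives the mass converted to energy each second, and dividing that by the 0.7 % fusion mass deficit gives the hydrogen consumed. The sketch below uses standard values for the luminosity and the speed of light; small differences from the quoted 4.26 million metric tons per second are rounding.

```python
# The quoted mass-energy conversion rate follows directly from E = m c^2.
# Luminosity and c are standard values; the 0.7% figure is the fusion mass deficit quoted above.

C = 2.998e8        # speed of light, m/s
L_SUN = 3.846e26   # solar luminosity, W (3.846 x 10^26 W)

mass_to_energy_per_s = L_SUN / C**2   # kg of mass converted to energy per second
print(f"mass converted to energy: {mass_to_energy_per_s/1e9:.2f} million tonnes per second")  # ~4.28

# Only ~0.7% of the fused mass is released as energy, so the hydrogen
# consumed per second is roughly 1/0.007 times larger.
hydrogen_fused_per_s = mass_to_energy_per_s / 0.007
print(f"hydrogen fused: {hydrogen_fused_per_s/1e9:.0f} million tonnes per second")            # ~600
```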
Presently, it is hypothesized (see Solar dynamo) that a magnetic dynamo within this layer generates the Sun 's magnetic field. The Sun 's convection zone extends from 0.7 solar radii (200,000 km) to near the surface. In this layer, the solar plasma is not dense enough or hot enough to transfer the heat energy of the interior outward via radiation. Instead, the density of the plasma is low enough to allow convective currents to develop and move the Sun 's energy outward towards its surface. Material heated at the tachocline picks up heat and expands, thereby reducing its density and allowing it to rise. As a result, an orderly motion of the mass develops into thermal cells that carry the majority of the heat outward to the Sun 's photosphere above. Once the material diffusively and radiatively cools just beneath the photospheric surface, its density increases, and it sinks to the base of the convection zone, where it again picks up heat from the top of the radiative zone and the convective cycle continues. At the photosphere, the temperature has dropped to 5,700 K and the density to only 0.2 g / m³ (about 1 / 6,000 the density of air at sea level). The thermal columns of the convection zone form an imprint on the surface of the Sun giving it a granular appearance called the solar granulation at the smallest scale and supergranulation at larger scales. Turbulent convection in this outer part of the solar interior sustains "small - scale '' dynamo action over the near - surface volume of the Sun. The Sun 's thermal columns are Bénard cells and take the shape of hexagonal prisms. The visible surface of the Sun, the photosphere, is the layer below which the Sun becomes opaque to visible light. Above the photosphere visible sunlight is free to propagate into space, and almost all of its energy escapes the Sun entirely. The change in opacity is due to the decreasing amount of H⁻ ions, which absorb visible light easily. Conversely, the visible light we see is produced as electrons react with hydrogen atoms to produce H⁻ ions. The photosphere is tens to hundreds of kilometers thick, and is slightly less opaque than air on Earth. Because the upper part of the photosphere is cooler than the lower part, an image of the Sun appears brighter in the center than on the edge or limb of the solar disk, in a phenomenon known as limb darkening. The spectrum of sunlight has approximately the spectrum of a black - body radiating at about 6,000 K, interspersed with atomic absorption lines from the tenuous layers above the photosphere. The photosphere has a particle density of ~ 10²³ m⁻³ (about 0.37 % of the particle number per volume of Earth 's atmosphere at sea level). The photosphere is not fully ionized -- the extent of ionization is about 3 %, leaving almost all of the hydrogen in atomic form. During early studies of the optical spectrum of the photosphere, some absorption lines were found that did not correspond to any chemical elements then known on Earth. In 1868, Norman Lockyer hypothesized that these absorption lines were caused by a new element that he dubbed helium, after the Greek Sun god Helios. Twenty - five years later, helium was isolated on Earth. During a total solar eclipse, when the disk of the Sun is covered by that of the Moon, parts of the Sun 's surrounding atmosphere can be seen. It is composed of four distinct parts: the chromosphere, the transition region, the corona and the heliosphere.
The coolest layer of the Sun is a temperature minimum region extending to about 500 km above the photosphere, and has a temperature of about 4,100 K. This part of the Sun is cool enough to allow the existence of simple molecules such as carbon monoxide and water, which can be detected via their absorption spectra. The chromosphere, transition region, and corona are much hotter than the surface of the Sun. The reason is not well understood, but evidence suggests that Alfvén waves may have enough energy to heat the corona. Above the temperature minimum layer is a layer about 2,000 km thick, dominated by a spectrum of emission and absorption lines. It is called the chromosphere from the Greek root chroma, meaning color, because the chromosphere is visible as a colored flash at the beginning and end of total solar eclipses. The temperature of the chromosphere increases gradually with altitude, ranging up to around 20,000 K near the top. In the upper part of the chromosphere helium becomes partially ionized. Above the chromosphere, in a thin (about 200 km) transition region, the temperature rises rapidly from around 20,000 K in the upper chromosphere to coronal temperatures closer to 1,000,000 K. The temperature increase is facilitated by the full ionization of helium in the transition region, which significantly reduces radiative cooling of the plasma. The transition region does not occur at a well - defined altitude. Rather, it forms a kind of nimbus around chromospheric features such as spicules and filaments, and is in constant, chaotic motion. The transition region is not easily visible from Earth 's surface, but is readily observable from space by instruments sensitive to the extreme ultraviolet portion of the spectrum. The corona is the next layer of the Sun. The low corona, near the surface of the Sun, has a particle density around 10¹⁵ m⁻³ to 10¹⁶ m⁻³. The average temperature of the corona and solar wind is about 1,000,000 -- 2,000,000 K; however, in the hottest regions it is 8,000,000 -- 20,000,000 K. Although no complete theory yet exists to account for the temperature of the corona, at least some of its heat is known to be from magnetic reconnection. The corona is the extended atmosphere of the Sun, which has a volume much larger than the volume enclosed by the Sun 's photosphere. A flow of plasma outward from the Sun into interplanetary space is the solar wind. The heliosphere, the tenuous outermost atmosphere of the Sun, is filled with the solar wind plasma. This outermost layer of the Sun is defined to begin at the distance where the flow of the solar wind becomes superalfvénic -- that is, where the flow becomes faster than the speed of Alfvén waves, at approximately 20 solar radii (0.1 AU). Turbulence and dynamic forces in the heliosphere can not affect the shape of the solar corona within, because the information can only travel at the speed of Alfvén waves. The solar wind travels outward continuously through the heliosphere, forming the solar magnetic field into a spiral shape, until it impacts the heliopause more than 50 AU from the Sun. In December 2004, the Voyager 1 probe passed through a shock front that is thought to be part of the heliopause. In late 2012 Voyager 1 recorded a marked increase in cosmic ray collisions and a sharp drop in lower energy particles from the solar wind, which suggested that the probe had passed through the heliopause and entered the interstellar medium.
High - energy gamma - ray photons initially released with fusion reactions in the core are almost immediately absorbed by the solar plasma of the radiative zone, usually after traveling only a few millimeters. Re-emission happens in a random direction and usually at a slightly lower energy. With this sequence of emissions and absorptions, it takes a long time for radiation to reach the Sun 's surface. Estimates of the photon travel time range between 10,000 and 170,000 years. In contrast, it takes only 2.3 seconds for the neutrinos, which account for about 2 % of the total energy production of the Sun, to reach the surface. Because energy transport in the Sun is a process that involves photons in thermodynamic equilibrium with matter, the time scale of energy transport in the Sun is longer, on the order of 30,000,000 years. This is the time it would take the Sun to return to a stable state, if the rate of energy generation in its core were suddenly changed. Neutrinos are also released by the fusion reactions in the core, but, unlike photons, they rarely interact with matter, so almost all are able to escape the Sun immediately. For many years measurements of the number of neutrinos produced in the Sun were lower than theories predicted by a factor of 3. This discrepancy was resolved in 2001 through the discovery of the effects of neutrino oscillation: the Sun emits the number of neutrinos predicted by the theory, but neutrino detectors were missing two thirds of them because the neutrinos had changed flavor by the time they were detected. The Sun has a magnetic field that varies across the surface of the Sun. Its polar field is 1 -- 2 gauss (0.0001 -- 0.0002 T), whereas the field is typically 3,000 gauss (0.3 T) in features on the Sun called sunspots and 10 -- 100 gauss (0.001 -- 0.01 T) in solar prominences. The magnetic field also varies in time and location. The quasi-periodic 11 - year solar cycle is the most prominent variation in which the number and size of sunspots waxes and wanes. Sunspots are visible as dark patches on the Sun 's photosphere, and correspond to concentrations of magnetic field where the convective transport of heat is inhibited from the solar interior to the surface. As a result, sunspots are slightly cooler than the surrounding photosphere, and, so, they appear dark. At a typical solar minimum, few sunspots are visible, and occasionally none can be seen at all. Those that do appear are at high solar latitudes. As the solar cycle progresses towards its maximum, sunspots tend to form closer to the solar equator, a phenomenon known as Spörer 's law. The largest sunspots can be tens of thousands of kilometers across. An 11 - year sunspot cycle is half of a 22 - year Babcock -- Leighton dynamo cycle, which corresponds to an oscillatory exchange of energy between toroidal and poloidal solar magnetic fields. At solar - cycle maximum, the external poloidal dipolar magnetic field is near its dynamo - cycle minimum strength, but an internal toroidal quadrupolar field, generated through differential rotation within the tachocline, is near its maximum strength. At this point in the dynamo cycle, buoyant upwelling within the convective zone forces emergence of toroidal magnetic field through the photosphere, giving rise to pairs of sunspots, roughly aligned east -- west and having footprints with opposite magnetic polarities. The magnetic polarity of sunspot pairs alternates every solar cycle, a phenomenon known as the Hale cycle.
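The photon travel time of 10,000 to 170,000 years quoted above can be understood as a random walk: a photon taking steps of mean free path ℓ needs roughly (R/ℓ)² steps to cross a radius R, giving an escape time of order R²/(ℓc). The sketch below is an order - of - magnitude estimate only; the 1 mm mean free path is an assumed average, since the true value varies strongly with depth.

```python
# Order-of-magnitude check of the photon escape time, treated as a random walk:
# N ~ (R/l)^2 steps of mean free path l, so escape time ~ R^2 / (l c).
# The ~1 mm mean free path is an illustrative assumption.

R_SUN = 6.96e8         # solar radius, m
C = 2.998e8            # speed of light, m/s
MEAN_FREE_PATH = 1e-3  # assumed average photon mean free path, m
SECONDS_PER_YEAR = 3.156e7

escape_time_s = R_SUN**2 / (MEAN_FREE_PATH * C)
escape_time_yr = escape_time_s / SECONDS_PER_YEAR
print(f"random-walk escape time: ~{escape_time_yr:,.0f} years")  # a few tens of thousands of years
```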
During the solar cycle 's declining phase, energy shifts from the internal toroidal magnetic field to the external poloidal field, and sunspots diminish in number and size. At solar - cycle minimum, the toroidal field is, correspondingly, at minimum strength, sunspots are relatively rare, and the poloidal field is at its maximum strength. With the rise of the next 11 - year sunspot cycle, differential rotation shifts magnetic energy back from the poloidal to the toroidal field, but with a polarity that is opposite to the previous cycle. The process carries on continuously, and in an idealized, simplified scenario, each 11 - year sunspot cycle corresponds to a change, then, in the overall polarity of the Sun 's large - scale magnetic field. The solar magnetic field extends well beyond the Sun itself. The electrically conducting solar wind plasma carries the Sun 's magnetic field into space, forming what is called the interplanetary magnetic field. In an approximation known as ideal magnetohydrodynamics, plasma particles only move along the magnetic field lines. As a result, the outward - flowing solar wind stretches the interplanetary magnetic field outward, forcing it into a roughly radial structure. For a simple dipolar solar magnetic field, with opposite hemispherical polarities on either side of the solar magnetic equator, a thin current sheet is formed in the solar wind. At great distances, the rotation of the Sun twists the dipolar magnetic field and corresponding current sheet into an Archimedean spiral structure called the Parker spiral. The interplanetary magnetic field is much stronger than the dipole component of the solar magnetic field. The Sun 's dipole magnetic field of 50 -- 400 μT (at the photosphere) reduces with the inverse - cube of the distance to about 0.1 nT at the distance of Earth. However, according to spacecraft observations the interplanetary field at Earth 's location is around 5 nT, about a hundred times greater. The difference is due to magnetic fields generated by electrical currents in the plasma surrounding the Sun. The Sun 's magnetic field leads to many effects that are collectively called solar activity. Solar flares and coronal - mass ejections tend to occur at sunspot groups. Slowly changing high - speed streams of solar wind are emitted from coronal holes at the photospheric surface. Both coronal - mass ejections and high - speed streams of solar wind carry plasma and interplanetary magnetic field outward into the Solar System. The effects of solar activity on Earth include auroras at moderate to high latitudes and the disruption of radio communications and electric power. Solar activity is thought to have played a large role in the formation and evolution of the Solar System. With solar - cycle modulation of sunspot number comes a corresponding modulation of space weather conditions, including those surrounding Earth where technological systems can be affected. Long - term secular change in sunspot number is thought, by some scientists, to be correlated with long - term change in solar irradiance, which, in turn, might influence Earth 's long - term climate. For example, in the 17th century, the solar cycle appeared to have stopped entirely for several decades; few sunspots were observed during a period known as the Maunder minimum. This coincided in time with the era of the Little Ice Age, when Europe experienced unusually cold temperatures. 
Earlier extended minima have been discovered through analysis of tree rings and appear to have coincided with lower - than - average global temperatures. A recent theory claims that there are magnetic instabilities in the core of the Sun that cause fluctuations with periods of either 41,000 or 100,000 years. These could provide a better explanation of the ice ages than the Milankovitch cycles. The Sun today is roughly halfway through the most stable part of its life. It has not changed dramatically for over four billion years, and will remain fairly stable for more than five billion more. However, after hydrogen fusion in its core has stopped, the Sun will undergo severe changes, both internally and externally. The Sun formed about 4.6 billion years ago from the collapse of part of a giant molecular cloud that consisted mostly of hydrogen and helium and that probably gave birth to many other stars. This age is estimated using computer models of stellar evolution and through nucleocosmochronology. The result is consistent with the radiometric date of the oldest Solar System material, at 4.567 billion years ago. Studies of ancient meteorites reveal traces of stable daughter nuclei of short - lived isotopes, such as iron - 60, that form only in exploding, short - lived stars. This indicates that one or more supernovae must have occurred near the location where the Sun formed. A shock wave from a nearby supernova would have triggered the formation of the Sun by compressing the matter within the molecular cloud and causing certain regions to collapse under their own gravity. As one fragment of the cloud collapsed it also began to rotate because of conservation of angular momentum and heat up with the increasing pressure. Much of the mass became concentrated in the center, whereas the rest flattened out into a disk that would become the planets and other Solar System bodies. Gravity and pressure within the core of the cloud generated a lot of heat as it accreted more matter from the surrounding disk, eventually triggering nuclear fusion. Thus, the Sun was born. The Sun is about halfway through its main - sequence stage, during which nuclear fusion reactions in its core fuse hydrogen into helium. Each second, more than four million tonnes of matter are converted into energy within the Sun 's core, producing neutrinos and solar radiation. At this rate, the Sun has so far converted around 100 times the mass of Earth into energy, about 0.03 % of the total mass of the Sun. The Sun will spend a total of approximately 10 billion years as a main - sequence star. The Sun is gradually becoming hotter during its time on the main sequence, because the helium atoms in the core occupy less volume than the hydrogen atoms that were fused. The core is therefore shrinking, allowing the outer layers of the Sun to move closer to the centre and experience a stronger gravitational force, according to the inverse - square law. This stronger force increases the pressure on the core, which is resisted by a gradual increase in the rate at which fusion occurs. This process speeds up as the core gradually becomes denser. It is estimated that the Sun has become 30 % brighter in the last 4.5 billion years. At present, it is increasing in brightness by about 1 % every 100 million years. The Sun does not have enough mass to explode as a supernova. Instead it will exit the main sequence in approximately 5 billion years and start to turn into a red giant. 
As a red giant, the Sun will grow so large that it will engulf Mercury, Venus, and probably Earth. Even before it becomes a red giant, the luminosity of the Sun will have nearly doubled, and Earth will receive as much sunlight as Venus receives today. Once the core hydrogen is exhausted in 5.4 billion years, the Sun will expand into a subgiant phase and slowly double in size over about half a billion years. It will then expand more rapidly over about half a billion years until it is over two hundred times larger than today and a couple of thousand times more luminous. This then starts the red - giant - branch phase where the Sun will spend around a billion years and lose around a third of its mass. After the red - giant branch the Sun has approximately 120 million years of active life left, but much happens. First, the core, full of degenerate helium ignites violently in the helium flash, where it is estimated that 6 % of the core, itself 40 % of the Sun 's mass, will be converted into carbon within a matter of minutes through the triple - alpha process. The Sun then shrinks to around 10 times its current size and 50 times the luminosity, with a temperature a little lower than today. It will then have reached the red clump or horizontal branch, but a star of the Sun 's mass does not evolve blueward along the horizontal branch. Instead, it just becomes moderately larger and more luminous over about 100 million years as it continues to burn helium in the core. When the helium is exhausted, the Sun will repeat the expansion it followed when the hydrogen in the core was exhausted, except that this time it all happens faster, and the Sun becomes larger and more luminous. This is the asymptotic - giant - branch phase, and the Sun is alternately burning hydrogen in a shell or helium in a deeper shell. After about 20 million years on the early asymptotic giant branch, the Sun becomes increasingly unstable, with rapid mass loss and thermal pulses that increase the size and luminosity for a few hundred years every 100,000 years or so. The thermal pulses become larger each time, with the later pulses pushing the luminosity to as much as 5,000 times the current level and the radius to over 1 AU. According to a 2008 model, Earth 's orbit is shrinking due to tidal forces (and, eventually, drag from the lower chromosphere), so that it will be engulfed by the Sun near the tip of the red giant branch phase, 1 and 3.8 million years after Mercury and Venus have respectively suffered the same fate. Models vary depending on the rate and timing of mass loss. Models that have higher mass loss on the red - giant branch produce smaller, less luminous stars at the tip of the asymptotic giant branch, perhaps only 2,000 times the luminosity and less than 200 times the radius. For the Sun, four thermal pulses are predicted before it completely loses its outer envelope and starts to make a planetary nebula. By the end of that phase -- lasting approximately 500,000 years -- the Sun will only have about half of its current mass. The post-asymptotic - giant - branch evolution is even faster. The luminosity stays approximately constant as the temperature increases, with the ejected half of the Sun 's mass becoming ionised into a planetary nebula as the exposed core reaches 30,000 K. The final naked core, a white dwarf, will have a temperature of over 100,000 K, and contain an estimated 54.05 % of the Sun 's present day mass. 
The planetary nebula will disperse in about 10,000 years, but the white dwarf will survive for trillions of years before fading to a hypothetical black dwarf. The Sun lies close to the inner rim of the Milky Way 's Orion Arm, in the Local Interstellar Cloud or the Gould Belt, at a distance of 7.5 -- 8.5 kpc (25,000 -- 28,000 light - years) from the Galactic Center. The Sun is contained within the Local Bubble, a space of rarefied hot gas, possibly produced by the supernova remnant Geminga. The distance between the local arm and the next arm out, the Perseus Arm, is about 6,500 light - years. The Sun, and thus the Solar System, is found in what scientists call the galactic habitable zone. The Apex of the Sun 's Way, or the solar apex, is the direction that the Sun travels relative to other nearby stars. This motion is towards a point in the constellation Hercules, near the star Vega. Of the 50 nearest stellar systems within 17 light - years from Earth (the closest being the red dwarf Proxima Centauri at approximately 4.2 light - years), the Sun ranks fourth in mass. The Sun orbits the center of the Milky Way, and it is presently moving in the direction of the constellation of Cygnus. The Sun 's orbit around the Milky Way is roughly elliptical with orbital perturbations due to the non-uniform mass distribution in Milky Way, such as that in and between the galactic spiral arms. In addition, the Sun oscillates up and down relative to the galactic plane approximately 2.7 times per orbit. It has been argued that the Sun 's passage through the higher density spiral arms often coincides with mass extinctions on Earth, perhaps due to increased impact events. It takes the Solar System about 225 -- 250 million years to complete one orbit through the Milky Way (a galactic year), so it is thought to have completed 20 -- 25 orbits during the lifetime of the Sun. The orbital speed of the Solar System about the center of the Milky Way is approximately 251 km / s (156 mi / s). At this speed, it takes around 1,190 years for the Solar System to travel a distance of 1 light - year, or 7 days to travel 1 AU. The Milky Way is moving with respect to the cosmic microwave background radiation (CMB) in the direction of the constellation Hydra with a speed of 550 km / s, and the Sun 's resultant velocity with respect to the CMB is about 370 km / s in the direction of Crater or Leo. The temperature of the photosphere is approximately 6,000 K, whereas the temperature of the corona reaches 1,000,000 -- 2,000,000 K. The high temperature of the corona shows that it is heated by something other than direct heat conduction from the photosphere. It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. The first is wave heating, in which sound, gravitational or magnetohydrodynamic waves are produced by turbulence in the convection zone. These waves travel upward and dissipate in the corona, depositing their energy in the ambient matter in the form of heat. The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events -- nanoflares. Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfvén waves have been found to dissipate or refract before reaching the corona. 
In addition, Alfvén waves do not easily dissipate in the corona. Current research focus has therefore shifted towards flare heating mechanisms. Theoretical models of the Sun 's development suggest that 3.8 to 2.5 billion years ago, during the Archean eon, the Sun was only about 75 % as bright as it is today. Such a weak star would not have been able to sustain liquid water on Earth 's surface, and thus life should not have been able to develop. However, the geological record demonstrates that Earth has remained at a fairly constant temperature throughout its history, and that the young Earth was somewhat warmer than it is today. One theory among scientists is that the atmosphere of the young Earth contained much larger quantities of greenhouse gases (such as carbon dioxide and methane) than are present today, which trapped enough heat to compensate for the smaller amount of solar energy reaching it. However, examination of Archaean sediments appears inconsistent with the hypothesis of high greenhouse concentrations. Instead, the moderate temperature range may be explained by a lower surface albedo brought about by less continental area and the "lack of biologically induced cloud condensation nuclei ''. This would have led to increased absorption of solar energy, thereby compensating for the lower solar output. The enormous effect of the Sun on Earth has been recognized since prehistoric times, and the Sun has been regarded by some cultures as a deity. The Sun has been an object of veneration in many cultures throughout human history. Humanity 's most fundamental understanding of the Sun is as the luminous disk in the sky, whose presence above the horizon creates day and whose absence causes night. In many prehistoric and ancient cultures, the Sun was thought to be a solar deity or other supernatural entity. Worship of the Sun was central to civilizations such as the ancient Egyptians, the Inca of South America and the Aztecs of what is now Mexico. In religions such as Hinduism, the Sun is still considered a god. Many ancient monuments were constructed with solar phenomena in mind; for example, stone megaliths accurately mark the summer or winter solstice (some of the most prominent megaliths are located in Nabta Playa, Egypt; Mnajdra, Malta and at Stonehenge, England); Newgrange, a prehistoric human - built mound in Ireland, was designed to detect the winter solstice; the pyramid of El Castillo at Chichén Itzá in Mexico is designed to cast shadows in the shape of serpents climbing the pyramid at the vernal and autumnal equinoxes. The Egyptians portrayed the god Ra as being carried across the sky in a solar barque, accompanied by lesser gods, and to the Greeks, he was Helios, carried by a chariot drawn by fiery horses. From the reign of Elagabalus in the late Roman Empire the Sun 's birthday was a holiday celebrated as Sol Invictus (literally "Unconquered Sun '') soon after the winter solstice, which may have been an antecedent to Christmas. Regarding the fixed stars, the Sun appears from Earth to revolve once a year along the ecliptic through the zodiac, and so Greek astronomers categorized it as one of the seven planets (Greek planetes, "wanderer ''); the naming of the days of the week after the seven planets dates to the Roman era. 
In the early first millennium BC, Babylonian astronomers observed that the Sun 's motion along the ecliptic is not uniform, though they did not know why; it is today known that this is due to the movement of Earth in an elliptic orbit around the Sun, with Earth moving faster when it is nearer to the Sun at perihelion and moving slower when it is farther away at aphelion. One of the first people to offer a scientific or philosophical explanation for the Sun was the Greek philosopher Anaxagoras. He reasoned that it was not the chariot of Helios, but instead a giant flaming ball of metal even larger than the land of the Peloponnesus and that the Moon reflected the light of the Sun. For teaching this heresy, he was imprisoned by the authorities and sentenced to death, though he was later released through the intervention of Pericles. Eratosthenes estimated the distance between Earth and the Sun in the 3rd century BC as "of stadia myriads 400 and 80000 '', the translation of which is ambiguous, implying either 4,080,000 stadia (755,000 km) or 804,000,000 stadia (148 to 153 million kilometers or 0.99 to 1.02 AU); the latter value is correct to within a few percent. In the 1st century AD, Ptolemy estimated the distance as 1,210 times the radius of Earth, approximately 7.71 million kilometers (0.0515 AU). The theory that the Sun is the center around which the planets orbit was first proposed by the ancient Greek Aristarchus of Samos in the 3rd century BC, and later adopted by Seleucus of Seleucia (see Heliocentrism). This view was developed in a more detailed mathematical model of a heliocentric system in the 16th century by Nicolaus Copernicus. Observations of sunspots were recorded during the Han Dynasty (206 BC -- AD 220) by Chinese astronomers, who maintained records of these observations for centuries. Averroes also provided a description of sunspots in the 12th century. The invention of the telescope in the early 17th century permitted detailed observations of sunspots by Thomas Harriot, Galileo Galilei and other astronomers. Galileo posited that sunspots were on the surface of the Sun rather than small objects passing between Earth and the Sun. Arabic astronomical contributions include Albatenius ' discovery that the direction of the Sun 's apogee (the place in the Sun 's orbit against the fixed stars where it seems to be moving slowest) is changing. (In modern heliocentric terms, this is caused by a gradual motion of the aphelion of the Earth 's orbit). Ibn Yunus observed more than 10,000 entries for the Sun 's position for many years using a large astrolabe. From an observation of a transit of Venus in 1032, the Persian astronomer and polymath Avicenna concluded that Venus is closer to Earth than the Sun. In 1672 Giovanni Cassini and Jean Richer determined the distance to Mars and were thereby able to calculate the distance to the Sun. In 1666, Isaac Newton observed the Sun 's light using a prism, and showed that it is made up of light of many colors. In 1800, William Herschel discovered infrared radiation beyond the red part of the solar spectrum. The 19th century saw advancement in spectroscopic studies of the Sun; Joseph von Fraunhofer recorded more than 600 absorption lines in the spectrum, the strongest of which are still often referred to as Fraunhofer lines. In the early years of the modern scientific era, the source of the Sun 's energy was a significant puzzle. Lord Kelvin suggested that the Sun is a gradually cooling liquid body that is radiating an internal store of heat. 
Kelvin and Hermann von Helmholtz then proposed a gravitational contraction mechanism to explain the energy output, but the resulting age estimate was only 20 million years, well short of the time span of at least 300 million years suggested by some geological discoveries of that time. In 1890 Joseph Lockyer, who discovered helium in the solar spectrum, proposed a meteoritic hypothesis for the formation and evolution of the Sun. Not until 1904 was a documented solution offered. Ernest Rutherford suggested that the Sun 's output could be maintained by an internal source of heat, and suggested radioactive decay as the source. However, it would be Albert Einstein who would provide the essential clue to the source of the Sun 's energy output with his mass - energy equivalence relation E = mc². In 1920, Sir Arthur Eddington proposed that the pressures and temperatures at the core of the Sun could produce a nuclear fusion reaction that merged hydrogen (protons) into helium nuclei, resulting in a production of energy from the net change in mass. The preponderance of hydrogen in the Sun was confirmed in 1925 by Cecilia Payne using the ionization theory developed by Meghnad Saha, an Indian physicist. The theoretical concept of fusion was developed in the 1930s by the astrophysicists Subrahmanyan Chandrasekhar and Hans Bethe. Hans Bethe calculated the details of the two main energy - producing nuclear reactions that power the Sun. In 1957, Margaret Burbidge, Geoffrey Burbidge, William Fowler and Fred Hoyle showed that most of the elements in the universe have been synthesized by nuclear reactions inside stars, some like the Sun. The first satellites designed to observe the Sun were NASA 's Pioneers 5, 6, 7, 8 and 9, which were launched between 1959 and 1968. These probes orbited the Sun at a distance similar to that of Earth, and made the first detailed measurements of the solar wind and the solar magnetic field. Pioneer 9 operated for a particularly long time, transmitting data until May 1983. In the 1970s, two Helios spacecraft and the Skylab Apollo Telescope Mount provided scientists with significant new data on solar wind and the solar corona. The Helios 1 and 2 probes were U.S. -- German collaborations that studied the solar wind from an orbit carrying the spacecraft inside Mercury 's orbit at perihelion. The Skylab space station, launched by NASA in 1973, included a solar observatory module called the Apollo Telescope Mount that was operated by astronauts resident on the station. Skylab made the first time - resolved observations of the solar transition region and of ultraviolet emissions from the solar corona. Discoveries included the first observations of coronal mass ejections, then called "coronal transients '', and of coronal holes, now known to be intimately associated with the solar wind. In 1980, the Solar Maximum Mission was launched by NASA. This spacecraft was designed to observe gamma rays, X-rays and UV radiation from solar flares during a time of high solar activity and solar luminosity. Just a few months after launch, however, an electronics failure caused the probe to go into standby mode, and it spent the next three years in this inactive state. In 1984 Space Shuttle Challenger mission STS - 41C retrieved the satellite and repaired its electronics before re-releasing it into orbit. The Solar Maximum Mission subsequently acquired thousands of images of the solar corona before re-entering Earth 's atmosphere in June 1989. 
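As an aside on the mass-energy relation credited to Einstein above, multiplying the Sun's mass-conversion rate (the figure of more than four million tonnes per second quoted earlier in this article) by the speed of light squared recovers a power output of roughly 3.9 x 10^26 W, close to the measured solar luminosity of about 3.8 x 10^26 W. The sketch below is illustrative only, with the conversion rate and the speed of light taken as standard rounded values rather than from the text.

# Illustrative check of the Sun's power output from E = mc^2.
# Constants are standard rounded values assumed for this sketch.
SPEED_OF_LIGHT_M_PER_S = 2.998e8
CONVERSION_RATE_KG_PER_S = 4.3e9    # roughly four million tonnes of mass per second

luminosity_watts = CONVERSION_RATE_KG_PER_S * SPEED_OF_LIGHT_M_PER_S ** 2
print(f"Implied power output: {luminosity_watts:.2e} W")  # about 3.9e26 W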
Launched in 1991, Japan 's Yohkoh (Sunbeam) satellite observed solar flares at X-ray wavelengths. Mission data allowed scientists to identify several different types of flares, and demonstrated that the corona away from regions of peak activity was much more dynamic and active than had previously been supposed. Yohkoh observed an entire solar cycle but went into standby mode when an annular eclipse in 2001 caused it to lose its lock on the Sun. It was destroyed by atmospheric re-entry in 2005. One of the most important solar missions to date has been the Solar and Heliospheric Observatory, jointly built by the European Space Agency and NASA and launched on 2 December 1995. Originally intended to serve a two - year mission, a mission extension through 2012 was approved in October 2009. It has proven so useful that a follow - on mission, the Solar Dynamics Observatory (SDO), was launched in February 2010. Situated at the Lagrangian point between Earth and the Sun (at which the gravitational pull from both is equal), SOHO has provided a constant view of the Sun at many wavelengths since its launch. Besides its direct solar observation, SOHO has enabled the discovery of a large number of comets, mostly tiny sungrazing comets that incinerate as they pass the Sun. All these satellites have observed the Sun from the plane of the ecliptic, and so have only observed its equatorial regions in detail. The Ulysses probe was launched in 1990 to study the Sun 's polar regions. It first travelled to Jupiter, to "slingshot '' into an orbit that would take it far above the plane of the ecliptic. Once Ulysses was in its scheduled orbit, it began observing the solar wind and magnetic field strength at high solar latitudes, finding that the solar wind from high latitudes was moving at about 750 km / s, which was slower than expected, and that there were large magnetic waves emerging from high latitudes that scattered galactic cosmic rays. Elemental abundances in the photosphere are well known from spectroscopic studies, but the composition of the interior of the Sun is more poorly understood. A solar wind sample return mission, Genesis, was designed to allow astronomers to directly measure the composition of solar material. The Solar Terrestrial Relations Observatory (STEREO) mission was launched in October 2006. Two identical spacecraft were launched into orbits that cause them to (respectively) pull further ahead of and fall gradually behind Earth. This enables stereoscopic imaging of the Sun and solar phenomena, such as coronal mass ejections. The Indian Space Research Organisation has scheduled the launch of a 100 kg satellite named Aditya for 2017 -- 18. Its main instrument will be a coronagraph for studying the dynamics of the Solar corona. The brightness of the Sun can cause pain from looking at it with the naked eye; however, doing so for brief periods is not hazardous for normal non-dilated eyes. Looking directly at the Sun causes phosphene visual artifacts and temporary partial blindness. It also delivers about 4 milliwatts of sunlight to the retina, slightly heating it and potentially causing damage in eyes that can not respond properly to the brightness. UV exposure gradually yellows the lens of the eye over a period of years, and is thought to contribute to the formation of cataracts, but this depends on general exposure to solar UV, and not whether one looks directly at the Sun. 
Long - duration viewing of the direct Sun with the naked eye can begin to cause UV - induced, sunburn - like lesions on the retina after about 100 seconds, particularly under conditions where the UV light from the Sun is intense and well focused; conditions are worsened by young eyes or new lens implants (which admit more UV than aging natural eyes), Sun angles near the zenith, and observing locations at high altitude. Viewing the Sun through light - concentrating optics such as binoculars may result in permanent damage to the retina without an appropriate filter that blocks UV and substantially dims the sunlight. When using an attenuating filter to view the Sun, the viewer is cautioned to use a filter specifically designed for that use. Some improvised filters that pass UV or IR rays, can actually harm the eye at high brightness levels. Herschel wedges, also called Solar Diagonals, are effective and inexpensive for small telescopes. The sunlight that is destined for the eyepiece is reflected from an unsilvered surface of a piece of glass. Only a very small fraction of the incident light is reflected. The rest passes through the glass and leaves the instrument. If the glass breaks because of the heat, no light at all is reflected, making the device fail - safe. Simple filters made of darkened glass allow the full intensity of sunlight to pass through if they break, endangering the observer 's eyesight. Unfiltered binoculars can deliver hundreds of times as much energy as using the naked eye, possibly causing immediate damage. It is claimed that even brief glances at the midday Sun through an unfiltered telescope can cause permanent damage. Partial solar eclipses are hazardous to view because the eye 's pupil is not adapted to the unusually high visual contrast: the pupil dilates according to the total amount of light in the field of view, not by the brightest object in the field. During partial eclipses most sunlight is blocked by the Moon passing in front of the Sun, but the uncovered parts of the photosphere have the same surface brightness as during a normal day. In the overall gloom, the pupil expands from ~ 2 mm to ~ 6 mm, and each retinal cell exposed to the solar image receives up to ten times more light than it would looking at the non-eclipsed Sun. This can damage or kill those cells, resulting in small permanent blind spots for the viewer. The hazard is insidious for inexperienced observers and for children, because there is no perception of pain: it is not immediately obvious that one 's vision is being destroyed. During sunrise and sunset, sunlight is attenuated because of Rayleigh scattering and Mie scattering from a particularly long passage through Earth 's atmosphere, and the Sun is sometimes faint enough to be viewed comfortably with the naked eye or safely with optics (provided there is no risk of bright sunlight suddenly appearing through a break between clouds). Hazy conditions, atmospheric dust, and high humidity contribute to this atmospheric attenuation. An optical phenomenon, known as a green flash, can sometimes be seen shortly after sunset or before sunrise. The flash is caused by light from the Sun just below the horizon being bent (usually through a temperature inversion) towards the observer. Light of shorter wavelengths (violet, blue, green) is bent more than that of longer wavelengths (yellow, orange, red) but the violet and blue light is scattered more, leaving light that is perceived as green. 
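The figure of up to ten times more light per retinal cell during a partial eclipse, mentioned above, follows from the change in pupil area: a dilation from roughly 2 mm to 6 mm in diameter increases the light admitted by the square of the diameter ratio, about a factor of nine. A minimal sketch of that arithmetic, using the approximate pupil diameters quoted in the text:

# Light admitted by the eye scales with pupil area, i.e. with diameter squared.
# Pupil diameters are the approximate values quoted above for a partial eclipse.
pupil_gloom_mm = 6.0     # dilated pupil in the overall gloom of a partial eclipse
pupil_daylight_mm = 2.0  # constricted pupil when viewing the uneclipsed Sun

gain = (pupil_gloom_mm / pupil_daylight_mm) ** 2
print(f"Extra light per retinal cell: about {gain:.0f}x")  # about 9x, i.e. "up to ten times"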
Ultraviolet light from the Sun has antiseptic properties and can be used to sanitize tools and water. It also causes sunburn, and has other biological effects such as the production of vitamin D and sun tanning. Ultraviolet light is strongly attenuated by Earth 's ozone layer, so that the amount of UV varies greatly with latitude and has been partially responsible for many biological adaptations, including variations in human skin color in different regions of the globe. The Sun has eight known planets. This includes four terrestrial planets (Mercury, Venus, Earth, and Mars), two gas giants (Jupiter and Saturn), and two ice giants (Uranus and Neptune). The Solar System also has at least five dwarf planets, an asteroid belt, numerous comets, and a large number of icy bodies which lie beyond the orbit of Neptune. Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Virgo Supercluster → Laniakea Supercluster → Pisces -- Cetus Supercluster Complex → Observable universe → Universe Each arrow (→) may be read as "within '' or "part of ''.
when did ireland get their independence from england
History of Ireland - wikipedia The first evidence of human presence in Ireland dates to about 12,000 years ago, shortly after the receding of the ice after the Younger Dryas cold phase of the Quaternary ended around 9700 BCE, and heralds the beginning of Prehistoric Ireland, which includes the archaeological periods known as the Mesolithic, the Neolithic from about 4000 BCE, the Copper and Bronze Age from about 2300 BCE and Iron Age beginning about 600 BCE. Ireland 's prehistory ends with the emergence of "protohistoric '' Gaelic Ireland in the 2nd and 3rd centuries BCE. By the late 4th century CE Catholicism had begun to gradually subsume or replace the earlier Celtic polytheism. By the end of the 6th century it had introduced writing along with a predominantly monastic Celtic Christian church, profoundly altering Irish society. Viking raids and settlement from the late 8th century CE resulted in extensive cultural interchange, as well as innovation in military and transport technology. Many of Ireland 's towns were founded at this time as Viking trading posts and coinage made its first appearance. Viking penetration was limited and concentrated along coasts and rivers, and ceased to be a major threat to Gaelic culture after the battle of Clontarf in 1014. The Norman invasion in 1169 resulted again in a partial conquest of the island and marked the beginning of more than 800 years of English political and military involvement in Ireland. Initially successful, Norman gains were rolled back over succeeding centuries as a Gaelic resurgence reestablished Gaelic cultural preeminence over most of the country, apart from the walled towns and the area around Dublin known as The Pale. With its control reduced to small pockets, the English Crown did not make another attempt to conquer the island until after the end of the Wars of the Roses. This released resources and manpower for overseas expansion, beginning in the early 16th century. Also, the European discovery of America by Christopher Columbus in 1492 meant that Ireland now occupied a position of great importance west of Britain, and therefore controlled the routes from Britain into the Atlantic, and ultimately, America. However, the nature of Ireland 's decentralised political organisation into small territories (known as túatha), martial traditions, difficult terrain and climate and lack of urban infrastructure, meant that attempts to assert Crown authority were slow and expensive. Attempts to impose the new Protestant faith were also successfully resisted by both the Gaelic and Norman - Irish. The new policy fomented the rebellion of the Hiberno - Norman Earl of Kildare Silken Thomas in 1534, keen to defend his traditional autonomy and Catholicism, and marked the beginning of the prolonged Tudor conquest of Ireland lasting from 1534 to 1603. Henry VIII proclaimed himself King of Ireland in 1541 to facilitate the project. With the failure of the English Reformation, Ireland became a battleground in the wars between Catholic Counter-Reformation and Protestant Reformation Europe for control of the north Atlantic sea routes to America. England 's attempts to either conquer or assimilate both the Hiberno - Norman lordships and the Gaelic territories into the Kingdom of Ireland provided the impetus for ongoing warfare, notable examples being the 1st Desmond Rebellion, the 2nd Desmond Rebellion and the Nine Years War. 
This period was marked by the Crown policies of, at first, surrender and regrant, and later, plantation, involving the arrival of thousands of English and Scottish Protestant settlers, and the displacement of both the Hiberno - Normans (or Old English as they were known by then) and the native Catholic landholders. Gaelic Ireland was finally defeated at the battle of Kinsale in 1601, which marked the collapse of the Gaelic system and the beginning of Ireland 's history as part of the British Empire. During the 17th century, this division between a Protestant landholding minority and a dispossessed Catholic majority, divided not only by religion but also by cultural origin, was intensified and conflict between them was to become a recurrent theme in Irish history. Protestant domination of Ireland under a Protestant Ascendancy was reinforced after two periods of religious war, the Irish Confederate Wars in 1641 - 52 and the Williamite war in 1689 - 91. Political power thereafter rested almost exclusively in the hands of a minority Protestant Ascendancy, while Catholics and members of dissenting Protestant denominations suffered severe political and economic privations under the Penal Laws. On 1 January 1801, in the wake of the republican United Irishmen Rebellion, the Irish Parliament was abolished and Ireland became part of a new United Kingdom of Great Britain and Ireland formed by the Acts of Union 1800. Catholics were not granted full rights until Catholic Emancipation in 1829, achieved by Daniel O'Connell. The catastrophe of the Great Famine struck Ireland in 1845, resulting in over a million deaths from starvation and disease and in a million refugees fleeing the country, mainly to America. Irish attempts to break away continued with Parnell 's Irish Parliamentary Party, which strove from the 1880s to attain Home Rule through the parliamentary constitutional movement, eventually winning the Home Rule Act 1914, although this Act was suspended at the outbreak of World War I. In 1916 the Easter Rising, organised by the IRB and carried out by members of the Irish Volunteers, the socialist Irish Citizen Army of James Connolly and 200 women from Cumann na mBan, succeeded in turning public opinion against the British establishment after the execution of the leaders by British authorities. It also eclipsed the home rule movement by bringing physical force republicanism back to the forefront of Irish politics. In 1922, after the Irish War of Independence, most of Ireland seceded from the United Kingdom to become the independent Irish Free State, but under the Anglo - Irish Treaty the six northeastern counties, known as Northern Ireland, remained within the United Kingdom, creating the partition of Ireland. The treaty was opposed by many; their opposition led to the outbreak of the Irish Civil War, in which pro-treaty or Free State forces proved victorious. The history of Northern Ireland has since been dominated by the division of society along sectarian faultlines and conflict between (mainly Catholic) Irish nationalists and (mainly Protestant) unionists. These divisions erupted into the Troubles in the late 1960s, after civil rights marches were met with opposition by authorities. The violence escalated after the deployment of the British Army to restore order led to clashes with nationalist communities. The violence continued for 28 years until an uneasy but largely successful peace was finally achieved with the Good Friday Agreement in 1998. 
What is known of pre-Christian Ireland comes from references in Roman writings, Irish poetry and myth, and archaeology. While some possible Paleolithic tools have been found, none of the finds are convincing of Paleolithic settlement in Ireland. However a bear bone found in Alice and Gwendoline Cave, County Clare, in 1903 may push back dates for the earliest human settlement of Ireland to 10,500 BCE. The bone shows clear signs of cut marks with stone tools, and has been radiocarbon dated to 12,500 years ago. The earliest confirmed inhabitants of Ireland were Mesolithic hunter - gatherers, who arrived some time around 7900 BCE. While some authors take the view that a land bridge connecting Ireland to Great Britain still existed at that time, more recent studies indicate that Ireland was separated from Britain by c. 14,000 BCE, when the climate was still cold and local ice caps persisted in parts of the country. The people remained hunter - gatherers until about 4000 BCE. It is argued this is when the first signs of agriculture started to show, leading to the establishment of a Neolithic culture, characterised by the appearance of pottery, polished stone tools, rectangular wooden houses, megalithic tombs, and domesticated sheep and cattle. Some of these tombs, as at Knowth and Dowth, are huge stone monuments and many of them, such as the Passage Tombs of Newgrange, are astronomically aligned. Four main types of Irish Megalithic Tombs have been identified: dolmens, court cairns, passage tombs and wedge - shaped gallery graves. In Leinster and Munster, individual adult males were buried in small stone structures, called cists, under earthen mounds and were accompanied by distinctive decorated pottery. This culture apparently prospered, and the island became more densely populated. Near the end of the Neolithic new types of monuments developed, such as circular embanked enclosures and timber, stone and post and pit circles. The Bronze Age, which came to Ireland around 2000 BCE, saw the production of elaborate gold and bronze ornaments, weapons and tools. There was a movement away from the construction of communal megalithic tombs to the burial of the dead in small stone cists or simple pits, which could be situated in cemeteries or in circular earth or stone built burial mounds known respectively as barrows and cairns. As the period progressed, inhumation burial gave way to cremation and by the Middle Bronze Age, remains were often placed beneath large burial urns. The Iron Age in Ireland began about 600 BCE. The period between the start of the Iron Age and the historic period (431 CE) saw the gradual infiltration of small groups of Celtic - speaking people into Ireland, with items of the continental Celtic La Tene style being found in at least the northern part of the island by about 300 BCE. The result of a gradual blending of Celtic and indigenous cultures would result in the emergence of Gaelic culture by the fifth century. It is also during the fifth century that the main over-kingdoms of In Tuisceart, Airgialla, Ulaid, Mide, Laigin, Mumhain, Cóiced Ol nEchmacht began to emerge (see Kingdoms of ancient Ireland). Within these kingdoms a rich culture flourished. The society of these kingdoms was dominated by an upper class consisting of aristocratic warriors and learned people, which possibly included Druids. Linguists realised from the 17th century onwards that the language spoken by these people, the Goidelic languages, was a branch of the Celtic languages. 
This is usually explained as a result of invasions by Celts from the continent. However, other research has postulated that the culture developed gradually and continuously, and that the introduction of Celtic language and elements of Celtic culture may have been a result of cultural exchange with Celtic groups in southwest continental Europe from the Neolithic to the Bronze Age. The hypothesis that the native Late Bronze Age inhabitants gradually absorbed Celtic influences has since been supported by some recent genetic research. The Romans referred to Ireland as Scotia around 500 CE, and later Hibernia. Ptolemy, in 100 CE, recorded Ireland 's geography and tribes. Ireland was never a part of the Roman Empire, but Roman influence was often projected well beyond its borders. Tacitus writes that an exiled Irish prince was with Agricola in Roman Britain and would return to seize power in Ireland. Juvenal tells us that Roman "arms had been taken beyond the shores of Ireland ''. In recent years, some experts have hypothesized that Roman - sponsored Gaelic forces (or perhaps even Roman regulars) mounted some kind of invasion around 100 CE, but the exact relationship between Rome and the dynasties and peoples of Hibernia remains unclear. Irish confederations (the Scoti) attacked and some settled in Britain during the Great Conspiracy of 367. In particular, the Dál Riata settled in western Scotland and the Western Isles. The middle centuries of the first millennium CE marked great changes in Ireland. Politically, what appears to have been a prehistoric emphasis on tribal affiliation had been replaced by the 8th century by patrilineal dynasties ruling the island 's kingdoms. Many formerly powerful kingdoms and peoples disappeared. Irish pirates struck all over the coast of western Britain in the same way that the Vikings would later attack Ireland. Some of these founded entirely new kingdoms in Pictland and, to a lesser degree, in parts of Cornwall, Wales, and Cumbria. The Attacotti of south Leinster may even have served in the Roman military in the mid-to - late 300s. Perhaps it was some of the latter returning home as rich mercenaries, merchants, or slaves stolen from Britain or Gaul, that first brought the Christian faith to Ireland. Some early sources claim that there were missionaries active in southern Ireland long before St. Patrick. Whatever the route, and there were probably many, this new faith was to have the most profound effect on the Irish. Tradition maintains that in A.D. 432, St. Patrick arrived on the island and, in the years that followed, worked to convert the Irish to Christianity. St Patrick 's Confession, written by him in Latin, is the earliest Irish historical document. It gives some information about the Saint. On the other hand, according to Prosper of Aquitaine, a contemporary chronicler, Palladius was sent to Ireland by the Pope in 431 as "first Bishop to the Irish believing in Christ '', which demonstrates that there were already Christians living in Ireland. Palladius seems to have worked purely as Bishop to Irish Christians in the Leinster and Meath kingdoms, while Patrick -- who may have arrived as late as 461 -- worked first and foremost as a missionary to the pagan Irish, in the more remote kingdoms in Ulster and Connacht. Patrick is traditionally credited with preserving and codifying Irish laws and changing only those that conflicted with Christian practices. 
He is credited with introducing the Roman alphabet, which enabled Irish monks to preserve parts of the extensive oral literature. The historicity of these claims remains the subject of debate and there is no direct evidence linking Patrick with any of these accomplishments. The myth of Patrick, as scholars refer to it, was developed in the centuries after his death. Irish scholars excelled in the study of Latin learning and Christian theology in the monasteries that flourished shortly thereafter. Missionaries from Ireland to England and Continental Europe spread news of the flowering of learning, and scholars from other nations came to Irish monasteries. The excellence and isolation of these monasteries helped preserve Latin learning during the Early Middle Ages. The period of Insular art, mainly in the fields of illuminated manuscripts, metalworking, and sculpture flourished and produced such treasures as the Book of Kells, the Ardagh Chalice, and the many carved stone crosses that dot the island. Insular style was to be a crucial ingredient in the formation of the Romanesque and Gothic styles throughout Western Europe. Sites dating to this period include clochans, ringforts and promontory forts. Francis John Byrne describes the effect of the epidemics which occurred during this era: The plagues of the 660s and the 680s had a traumatic effect on Irish society. The golden age of the saints was over, together with the generation of kings who could fire a saga - writer 's imagination. The literary tradition looks back to the reign of the sons of Aed Slaine (Diarmait and Blathmac, who died in 665) as to the end of an era. Antiquaries, brehons, genealogists and hagiographers, felt the need to collect ancient traditions before they were totally forgotten. Many were in fact swallowed by oblivion; when we examine the writing of Tirechan we encounter obscure references to tribes which are quite unknown to the later genealogical tradition. The laws describe a... society that was obsolescent, and the meaning and use of the word moccu dies out with archaic Old Irish at the beginning of the new century. The first English involvement in Ireland took place in this period. Tullylease, Rath Melsigi and Maigh Eo na Saxain were founded by 670 for English students who wished to study or live in Ireland. In summer 684, an English expeditionary force sent by Northumbrian King Ecgfrith raided Brega. The first recorded Viking raid in Irish history occurred in 795 AD when Vikings from Norway looted the island. Early Viking raids were generally fast - paced and small in scale. These early raids interrupted the golden age of Christian Irish culture and marked the beginning of two centuries of intermittent warfare, with waves of Viking raiders plundering monasteries and towns throughout Ireland. Most of those early raiders came from western Norway. The Vikings were expert sailors, who travelled in longships, and by the early 840s, had begun to establish settlements along the Irish coasts and to spend the winter months there. The longships were technologically advanced, allowing them to travel faster through the narrow rivers. Vikings founded settlements in several places; most famously in Dublin. Most of the settlements were near the water, allowing the Vikings to trade using their longships. Written accounts from this time (early to mid 840s) show that the Vikings were moving further inland to attack (often using rivers) and then retreating to their coastal headquarters. 
In 852, the Vikings landed in Dublin Bay and established a fortress. Dublin became the centre for trade of many goods, especially slaves. Bringing back new ideas and motivations, they began settling more permanently. In the tenth century an earthen bank was constructed around the city with a second larger bank built outside that in the eleventh century. On the interior of the town, an extensive series of defenses has been excavated at Fishamble Street, Dublin. The site featured nine waterfronts, including two possible flood banks and two positive defensive embankments during the Viking Age. The early embankments were non-defensive, being only one meter high, and it is uncertain how much of the site they encircled. After several generations a group of mixed Irish and Norse ethnic background arose, the Gall - Gaels (Gall being the Old Irish word for foreign). The second wave of Vikings made stations at winter - bases called longphorts to serve as control centres to exert a more localized force on the island through raiding. The third wave in 917 established towns as not only control centres, but also as centres of trade to enter into the Irish economy and greater Western Europe. Returning to Dublin, they set up a market town. Over the next century a great period of economic growth would spread across the pastoral country. The Vikings introduced the concept of international trade to the Irish, and popularized a silver economy with local trade and the first minting of coins in 997. In 902 Máel Finnia mac Flannacain of Brega and Cerball mac Muirecáin of Leinster joined forces against Dublin, and "The heathens were driven from Ireland, i.e. from the fortress of Áth Cliath (Dublin) ''. They were allowed by the Saxons to settle in Wirral, England, but would later return to retake Dublin. The Vikings never achieved total domination of Ireland, often fighting for and against various Irish kings. The Battle of Clontarf in 1014 began the decline of Viking power in Ireland but the towns which Vikings had founded continued to flourish, and trade became an important part of the Irish economy. By the 12th century, Ireland was divided politically into a shifting hierarchy of petty kingdoms and over-kingdoms. Power was exercised by the heads of a few regional dynasties vying against each other for supremacy over the whole island. One of these men, King Diarmait Mac Murchada of Leinster, was forcibly exiled by the new High King, Ruaidri mac Tairrdelbach Ua Conchobair of the Western kingdom of Connacht. Fleeing to Aquitaine, Diarmait obtained permission from Henry II to recruit Norman knights to regain his kingdom. The first Norman knight landed in Ireland in 1167, followed by the main forces of Normans, Welsh and Flemings. Several counties were restored to the control of Diarmait, who named his son - in - law, the Norman Richard de Clare, known as Strongbow, heir to his kingdom. This troubled King Henry, who feared the establishment of a rival Norman state in Ireland. Accordingly, he resolved to establish his authority. In 1177 Prince John Lackland was made Lord of Ireland by his father Henry II of England at the Council of Oxford. With the authority of the papal bull Laudabiliter from Adrian IV, Henry landed with a large fleet at Waterford in 1171, becoming the first King of England to set foot on Irish soil. Henry awarded his Irish territories to his younger son John with the title Dominus Hiberniae ("Lord of Ireland ''). 
When John unexpectedly succeeded his brother as King John of England, the "Lordship of Ireland '' fell directly under the English Crown. The Normans initially controlled the entire east coast, from Waterford to eastern Ulster, and penetrated a considerable distance inland as well. The counties were ruled by many smaller kings. The first Lord of Ireland was King John, who visited Ireland in 1185 and 1210 and helped consolidate the Norman - controlled areas, while ensuring that the many Irish kings swore fealty to him. Throughout the thirteenth century the policy of the English Kings was to weaken the power of the Norman Lords in Ireland. For example, King John encouraged Hugh de Lacy to destabilise and then overthrow the Lord of Ulster, before naming him as the first Earl of Ulster. The Hiberno - Norman community suffered from a series of invasions that ceased the spread of their settlement and power. Politics and events in Gaelic Ireland served to draw the settlers deeper into the orbit of the Irish. By 1261 the weakening of the Normans had become manifest when Fineen MacCarthy defeated a Norman army at the Battle of Callann. The war continued between the different lords and earls for about 100 years, causing much destruction, especially around Dublin. In this chaotic situation, local Irish lords won back large amounts of land that their families had lost since the conquest and held them after the war was over. The Black Death arrived in Ireland in 1348. Because most of the English and Norman inhabitants of Ireland lived in towns and villages, the plague hit them far harder than it did the native Irish, who lived in more dispersed rural settlements. After it had passed, Gaelic Irish language and customs came to dominate the country again. The English - controlled territory shrank to a fortified area around Dublin (the Pale), whose rulers had little real authority outside (beyond the Pale). By the end of the 15th century, central English authority in Ireland had almost disappeared. England 's attentions were diverted by the Wars of the Roses. The Lordship of Ireland lay in the hands of the powerful Fitzgerald Earl of Kildare, who dominated the country by means of military force and alliances with Irish lords and clans. Around the country, local Gaelic and Gaelicised lords expanded their powers at the expense of the English government in Dublin but the power of the Dublin government was seriously curtailed by the introduction of Poynings ' Law in 1494. According to this act the Irish Parliament was essentially put under the control of the Westminster Parliament. From 1536, Henry VIII decided to conquer Ireland and bring it under crown control. The Fitzgerald dynasty of Kildare, who had become the effective rulers of Ireland in the 15th century, had become unreliable allies of the Tudor monarchs. They had invited Burgundian troops into Dublin to crown the Yorkist pretender, Lambert Simnel as King of England in 1487. Again in 1536, Silken Thomas Fitzgerald went into open rebellion against the crown. Having put down this rebellion, Henry resolved to bring Ireland under English government control so the island would not become a base for future rebellions or foreign invasions of England. In 1541, he upgraded Ireland from a lordship to a full Kingdom. Henry was proclaimed King of Ireland at a meeting of the Irish Parliament that year. This was the first meeting of the Irish Parliament to be attended by the Gaelic Irish chieftains as well as the Hiberno - Norman aristocracy. 
With the institutions of government in place, the next step was to extend the control of the English Kingdom of Ireland over all of its claimed territory. This took nearly a century, with various English administrations either negotiating or fighting with the independent Irish and Old English lords. The Spanish Armada in Ireland suffered heavy losses during an extraordinary season of storms in the autumn of 1588. Among the survivors was Captain Francisco de Cuellar, who gave a remarkable account of his experiences on the run in Ireland. The re-conquest was completed during the reigns of Elizabeth and James I, after several brutal conflicts. (See the Desmond Rebellions, 1569 -- 73 and 1579 -- 83, and the Nine Years War, 1594 -- 1603, for details.) After this point, the English authorities in Dublin established real control over Ireland for the first time, bringing a centralised government to the entire island, and successfully disarmed the native lordships. In 1614 the Catholic majority in the Irish Parliament was overthrown through the creation of numerous new boroughs which were dominated by the new settlers. However, the English were not successful in converting the Catholic Irish to the Protestant religion and the brutal methods used by crown authority (including resorting to martial law) to bring the country under English control heightened resentment of English rule. From the mid-16th to the early 17th century, crown governments had carried out a policy of land confiscation and colonisation known as Plantations. Scottish and English Protestant colonists were sent to the provinces of Munster, Ulster and the counties of Laois and Offaly. These Protestant settlers replaced the Irish Catholic landowners who were removed from their lands. These settlers formed the ruling class of future British appointed administrations in Ireland. Several Penal Laws, aimed at Catholics, Baptists and Presbyterians, were introduced to encourage conversion to the established (Anglican) Church of Ireland. The 17th century was perhaps the bloodiest in Ireland 's history. Two periods of war (1641 -- 53 and 1689 -- 91) caused huge loss of life. The ultimate dispossession of most of the Irish Catholic landowning class was engineered, and recusants were subordinated under the Penal Laws. During the 17th century, Ireland was convulsed by eleven years of warfare, beginning with the Rebellion of 1641, when Irish Catholics rebelled against the domination of English and Protestant settlers. The Catholic gentry briefly ruled the country as Confederate Ireland (1642 -- 1649) against the background of the Wars of the Three Kingdoms until Oliver Cromwell reconquered Ireland in 1649 -- 1653 on behalf of the English Commonwealth. Cromwell 's conquest was the most brutal phase of the war. By its close, as much as half or more of Ireland 's pre-war population had been killed or exiled as slaves, and many of those exiled died due to harsh conditions. As retribution for the rebellion of 1641, the better - quality remaining lands owned by Irish Catholics were confiscated and given to British settlers. Several hundred remaining native landowners were transplanted to Connacht. Ireland became the main battleground after the Glorious Revolution of 1688, when the Catholic James II left London and the English Parliament replaced him with William of Orange. 
The wealthier Irish Catholics backed James to try to reverse the Penal Laws and land confiscations, whereas Protestants supported William and Mary in this "Glorious Revolution '' to preserve their property in the country. James and William fought for the Kingdom of Ireland in the Williamite War, most famously at the Battle of the Boyne in 1690, where James 's outnumbered forces were defeated. From the 15th to the 18th century, Irish, English, Scots and Welsh prisoners were transported for forced labor in the Caribbean to work off their term of punishment. Even larger numbers came voluntarily as indentured servants. In the 18th century they were sent to the American colonies, and in the early 19th century to Australia. The Irish were dehumanised by the English, described as "savages, '' so making their displacement appear all the more justified. In 1654 the British parliament gave Oliver Cromwell a free hand to banish Irish "undesirables ''. Cromwell rounded up Catholics throughout the Irish countryside and placed them on ships bound for the Caribbean, mainly the island of Barbados. By 1655, 12,000 political prisoners had been forcibly shipped to Barbados and into indentured servitude. The majority of the people of Ireland were Catholic peasants; they were very poor and largely inert politically during the eighteenth century, as many of their leaders converted to Protestantism to avoid severe economic and political penalties. Nevertheless, there was a growing Catholic cultural awakening underway. There were two Protestant groups. The Presbyterians in Ulster in the North lived in much better economic conditions, but had virtually no political power. Power was held by a small group of Anglo - Irish families, who were loyal to the Anglican Church of Ireland. They owned the great bulk of the farmland, where the work was done by the Catholic peasants. Many of these families lived in England and were absentee landlords, whose loyalty was basically to England. The Anglo - Irish who lived in Ireland became increasingly identified as Irish nationalists, and were resentful of the English control of their island. Their spokesmen, such as Jonathan Swift and Edmund Burke, sought more local control. Jacobite resistance in Ireland was finally ended after the Battle of Aughrim in July 1691. The Penal Laws that had been relaxed somewhat after the Restoration were reinforced more thoroughly after this war, as the infant Anglo - Irish Ascendency wanted to ensure that the Irish Roman Catholics would not be in a position to repeat their rebellions. Power was held by the 5 % who were Protestants belonging to the Church of Ireland. They controlled all major sectors of the Irish economy, the bulk of the farmland, the legal system, local government and held strong majorities in both houses of the Irish Parliament. They strongly distrusted the Presbyterians in Ulster, and were convinced that the Catholics should have minimal rights. They did not have full political control because the government in London had superior authority and treated Ireland like a backward colony. When the American colonies revolted in the 1770s, the Ascendency wrested multiple concessions to strengthen its power. They did not seek independence because they knew they were heavily outnumbered and ultimately depended upon the British Army to guarantee their security. Subsequent Irish antagonism toward England was aggravated by the economic situation of Ireland in the 18th century. 
Some absentee landlords managed their estates inefficiently, and food tended to be produced for export rather than for domestic consumption. Two very cold winters near the end of the Little Ice Age led directly to a famine between 1740 and 1741, which killed about 400,000 people and caused over 150,000 Irish to leave the island. In addition, Irish exports were reduced by the Navigation Acts from the 1660s, which placed tariffs on Irish products entering England, but exempted English goods from tariffs on entering Ireland. Despite this most of the 18th century was relatively peaceful in comparison with the preceding two centuries, and the population doubled to over four million. By the 18th century, the Anglo - Irish ruling class had come to see Ireland, not England, as their native country. A Parliamentary faction led by Henry Grattan agitated for a more favourable trading relationship with Great Britain and for greater legislative independence for the Irish Parliament. However, reform in Ireland stalled over the more radical proposals toward enfranchising Irish Catholics. This was partially enabled in 1793, but Catholics could not yet become members of the Irish Parliament, or become government officials. Some were attracted to the more militant example of the French Revolution of 1789. Presbyterians and Dissenters too faced persecution on a lesser scale, and in 1791 a group of dissident Protestant individuals, all of whom but two were Presbyterians, held the first meeting of what would become the Society of the United Irishmen. Originally they sought to reform the Irish Parliament which was controlled by those belonging to the state church; seek Catholic Emancipation; and help remove religion from politics. When their ideals seemed unattainable they became more determined to use force to overthrow British rule and found a non-sectarian republic. Their activity culminated in the Irish Rebellion of 1798, which was bloodily suppressed. Ireland was a separate kingdom ruled by King George III of Britain; he set policy for Ireland through his appointment of the Lord Lieutenant of Ireland or viceroy. In practice, the viceroys lived in England and the affairs in the island were largely controlled by an elite group of Irish Protestants known as "undertakers. '' The system changed in 1767, with the appointment of an English politician who became a very strong Viceroy. George Townshend served 1767 - 72 and was in residence in The Castle in Dublin. Townsend had the strong support of both the King and the British cabinet in London, and all major decisions were basically made in London. The Ascendancy complained, and obtained a series of new laws in the 1780s that made the Irish Parliament effective and independent of the British Parliament, although still under the supervision of the king and his Privy Council. Largely in response to the 1798 rebellion, Irish self - government was ended altogether by the provisions of the Acts of Union 1800 (which abolished the Irish Parliament of that era). In 1800, following the Irish Rebellion of 1798, the Irish and the British parliaments enacted the Acts of Union. The merger created a new political entity called United Kingdom of Great Britain and Ireland with effect from 1 January 1801. Part of the agreement forming the basis of union was that the Test Act would be repealed to remove any remaining discrimination against Roman Catholics, Presbyterians, Baptists and other dissenter religions in the newly United Kingdom. 
However, King George III, invoking the provisions of the Act of Settlement 1701, controversially and adamantly blocked attempts by Prime Minister William Pitt the Younger. Pitt resigned in protest, but his successor Henry Addington and his new cabinet failed to legislate to repeal or change the Test Act. This was followed by the first Irish Reform Act 1832, which allowed Catholic members of parliament but raised the property qualification to £ 10, effectively removing the poorer Irish freeholders from the franchise. In 1823 an enterprising Catholic lawyer, Daniel O'Connell, known in Ireland as ' The Liberator ', began an ultimately successful Irish campaign to achieve emancipation, and to be seated in the Parliament. This culminated in O'Connell's successful election in the Clare by - election, which revived the parliamentary efforts at reform. The Catholic Relief Act 1829 was eventually approved by the UK parliament under the leadership of the Dublin - born Prime Minister, Arthur Wellesley, 1st Duke of Wellington. This indefatigable Anglo - Irish statesman, a former Chief Secretary for Ireland, and hero of the Napoleonic Wars, successfully guided the legislation through both houses of Parliament. By threatening to resign, he persuaded King George IV to sign the bill into law in 1829. The continuing obligation of Roman Catholics to fund the established Church of Ireland, however, led to the sporadic skirmishes of the Tithe War of 1831 -- 38. The Church was disestablished by the Gladstone government in 1869. The continuing enactment of parliamentary reform during the ensuing administrations further extended the initially limited franchise. Daniel O'Connell M.P. later led the Repeal Association in an unsuccessful campaign to undo the Act of Union 1800. The Great Irish Famine (An Gorta Mór) was the second of Ireland 's "Great Famines ''. It struck the country during 1845 -- 49, when potato blight, exacerbated by the political factors of the time, led to mass starvation and emigration. The impact of emigration in Ireland was severe; the population dropped from over 8 million before the Famine to 4.4 million in 1911. Gaelic or Irish, once the island 's spoken language, declined in use sharply in the nineteenth century as a result of the Famine and the creation of the National School education system, as well as hostility to the language from leading Irish politicians of the time; it was largely replaced by English. Outside mainstream nationalism, a series of violent rebellions by Irish republicans took place in 1803, under Robert Emmet; in 1848 a rebellion by the Young Irelanders, most prominent among them, Thomas Francis Meagher; and in 1867, another insurrection by the Irish Republican Brotherhood. All failed, but physical force nationalism remained an undercurrent in the nineteenth century. A central issue throughout the 19th and early 20th century was land ownership. A small group of about 10,000 English families owned practically all the farmland; most were permanent residents of England, and were seldom present on the land. They rented it out to Irish tenant farmers. Falling behind in rent payments meant eviction, and very bad feelings -- often violence. The late 19th century witnessed major land reform, spearheaded by the Land League under Michael Davitt demanding what became known as the 3 Fs: fair rent, free sale, fixity of tenure. Parliament passed laws in 1870, 1881, 1903 and 1909 that enabled most tenant farmers to purchase their lands, and lowered the rents of the others. 
From 1870, and as a result of the Land War agitations and the subsequent Plan of Campaign of the 1880s, various British governments introduced a series of Irish Land Acts. William O'Brien played a leading role in the 1902 Land Conference to pave the way for the most advanced social legislation in Ireland since the Union, the Wyndham Land Purchase Act of 1903. This Act set the conditions for the break - up of large estates and gradually transferred ownership of the land to rural landholders and tenants. It effectively ended the era of the absentee landlord, finally resolving the Irish Land Question. In the 1870s the issue of Irish self - government again became a major focus of debate under Charles Stewart Parnell, founder of the Irish Parliamentary Party. Prime Minister Gladstone made two unsuccessful attempts to pass Home Rule, in 1886 and 1893. Parnell 's leadership ended when he was implicated in a divorce scandal that gained international publicity in 1890. He had been secretly living for years with Katherine O'Shea, the long - separated wife of a fellow Irish MP. Disaster came quickly: Gladstone and the Liberal Party refused to cooperate with him; his party split; the Irish Catholic bishops led the successful effort to crush his minority faction at by - elections. Parnell fought for control to the end, but his health was collapsing and he died in 1891 at age 45. After the introduction of the Local Government (Ireland) Act 1898, which broke the power of the landlord - dominated "Grand Juries '' by passing for the first time democratic control of local affairs into the hands of the people through elected Local County Councils, the debate over full Home Rule led to tensions between Irish nationalists and Irish unionists (those who favoured maintenance of the Union). Most of the island was predominantly nationalist, Catholic and agrarian. The northeast, however, was predominantly unionist, Protestant and industrialised. Unionists feared a loss of political power and economic wealth in a predominantly rural, nationalist, Catholic home - rule state. Nationalists believed they would remain economically and politically second - class citizens without self - government. Out of this division, two opposing sectarian movements evolved, the Protestant Orange Order and the Catholic Ancient Order of Hibernians. Home Rule became certain when, in 1910, the Irish Parliamentary Party (IPP) under John Redmond held the balance of power in the Commons and the third Home Rule Bill was introduced in 1912. Unionist resistance was immediate, with the formation of the Ulster Volunteers. In turn the Irish Volunteers were established to oppose them and enforce the introduction of self - government. In September 1914, just as the First World War broke out, the UK Parliament passed the Third Home Rule Act to establish self - government for Ireland, but it was suspended for the duration of the war. To ensure the implementation of Home Rule after the war, nationalist leaders and the IPP under Redmond supported Ireland 's participation in the British and Allied war effort alongside the Triple Entente against the expansion of the Central Powers. The core of the Irish Volunteers was against this decision, but the majority left to form the National Volunteers, who enlisted in Irish regiments of the New British Army, the 10th and 16th (Irish) Divisions; their Northern counterparts enlisted in the 36th (Ulster) Division. 
Before the war ended, Britain made two concerted efforts to implement Home Rule, one in May 1916 and again with the Irish Convention during 1917 -- 1918, but the Irish sides (Nationalist, Unionist) were unable to agree to terms for the temporary or permanent exclusion of Ulster from its provisions. The period 1916 -- 1921 was marked by political violence and upheaval, ending in the partition of Ireland and independence for 26 of its 32 counties. A failed militant attempt was made to gain separate independence for Ireland with the 1916 Easter Rising, an insurrection in Dublin. Though support for the insurgents was small, the violence used in its suppression led to a swing in support towards the rebels. In addition, the unprecedented threat of Irishmen being conscripted into the British Army in 1918 (for service on the Western Front as a result of the German Spring Offensive) accelerated this change. In the December 1918 elections Sinn Féin, the party of the rebels, won three - quarters of all seats in Ireland; twenty - seven of its MPs assembled in Dublin on 21 January 1919 to form a 32 - county Irish Republic parliament, the first Dáil Éireann, unilaterally declaring sovereignty over the entire island. Unwilling to negotiate any understanding with Britain short of complete independence, the Irish Republican Army, the army of the newly declared Irish Republic, waged a guerrilla war (the Irish War of Independence) from 1919 to 1921. In the course of the fighting and amid much acrimony, the Fourth Government of Ireland Act 1920 implemented Home Rule while separating the island into what the British government 's Act termed "Northern Ireland '' and "Southern Ireland ''. In July 1921 the Irish and British governments agreed to a truce that halted the war. In December 1921 representatives of both governments signed the Anglo - Irish Treaty. The Irish delegation was led by Arthur Griffith and Michael Collins. The Treaty abolished the Irish Republic and created the Irish Free State, a self - governing Dominion of the Commonwealth of Nations in the manner of Canada and Australia. Under the Treaty, Northern Ireland could opt out of the Free State and stay within the United Kingdom: it promptly did so. In 1922 both parliaments ratified the Treaty, formalising independence for the 26 - county Irish Free State (which renamed itself Ireland in 1937, and declared itself a republic in 1949), while the 6 - county Northern Ireland, gaining Home Rule for itself, remained part of the United Kingdom. For most of the next 75 years, each territory was strongly aligned to either Catholic or Protestant ideologies, although this was more marked in the six counties of Northern Ireland. The treaty to sever the Union divided the republican movement into anti-Treaty supporters (who wanted to fight on until an Irish Republic was achieved) and pro-Treaty supporters (who accepted the Free State as a first step towards full independence and unity). Between 1922 and 1923 both sides fought the bloody Irish Civil War. The new Irish Free State government defeated the anti-Treaty remnant of the Irish Republican Army, carrying out multiple executions. This division among nationalists still colours Irish politics today, specifically between the two leading Irish political parties, Fianna Fáil and Fine Gael. The new Irish Free State (1922 -- 37) existed against the backdrop of the growth of dictatorships in mainland Europe and a major world economic downturn in 1929. In contrast with many contemporary European states it remained a democracy. 
Testament to this came when the losing faction in the Irish civil war, Éamon de Valera 's Fianna Fáil, was able to take power peacefully by winning the 1932 general election. Nevertheless, until the mid-1930s, considerable parts of Irish society saw the Free State through the prism of the civil war, as a repressive, British - imposed state. It was only the peaceful change of government in 1932 that signalled the final acceptance of the Free State on their part. In contrast to many other states in the period, the Free State remained financially solvent as a result of low government expenditure, despite the Economic War with Britain. However, unemployment and emigration were high. The population declined to a low of 2.7 million recorded in the 1961 census. The Roman Catholic Church had a powerful influence over the Irish state for much of its history. The clergy 's influence meant that the Irish state had very conservative social policies, forbidding, for example, divorce, contraception, abortion, pornography as well as encouraging the censoring and banning of many books and films. In addition the Church largely controlled the State 's hospitals, schools and remained the largest provider of many other social services. With the partition of Ireland in 1922, 92.6 % of the Free State 's population were Catholic while 7.4 % were Protestant. By the 1960s the Protestant population had fallen by half. Although emigration was high among all the population, due to a lack of economic opportunity, the rate of Protestant emigration was disproportionate in this period. Many Protestants left the country in the early 1920s, either because they felt unwelcome in a predominantly Catholic and nationalist state, because they were afraid due to the burning of Protestant homes (particularly of the old landed class) by republicans during the civil war, because they regarded themselves as British and did not wish to live in an independent Irish state, or because of the economic disruption caused by the recent violence. The Catholic Church had also issued a decree, known as Ne Temere, whereby the children of marriages between Catholics and Protestants had to be brought up as Catholics. From 1945, the emigration rate of Protestants fell and they became less likely to emigrate than Catholics. In 1937 a new Constitution re-established the state as Ireland (or Éire in Irish). The state remained neutral throughout World War II (see Irish neutrality), which saved it from much of the horrors of the war, although tens of thousands volunteered to serve in the British forces. Ireland was also impacted by food rationing, and coal shortages; peat production became a priority during this time. Though nominally neutral, recent studies have suggested a far greater level of involvement by the South with the Allies than was realised, with D Day 's date set on the basis of secret weather information on Atlantic storms supplied by Ireland. For more detail on 1939 -- 45, see main article The Emergency. In 1949 the state was formally declared a republic and it left the British Commonwealth. In the 1960s, Ireland underwent a major economic change under reforming Taoiseach (prime minister) Seán Lemass and Secretary of the Department of Finance T.K. Whitaker, who produced a series of economic plans. Free second - level education was introduced by Donogh O'Malley as Minister for Education in 1968. 
From the early 1960s, Ireland sought admission to the European Economic Community but, because 90 % of its exports went to the United Kingdom market, it did not do so until the UK did, in 1973. Global economic problems in the 1970s, compounded by a set of misjudged economic policies followed by governments, including that of Taoiseach Jack Lynch, caused the Irish economy to stagnate. The Troubles in Northern Ireland discouraged foreign investment. Devaluation was enabled when the Irish Pound, or Punt, was established as a separate currency in 1979, breaking the link with the UK 's sterling. However, economic reforms in the late 1980s, helped by investment from the European Community, led to the emergence of one of the world 's highest economic growth rates, with mass immigration (particularly of people from Asia and Eastern Europe) as a feature of the late 1990s. This period came to be known as the Celtic Tiger and was focused on as a model for economic development in the former Eastern Bloc states, which entered the European Union in the early 2000s. Property values rose by a factor of between four and ten between 1993 and 2006, in part fuelling the boom. Irish society adopted relatively liberal social policies during this period. Divorce was legalised, homosexuality decriminalised, and abortion in limited cases was allowed by the Irish Supreme Court in the X Case legal judgement. Major scandals in the Roman Catholic Church, both sexual and financial, coincided with a widespread decline in religious practice, with weekly attendance at Roman Catholic Mass dropping by half in twenty years. A series of tribunals set up from the 1990s have investigated alleged malpractices by politicians, the Catholic clergy, judges, hospitals and the Gardaí (police). Ireland 's new - found prosperity ended abruptly in 2008 when the banking system collapsed due to the bursting of the Irish property bubble. Some 25 - 26 % of GDP was needed to bail out failing Irish banks and force banking sector consolidation. This was the largest banking bailout for any country in history; by comparison, only 7 -- 8 % of GDP was needed to bail out failing Finnish banks in that country 's banking crisis in the 1990s. This resulted in a major financial and political crisis as Ireland entered a recession. Emigration rose to 1989 levels as the unemployment rate rose from 4.2 % in 2007 to reach 14.6 % as of February 2012. However, since 2014, Ireland has seen strong economic growth, dubbed the "Celtic Phoenix ''. The Government of Ireland Act 1920 created the state of Northern Ireland, which consisted of the six northeastern counties of Londonderry, Tyrone, Fermanagh, Antrim, Down and Armagh. From 1921 to 1972, Northern Ireland was governed by a Unionist government, based at Stormont in east Belfast. The Unionist leader and first Prime Minister, James Craig, declared that it would be "a Protestant State for a Protestant People ''. Craig 's goal was to establish and preserve Protestant authority in the new state, above all by securing a unionist majority. In 1926 the majority of the population in the province were Presbyterian and Anglican, thereby solidifying Craig 's Protestant political power. The Ulster Unionist Party thereafter formed every government until 1972. 
Discrimination against the minority nationalist community in jobs and housing, and their total exclusion from political power due to the majoritarian electoral system, led to the emergence of the Northern Ireland Civil Rights Association in the late 1960s, inspired by Martin Luther King 's civil rights movement in the United States of America. Militant forces among both the Northern Protestants and the Northern Catholics (the IRA) turned to brutal acts of violence to establish power. As time went on it became clear that these two rival factions could bring about a civil war. After the Second World War, maintaining cohesion within Stormont seemed impossible; increased economic pressures, solidified Catholic unity, and British involvement ultimately led to Stormont 's collapse. As the civil rights movement of the United States gained worldwide acknowledgement, Catholics rallied together to achieve a similar socio - political recognition. This resulted in the formation of various organisations such as the Campaign for Social Justice (CSJ) in 1964 and the Northern Ireland Civil Rights Association (NICRA) in 1967. Non-violent protest became an increasingly important factor in mobilising Catholic sympathies and opinion and thus more effective in generating support than actively violent groups such as the IRA. However, these non-violent protests posed a problem for Northern Ireland 's prime minister Terence O'Neill (in office from 1963), because they hampered his efforts to persuade Catholics in Northern Ireland that they too, like their Protestant counterparts, belonged within the United Kingdom. Despite O'Neill's reforming efforts there was growing discontent amongst both Catholics and Unionists. In October 1968 a peaceful civil rights march in Derry turned violent as police brutally beat protesters. The outbreak was televised by international media, and as a result the march was highly publicised, which further confirmed the socio - political turmoil in Ireland. A violent counter-reaction from conservative unionists led to civil disorder, notably the Battle of the Bogside and the Northern Ireland riots of August 1969. To restore order, British troops were deployed to the streets of Northern Ireland at that time. The violent outbreaks in the late 1960s encouraged and helped strengthen militant groups such as the IRA, who posited themselves as the protectors of the working - class Catholics who were vulnerable to police and civilian brutality. During the late sixties and early seventies recruitment into the IRA dramatically increased as street and civilian violence worsened. The intervention of the British troops proved to be insufficient to quell the violence and thus solidified the IRA 's growing military importance. On 30 January 1972 the worst tensions came to a head with the events of Bloody Sunday. Paratroops opened fire on civil rights protesters in Derry, killing 13 unarmed civilians. Bloody Friday, Bloody Sunday, and other violent acts in the early 1970s came to be known as the Troubles. The Stormont parliament was prorogued in 1972 and abolished in 1973. Paramilitary private armies such as the Provisional Irish Republican Army (which resulted from a split within the IRA), the Official IRA and the Irish National Liberation Army fought against the Ulster Defence Regiment and the Ulster Volunteer Force. Moreover, the British army and the (largely Protestant) Royal Ulster Constabulary (RUC) also took part in the chaos that resulted in the deaths of over 3,000 men, women and children, both civilian and military. 
Most of the violence took place in Northern Ireland, but some also spread to England and across the Irish border. For the next 27½ years, with the exception of five months in 1974, Northern Ireland was under "direct rule '', with a Secretary of State for Northern Ireland in the British Cabinet responsible for the departments of the Northern Ireland government. Direct Rule was designed to be a temporary solution until Northern Ireland was capable of governing itself again. Principal acts were passed by the Parliament of the United Kingdom in the same way as for much of the rest of the UK, but many smaller measures were dealt with by Order in Council with minimal parliamentary scrutiny. Attempts were made to establish a power - sharing executive, representing both the nationalist and unionist communities, by the Northern Ireland Constitution Act of 1973 and the Sunningdale Agreement of December 1973. Both, however, did little to create cohesion between Northern Ireland and the Republic of Ireland. The Constitution Act of 1973 formalised the UK government 's affirmation that the reunification of Ireland could come about by consent only, thereby ultimately delegating authority over the border question from Stormont to the people of Northern Ireland (and the Republic of Ireland). Conversely, the Sunningdale Agreement included a "provision of a Council of Ireland which held the right to execute executive and harmonizing functions ''. Most significantly, the Sunningdale Agreement brought together political leaders from Northern Ireland, the Republic of Ireland and the UK to deliberate for the first time since 1925. The Northern Ireland Constitutional Convention and Jim Prior 's 1982 assembly were also temporarily implemented; however, all failed either to reach consensus or to operate in the longer term. During the 1970s British policy concentrated on defeating the Provisional Irish Republican Army (IRA) by military means, including the policy of Ulsterisation (requiring the RUC and the British Army 's reserve Ulster Defence Regiment to be at the forefront of combating the IRA). Although IRA violence decreased, it was obvious that no military victory was at hand in either the short or medium term. Even Catholics who generally rejected the IRA were unwilling to offer support to a state that seemed to remain mired in sectarian discrimination, and the Unionists were not interested in Catholic participation in running the state in any case. In the 1980s the IRA attempted to secure a decisive military victory based on massive arms shipments from Libya. When this failed, senior republican figures began to look to broaden the struggle beyond purely military means. In time this began a move towards military cessation. In 1985 the Irish and British governments signed the Anglo - Irish Agreement, signalling a formal partnership in seeking a political solution. The Anglo - Irish Agreement (AIA) recognised the Irish government 's right to be consulted and heard, as well as guaranteeing equality of treatment and recognition of the Irish and British identities of the two communities. The agreement also stated that the two governments must implement cross-border co-operation. Socially and economically, Northern Ireland suffered the worst levels of unemployment in the UK, and although high levels of public spending ensured a slow modernisation of public services and moves towards equality, progress was slow in the 1970s and 1980s. Only in the 1990s, when progress toward peace became tangible, did the economic situation brighten. 
By then the demographics of Northern Ireland had undergone significant change, and more than 40 % of the population was Catholic. More recently, the Belfast Agreement ("Good Friday Agreement '') of 10 April 1998 brought -- on 2 December 1999 -- a degree of power sharing to Northern Ireland, giving both unionists and nationalists control of limited areas of government. However, both the power - sharing Executive and the elected Assembly were suspended between January and May 2000, and from October 2002 until April 2007, following breakdowns in trust between the political parties involving outstanding issues, including "decommissioning '' of paramilitary weapons, policing reform and the removal of British army bases. In new elections in 2003, the moderate Ulster Unionist Party and the (nationalist) Social Democratic and Labour Party lost their dominant positions to the more hard - line Democratic Unionist and (nationalist) Sinn Féin parties. On 28 July 2005, the Provisional IRA announced the end of its armed campaign, and on 25 September 2005 international weapons inspectors supervised the full disarmament of the PIRA. Eventually, devolution was restored in April 2007. Ireland 's economy became more diverse and sophisticated than ever before by integrating itself into the global economy. In 1973, Ireland acceded to the European Economic Community (EEC), precursor to the European Community (EC) and European Union (EU), at the same time as the UK. By the beginning of the 1990s Ireland had transformed itself into a modern industrial economy and generated substantial national income that benefited the entire nation. Although dependence on agriculture still remained high, Ireland 's industrial economy produced sophisticated goods that rivalled international competition. Ireland 's international economic boom of the 1990s led to its being called a Celtic Tiger. The Catholic Church, which once exercised great power, found its influence on socio - political issues in Ireland much reduced. Irish bishops were no longer able to advise and influence the public on how to exercise their political rights. The detachment of the Church from ordinary life in modern Ireland can be explained by younger generations ' growing lack of interest in Church doctrine and by the questionable morality of some of the Church 's representatives. A highly publicised case was that of Eamonn Casey, the Bishop of Galway, who resigned abruptly in 1992 after it was revealed that he had had an affair with an American woman and had fathered a child. Further controversies and scandals arose concerning paedophile and child - abusing priests. As a result, many in the Irish public began to question the credibility and effectiveness of the Catholic Church. In 2011 Ireland closed its embassy at the Vatican, an apparent result of this growing trend. The national flag of Ireland is a tricolour of green, white and orange. This flag, which bears the colours green for Irish Catholics, orange for Irish Protestants, and white for the desired peace between them, dates to the mid-19th century. The tricolour was first unfurled in public by Young Irelander Thomas Francis Meagher who, using the symbolism of the flag, explained his vision as follows: "The white in the centre signifies a lasting truce between the "Orange '' and the "Green, '' and I trust that beneath its folds the hands of the Irish Protestant and the Irish Catholic may be clasped in generous and heroic brotherhood ". 
Fellow nationalist John Mitchel said of it: "I hope to see that flag one day waving as our national banner. '' After its use in the 1916 Rising it became widely accepted by nationalists as the national flag, and was used officially by the Irish Republic (1919 -- 21) and the Irish Free State (1922 -- 37). In 1937, when the Constitution of Ireland was introduced, the tricolour was formally confirmed as the national flag: "The national flag is the tricolour of green, white and orange. '' While the tricolour today is the official flag of Ireland, it is not an official flag in Northern Ireland, although it is sometimes used there unofficially. The only official flag representing Northern Ireland is the Union Flag of the United Kingdom of Great Britain and Northern Ireland; however, its use is controversial. The Ulster Banner is sometimes used unofficially as a de facto regional flag for Northern Ireland. Since Partition, there has been no universally accepted flag to represent the entire island. As a provisional solution for certain sports fixtures, the Flag of the Four Provinces enjoys a certain amount of general acceptance and popularity. Historically a number of other flags have also been used: St Patrick 's Saltire was formerly used to represent the island of Ireland by the all - island Irish Rugby Football Union (IRFU), before adoption of the four - provinces flag, while the Gaelic Athletic Association (GAA) uses the tricolour to represent the whole island. Ireland has a very large historiography, contributed by scholars in Ireland, North America, and Britain. There has been both a standard interpretation and, since the late 1930s, a good deal of revisionism. One of the most important themes has always been Irish nationalism, which has led to numerous monographs and debates. A great deal of attention has focused on the Irish revolutionary period, 1912 - 23. Starting in 2012 a series of conferences on "Reflecting on a decade of War and Revolution in Ireland 1912 - 1923: Historians and Public History '' brought together hundreds of academics, teachers, and the general public. Ireland in some ways was the first acquisition of the British Empire. Marshall says historians continue to debate whether Ireland should be considered part of the British Empire. Recent work by historians pays special attention to continuing Imperial aspects of Irish history, Atlantic Ocean history, and the role of migration in forming the Irish diaspora across the Empire and North America. As historiography evolves, new approaches have been applied to the Irish situation. Studies of women, and gender relationships more generally, had been rare before 1990; they are now commonplace, with over 3,000 books and articles. Postcolonialism is an approach in several academic disciplines that seeks to analyze, explain, and respond to the cultural legacies of colonialism and imperialism. The emphasis is usually on the human consequences of controlling a country and establishing settlers for the economic exploitation of the native people and their land.
how much did chelsea sell de bruyne to wolfsburg
Kevin De Bruyne - wikipedia Kevin De Bruyne (Dutch pronunciation: (ˈkɛvɪn də ˈbrœynə); born 28 June 1991) is a Belgian professional footballer who plays as a midfielder for English club Manchester City and the Belgian national team. His playing style has frequently led to the media, coaches, and colleagues ranking him among the best players in Europe, and has often been described as a "complete '' footballer. He was ranked the fourth - best footballer in the world by The Guardian in 2017. De Bruyne began his career at Genk, where he was a regular player when they won the 2010 -- 11 Belgian Pro League. In 2012, he joined English club Chelsea, where he was used sparingly and then loaned to Werder Bremen. He signed with Wolfsburg for £ 18 million in 2014, and in 2015 he was named Footballer of the Year in Germany. Later that year, he joined Manchester City for a club record £ 54 million. De Bruyne made his full international debut in 2010, and has earned over 50 caps for Belgium. He was part of the Belgian squad that reached the quarter - finals of the 2014 FIFA World Cup and UEFA Euro 2016. De Bruyne began his career with hometown club KVV Drongen in 2003. Two years later, he joined Gent and moved to Genk in 2005. De Bruyne continued his development in their youth set - up and was rewarded for his progress by being promoted to the first team squad in 2008. De Bruyne made his first team debut for Genk in a 3 -- 0 defeat at Charleroi on 9 May 2009. Having established himself in the team the following season, on 7 February 2010, De Bruyne scored his first goal for the club, which secured all three points for Genk in a 1 -- 0 win against Standard Liège. He scored five goals and made 16 assists in 32 league matches during the 2010 -- 11 season as Genk were crowned Belgian champions for the third time. On 29 October 2011, De Bruyne scored his first hat - trick for Genk against Club Brugge, which ended in a 5 -- 4 win for Genk. On 28 January 2012, De Bruyne scored a brace against OH Leuven in a 5 -- 0 win. On 18 February 2012, De Bruyne scored his first goal back at Genk following his agreed transfer to Chelsea and also assisted the other goal in a 1 -- 2 away win against Mons. De Bruyne ended the season by wrapping up the scoring in a 3 -- 1 victory over Gent. He finished the league campaign with eight goals from 28 appearances. On 31 January 2012, on the winter transfer deadline day, Premier League club Chelsea and Genk announced the permanent signing of De Bruyne, with the fee rumoured to be in the region of £ 7 million. He signed a five - and - a-half - year contract at Stamford Bridge, but would stay at Genk for the remainder of the 2011 -- 12 season. De Bruyne told the club website, "To come to a team like Chelsea is a dream but now I have to work hard to achieve the level that 's necessary. '' On 18 July 2012, De Bruyne made his debut for Chelsea in a friendly match against Major League Soccer (MLS) side Seattle Sounders FC in a 4 -- 2 win. De Bruyne also played the first half against Ligue 1 giants Paris Saint - Germain at Yankee Stadium, New York. On 2 August 2012, Chelsea announced that De Bruyne was to join Werder Bremen in the Bundesliga on a season - long loan deal after having successfully completed a medical. De Bruyne scored his first goal for Bremen in a 3 -- 2 defeat to Hannover 96 on 15 September, netting from 11 yards out after being played in by Eljero Elia. De Bruyne continued his good form, scoring in Bremen 's next game, a 2 -- 2 draw with VfB Stuttgart, on 23 September. 
De Bruyne got back on the score sheet on 18 November, scoring the winning goal -- despite his team being down to 10 men -- as Bremen came from a goal down to defeat Fortuna Düsseldorf 2 -- 1. On 4 May 2013, De Bruyne scored his first goal in over two months -- his previous goal having been a consolation in Bayern Munich 's 6 -- 1 hammering of Bremen -- putting his side up 2 -- 0 at home to TSG 1899 Hoffenheim before a late brace from Sven Schipplock meant that the game finished 2 -- 2. He followed this up with a goal in Bremen 's next match, securing a place in the Bundesliga for the next season with a 1 -- 1 draw against Eintracht Frankfurt on 11 May. After a successful loan spell in the Bundesliga with Werder Bremen, De Bruyne was linked with a move to stay in Germany with either Borussia Dortmund or Bayer Leverkusen. Incoming manager José Mourinho, however, assured De Bruyne he was a part of Chelsea 's plan for the future, and the player officially returned to Chelsea on 1 July 2013. De Bruyne injured a knee while scoring his first goal for Chelsea, in a pre-season friendly game against a Malaysia XI, but was fit to make his competitive debut on the opening day of the 2013 -- 14 Premier League season against Hull City, and made an assist for the first goal in a 2 -- 0 win. On 18 January 2014, Wolfsburg signed De Bruyne for a fee of £ 18 million (€ 22 million). On 25 January 2014, he made his debut for Wolfsburg in a 3 -- 1 home loss against Hannover 96. On 12 April 2014, De Bruyne assisted two goals in their 4 -- 1 home win against 1. FC Nürnberg. A week later he scored his first goal for Wolfsburg in a 3 -- 1 away win against Hamburger SV. He also scored in the last two matches of the Bundesliga season, helping his team to win against VfB Stuttgart and Borussia Mönchengladbach. De Bruyne scored his first goal of the 2014 -- 15 season on 2 October 2014, volleying in a clearance from outside the box to salvage a 1 -- 1 draw against Lille in the Europa League. In the third group match away to Krasnodar on 23 October, De Bruyne scored twice as Wolfsburg secured their first win in the competition with a 4 -- 2 victory. On 30 January 2015, he scored another brace in a 4 -- 1 home win against Bayern Munich, Bayern 's first Bundesliga defeat since April 2014. On 1 March 2015, De Bruyne assisted three goals in a 5 -- 3 win over his former club Werder Bremen. On 12 March 2015, De Bruyne scored two goals in a 3 -- 1 first - leg Europa League round - of - 16 victory over Internazionale. On 15 March 2015, he scored one goal and assisted another two in a 3 -- 0 victory over SC Freiburg. De Bruyne ended the league season with 10 goals and 21 assists, the latter a new Bundesliga record, as Wolfsburg finished second in the Bundesliga and qualified for the 2015 -- 16 UEFA Champions League. On 30 May 2015, he started and scored in the 2015 DFB - Pokal Final as Die Wölfe defeated Borussia Dortmund 3 -- 1 at the Olympiastadion in Berlin. De Bruyne ended his breakout season with 16 goals and 27 assists in all competitions, and was named the 2015 Footballer of the Year in Germany. De Bruyne began the following season by winning the 2015 DFL - Supercup against Bayern Munich, providing the cross for Nicklas Bendtner 's 89th - minute equaliser for a 1 -- 1 draw and then scoring in the subsequent penalty shootout. On 8 August 2015, he continued his good form by scoring his first goal of the season, and providing two assists, in a 4 -- 1 win at Stuttgarter Kickers in the first round of the DFB - Pokal. 
In August, De Bruyne, in the midst of transfer speculation, insisted that he would not force Wolfsburg to sell him, but admitted that he could not ignore interest from Manchester City, saying: "If an offer does come, I will hear about it and how much it is, but I have not yet heard anything... I would not go to England just to prove that I can play there. I do not have to go to England... If I go there it 's because for me and for my family it is a good choice. That 's the key for me. '' On 10 August, it was reported that Manchester City had made a second bid for De Bruyne worth £ 47 million. Wolfsburg sporting director, Klaus Allofs, stated that the club would fight to keep him, saying "I think some other clubs have definitely turned Kevin 's head... Some huge figures are doing the rounds and I can understand why Kevin is leaving everything open. '' On 27 August, it was reported that Manchester City had made a bid for De Bruyne worth £ 58 million. Klaus Allofs said that City had made an "astonishing '' wage offer to De Bruyne. On 30 August 2015, Manchester City announced the arrival of De Bruyne on a six - year contract, for a reported club - record fee of £ 55 million (€ 75 million) making him the second most expensive transfer in British football history after Ángel Di María 's move to Manchester United in 2014. He made his debut for the team in the Premier League on 12 September against Crystal Palace, replacing injured Sergio Agüero in the 25th minute. On 19 September, he scored his first goal for the club against West Ham United in first half stoppage time in an eventual 2 -- 1 loss. He went on to score in a 4 -- 1 League Cup win against Sunderland, on 22 September and a 4 -- 1 loss to Tottenham Hotspur in the Premier League on 26 September. On 3 October, he scored in the team 's 6 -- 1 win against Newcastle United. On 2 October, De Bruyne was announced as one of the players on the longlist for the prestigious FIFA Ballon d'Or award, alongside such teammates as Sergio Agüero and Yaya Touré. Just 18 days later, on 20 October he was revealed by FIFA as one of the players on the 23 - man shortlist for the Ballon d'Or. On 21 October, De Bruyne scored an injury - time winner against Sevilla in the UEFA Champions League, to take City within one point of group leaders Juventus, with three games remaining. On 1 December, he scored a brace in a 4 -- 1 win over Hull City to send Manchester City through to the semi-finals of the Football League Cup. On 27 January 2016, De Bruyne scored one in a League Cup semi-final 3 -- 1 victory over Everton, but sustained an injury to his right knee that would keep him out of the team for two months. On 2 April, De Bruyne made his return from injury in a 4 -- 0 win against Bournemouth at Dean Court, scoring the team 's second goal in the twelfth minute. Four days later, he scored the opening goal in a 2 -- 2 draw with Paris Saint - Germain in the UEFA Champions League quarter - final first - leg at the Parc des Princes. On 12 April, De Bruyne scored the winning goal against Paris Saint - Germain, advancing Manchester to the Champions League semi-finals, for the first time in the club 's history, on an aggregate score of 3 -- 2. Writing in The Independent, Mark Ogden said, "It was a stunning goal from the Belgian, who took a touch to control the ball before curling it beyond Kevin Trapp from the edge of the penalty area. 
'' De Bruyne 's next goal came on 8 May 2016 in a 2 -- 2 draw with Arsenal, although the result left City 's Champions League qualification hopes out of their own hands. On 10 September 2016, De Bruyne scored and assisted in the first Manchester derby of the season, which City won 2 -- 1, and was awarded the Man of the Match. On 17 September 2016, De Bruyne was again awarded the Man of the Match in a 4 -- 0 win over Bournemouth, after which Pep Guardiola praised his brilliant performances for City; De Bruyne scored the first goal, assisted the fourth, and provided key passes on both the second and third goals. After the international break, Manchester City drew their next game, against Everton on 15 October 2016, with the scoreline finishing 1 -- 1. Agüero and De Bruyne both missed penalties, while Nolito came off the bench to equalise for City. On 1 November De Bruyne scored from a free kick in the team 's 3 -- 1 win over FC Barcelona. On 21 January 2017, De Bruyne was involved in both of City 's goals, as he netted once himself and also assisted Leroy Sané 's, in a 2 -- 2 home draw with Tottenham Hotspur; he was subsequently named Man of the Match. On 19 March 2017, De Bruyne put in an excellent performance in a 1 -- 1 draw against Liverpool at the Etihad Stadium, where he set up a goal for Agüero. On 9 September 2017, De Bruyne set up goals for both Agüero and Gabriel Jesus in a 5 -- 0 home victory over Liverpool. On 30 September 2017, he scored his first goal of the 2017 -- 18 Premier League season as the Citizens overcame Chelsea 1 -- 0 at Stamford Bridge. On 22 January 2018, De Bruyne signed a new contract with City, keeping him at the club until 2023. Early in De Bruyne 's career it was rumoured that his mother had been born in Ealing and that he was therefore eligible to play for the England national team, but in fact his mother was born in Burundi and moved to Ealing as a child. Due to his mother 's place of birth, he was also eligible to play for the Burundi national team. De Bruyne was capped by Belgium at under - 18, under - 19, and under - 21 level. He made his debut for the Belgian senior team on 11 August 2010 in an international friendly against Finland in Turku; the game ended in a 1 -- 0 loss for Belgium. De Bruyne became a regular member of Belgium 's team during the 2014 FIFA World Cup qualification campaign, in which he scored four goals as the Red Devils qualified for their first major tournament in 12 years. On 13 May 2014, he was named in Belgium 's squad for the 2014 FIFA World Cup. In their first game of the tournament, against Algeria in Belo Horizonte, De Bruyne assisted Marouane Fellaini 's equaliser and was named man of the match by FIFA. In the round of 16, De Bruyne scored Belgium 's opening goal in the third minute of extra time as they defeated the United States 2 -- 1. On 10 October 2014, De Bruyne scored twice in a 6 -- 0 rout of Andorra in UEFA Euro 2016 qualifying, equalling the team 's record victory in a European qualifier, set in 1986. In June 2016, De Bruyne played at UEFA Euro 2016. De Bruyne plays mainly as a central or attacking midfielder but can also operate as a winger or second striker. He is often described as one of the best modern - day advanced playmakers due to his technique, wide range of passing, and long - range shooting skills. De Bruyne speaks Dutch, French and English fluently. He has one sister, Stefanie De Bruyne. 
His mother is English, but was born in Burundi and has also lived in the Ivory Coast. In a 2013 interview, De Bruyne said: "My mother has an English mentality, but I am fully Belgian. '' His hometown, Drongen, a submunicipality of the city of Ghent, is situated in Flanders, the Dutch - speaking part of Belgium. In 2013 the Daily Mail alleged that Chelsea teammate Thibaut Courtois had had an affair with De Bruyne 's then - girlfriend Caroline Lijnen. De Bruyne attacked Courtois while training with the Belgian national team, but the two later reconciled. Since 2014, De Bruyne has been in a relationship with Michèle Lacroix, who announced on 28 September 2015 that she was pregnant with the couple 's son. Mason Milian De Bruyne was born on 10 March 2016, and De Bruyne and Lacroix married in June 2017. In 2015, in preparation for the birth of his first child, De Bruyne bought his first car, a Mercedes, having previously relied on club vehicles. His autobiography, entitled Keep It Simple (ISBN 9789089314826), was published in October 2014.
what is the most popular religion in united states
Religion in the United States - Wikipedia Religion in the United States is characterized by a diversity of religious beliefs and practices. Various religious faiths have flourished within the United States. A majority of Americans report that religion plays a very important role in their lives, a proportion unique among developed countries. Historically, the United States has always been marked by religious pluralism and diversity, beginning with the various native beliefs of the pre-colonial period. In colonial times, Anglicans, Catholics and mainline Protestants, as well as Jews, arrived from Europe. Eastern Orthodoxy has been present since the Russian colonization of Alaska. Various dissenting Protestants, who left the Church of England, greatly diversified the religious landscape. The Great Awakenings gave birth to multiple evangelical Protestant denominations; membership in Methodist and Baptist churches increased drastically in the Second Great Awakening. In the 18th century, deism found support among American upper classes and thinkers. The Episcopal Church, splitting from the Church of England, came into being in the American Revolution. New Protestant branches like Adventism emerged; Restorationists and other Christians like the Jehovah 's Witnesses, the Latter Day Saint movement, Churches of Christ and Church of Christ, Scientist, as well as Unitarian and Universalist communities, all spread in the 19th century. Pentecostalism emerged in the early 20th century as a result of the Azusa Street Revival. Scientology emerged in the 1950s. Unitarian Universalism resulted from the merger of Unitarian and Universalist churches in the 20th century. Since the 1990s, the share of Christians in the population has decreased due to secularization, while Buddhism, Hinduism, Islam, and other religions have spread. Protestantism, historically dominant, ceased to be the religious category of the majority in the early 2010s. Christianity is the largest religion in the United States, with the various Protestant Churches having the most adherents. In 2016, Christians represented 73.7 % of the total population, with 48.9 % identifying as Protestants, 23.0 % as Catholics, and 1.8 % as Mormons; they were followed by people with no religion, at 18.2 % of the total population. Judaism is the second - largest religion in the U.S., practised by 2.1 % of the population, followed by Islam with 0.8 %. Mississippi is the most religious state in the country, with 63 % of its adult population described as very religious, saying that religion is important to them and attending religious services almost every week, while New Hampshire, with only 20 % of its adult population described as very religious, is the least religious state. The most religious region of the United States is American Samoa (99.3 % religious). From early colonial days, when some English and German settlers moved in search of religious freedom, America has been profoundly influenced by religion. That influence continues in American culture, social life, and politics. Several of the original Thirteen Colonies were established by settlers who wished to practice their own religion within a community of like - minded people: the Massachusetts Bay Colony was established by English Puritans (Congregationalists), Pennsylvania by British Quakers, Maryland by English Catholics, and Virginia by English Anglicans. 
Despite these, and as a result of intervening religious strife and preference in England the Plantation Act 1740 would set official policy for new immigrants coming to British America until the American Revolution. The text of the First Amendment to the country 's Constitution states that "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances. '' It guarantees the free exercise of religion while also preventing the government from establishing a state religion. However, the states were not bound by the provision and as late as the 1830s Massachusetts provided tax money to local Congregational churches. The Supreme Court since the 1940s has interpreted the Fourteenth Amendment as applying the First Amendment to the state and local governments. President John Adams and a unanimous Senate endorsed the Treaty of Tripoli in 1797 that stated: "the Government of the United States of America is not, in any sense, founded on the Christian religion. '' Expert researchers and authors have referred to the United States as a "Protestant nation '' or "founded on Protestant principles, '' specifically emphasizing its Calvinist heritage. The modern official motto of the United States of America, as established in a 1956 law signed by President Dwight D. Eisenhower, is "In God We Trust ''. The phrase first appeared on U.S. coins in 1864. According to a 2002 survey by the Pew Research Center, nearly 6 in 10 Americans said that religion plays an important role in their lives, compared to 33 % in Great Britain, 27 % in Italy, 21 % in Germany, 12 % in Japan, and 11 % in France. The survey report stated that the results showed America having a greater similarity to developing nations (where higher percentages say that religion plays an important role) than to other wealthy nations, where religion plays a minor role. In 1963, 90 % of U.S. adults claimed to be Christians while only 2 % professed no religious identity. In 2016, 73.7 % identified as Christians while 18.2 % claimed no religious affiliation. The United States federal government was the first national government to have no official state - endorsed religion. However, some states had established religions in some form until the 1830s. Modeling the provisions concerning religion within the Virginia Statute for Religious Freedom, the framers of the Constitution rejected any religious test for office, and the First Amendment specifically denied the federal government any power to enact any law respecting either an establishment of religion or prohibiting its free exercise, thus protecting any religious organization, institution, or denomination from government interference. The decision was mainly influenced by European Rationalist and Protestant ideals, but was also a consequence of the pragmatic concerns of minority religious groups and small states that did not want to be under the power or influence of a national religion that did not represent them. The most popular religion in the U.S. is Christianity, comprising the majority of the population (73.7 % of adults in 2016). According to the Association of Statisticians of American Religious Bodies newsletter published March 2017, based on data from 2010, Christians were the largest religious population in all 3,143 counties in the country. 
Roughly 48.9 % of Americans are Protestants, 23.0 % are Catholics, and 1.8 % are Mormons (the name commonly used to refer to members of The Church of Jesus Christ of Latter - day Saints). Christianity was introduced during the period of European colonization. A 2012 review by the National Council of Churches ranked the largest denominations in the country; the Southern Baptist Convention, with over 16 million adherents, is the largest of more than 200 distinctly named Protestant denominations. In 2007, members of evangelical churches comprised 26 % of the American population, while another 18 % belonged to mainline Protestant churches, and 7 % belonged to historically black churches. A 2015 study estimates some 450,000 Christian believers from a Muslim background in the country, most of them belonging to some form of Protestantism. In 2010 there were approximately 180,000 Arab Americans and about 130,000 Iranian Americans who had converted from Islam to Christianity. Dudley Woodbury, a Fulbright scholar of Islam, estimates that 20,000 Muslims convert to Christianity annually in the United States. Historians agree that members of mainline Protestant denominations have played leadership roles in many aspects of American life, including politics, business, science, the arts, and education. They founded most of the country 's leading institutes of higher education. According to Harriet Zuckerman, 72 % of American Nobel Prize laureates between 1901 and 1972 have been identified as coming from a Protestant background. Episcopalians and Presbyterians tend to be considerably wealthier and better educated than most other religious groups, and a number of the wealthiest and most affluent American families, such as the Vanderbilts, Astors, Rockefellers, Du Ponts, Roosevelts, Forbeses, Whitneys, Morgans and Harrimans, are Mainline Protestant families, though those affiliated with Judaism are the wealthiest religious group in the United States and those affiliated with Catholicism, owing to sheer size, have the largest number of adherents of all groups in the top income bracket. Some of the first colleges and universities in America, including Harvard, Yale, Princeton, Columbia, Dartmouth, Williams, Bowdoin, Middlebury, and Amherst, were founded by mainline Protestant denominations. By the 1920s most had weakened or dropped their formal connection with a denomination. Beginning around 1600, European settlers introduced the Anglican and Puritan religions, as well as the Baptist, Presbyterian, Lutheran, Quaker, and Moravian denominations. Beginning in the 16th century, the Spanish (and later the French and English) introduced Catholicism. From the 19th century to the present, Catholics moved to the US in large numbers due to the immigration of Italians, Hispanics, Portuguese, French, Polish, Irish, Highland Scots, Dutch, Flemish, Hungarians, Germans, Lebanese (Maronite), and other ethnic groups. During the 19th century, two main branches of Eastern Christianity also arrived in America. Eastern Orthodoxy was brought to America by Greek, Russian, Ukrainian, Serbian, and other immigrant groups, mainly from Eastern Europe. At the same time, several immigrant groups from the Middle East, mainly Armenians, Copts and Syriacs, brought Oriental Orthodoxy to America. Several Christian groups were founded in America during the Great Awakenings. 
Interdenominational evangelicalism and Pentecostalism emerged; new Protestant denominations such as Adventism; non-denominational movements such as the Restoration Movement (which over time separated into the Churches of Christ, the Christian churches and churches of Christ, and the Christian Church (Disciples of Christ)); Jehovah 's Witnesses (called "Bible Students '' in the latter part of the 19th century); and The Church of Jesus Christ of Latter - day Saints (Mormonism). The strength of various sects varies greatly in different regions of the country, with rural parts of the South having many evangelicals but very few Catholics (except Louisiana and the Gulf Coast, and from among the Hispanic community, both of which consist mainly of Catholics), while urbanized areas of the north Atlantic states and Great Lakes, as well as many industrial and mining towns, are heavily Catholic, though still quite mixed, especially due to the heavily Protestant African - American communities. In 1990, nearly 72 % of the population of Utah was Mormon, as well as 26 % of neighboring Idaho. Lutheranism is most prominent in the Upper Midwest, with North Dakota having the highest percentage of Lutherans (35 % according to a 2001 survey). The largest religion, Christianity, has proportionately diminished since 1990. While the absolute number of Christians rose from 1990 to 2008, the percentage of Christians dropped from 86 % to 76 %. A nationwide telephone interview of 1,002 adults conducted by The Barna Group found that 70 % of American adults believe that God is "the all - powerful, all - knowing creator of the universe who still rules it today '', and that 9 % of all American adults and 0.5 % young adults hold to what the survey defined as a "biblical worldview ''. Episcopalian, Presbyterian, Eastern Orthodox and United Church of Christ members have the highest number of graduate and post-graduate degrees per capita of all Christian denominations in the United States, as well as the most high - income earners. However, owing to the sheer size or demographic head count of Catholics, more individual Catholics have graduate degrees and are in the highest income brackets than have or are individuals of any other religious community. After Christianity, Judaism is the next largest religious affiliation in the US, though this identification is not necessarily indicative of religious beliefs or practices. There are between 5.3 and 6.6 million Jews. A significant number of people identify themselves as American Jews on ethnic and cultural grounds, rather than religious ones. For example, 19 % of self - identified American Jews do not believe God exists. The 2001 ARIS study projected from its sample that there are about 5.3 million adults in the American Jewish population: 2.83 million adults (1.4 % of the U.S. adult population) are estimated to be adherents of Judaism; 1.08 million are estimated to be adherents of no religion; and 1.36 million are estimated to be adherents of a religion other than Judaism. ARIS 2008 estimated about 2.68 million adults (1.2 %) in the country identify Judaism as their faith. According to a 2017 study, Judaism is the religion of approximately 2 % of the American population. Jews have been present in what is now the US since the 17th century, and specifically allowed since the British colonial Plantation Act 1740. 
Although small Western European communities initially developed and grew, large - scale immigration did not take place until the late 19th century, largely as a result of persecutions in parts of Eastern Europe. The Jewish community in the United States is composed predominantly of Ashkenazi Jews whose ancestors emigrated from Central and Eastern Europe. There are, however, small numbers of older (and some recently arrived) communities of Sephardi Jews with roots tracing back to 15th century Iberia (Spain, Portugal, and North Africa). There are also Mizrahi Jews (from the Middle East, Caucasia and Central Asia), as well as much smaller numbers of Ethiopian Jews, Indian Jews, Kaifeng Jews and others from various smaller Jewish ethnic divisions. Approximately 25 % of the Jewish American population lives in New York City. According to the Association of Statisticians of American Religious Bodies newsletter published March, 2017, based on data from 2010, Jews were the largest minority religion in 231 counties out of the 3143 counties in the country. According to a 2014 survey conducted by the Pew Forum on Religion and Public life, 1.7 % of adults in the U.S. identify Judaism as their religion. Among those surveyed, 44 % said they were Reform Jews, 22 % said they were Conservative Jews, and 14 % said they were Orthodox Jews. According to the 1990 National Jewish Population Survey, 38 % of Jews were affiliated with the Reform tradition, 35 % were Conservative, 6 % were Orthodox, 1 % were Reconstructionists, 10 % linked themselves to some other tradition, and 10 % said they are "just Jewish ''. The Pew Research Center report on American Judaism released in October 2013 revealed that 22 % of Jewish Americans say they have "no religion '' and the majority of respondents do not see religion as the primary constituent of Jewish identity. 62 % believe Jewish identity is based primarily in ancestry and culture, only 15 % in religion. Among Jews who gave Judaism as their religion, 55 % based Jewish identity on ancestry and culture, and 66 % did not view belief in God as essential to Judaism. A 2009 study estimated the Jewish population (including both those who define themselves as Jewish by religion and those who define themselves as Jewish in cultural or ethnic terms) to be between 6.0 and 6.4 million. According to a study done in 2000 there were an estimated 6.14 million Jewish people in the country, about 2 % of the population. According to the 2001 National Jewish Population Survey, 4.3 million American Jewish adults have some sort of strong connection to the Jewish community, whether religious or cultural. Jewishness is generally considered an ethnic identity as well as a religious one. Among the 4.3 million American Jews described as "strongly connected '' to Judaism, over 80 % have some sort of active engagement with Judaism, ranging from attendance at daily prayer services on one end of the spectrum to attending Passover Seders or lighting Hanukkah candles on the other. The survey also discovered that Jews in the Northeast and Midwest are generally more observant than Jews in the South or West. Reflecting a trend also observed among other religious groups, Jews in the Northwestern United States are typically the least observant of tradition. The Jewish American community has higher household incomes than average, and is one of the best educated religious communities in the United States. 
Islam is the third largest religion in number in the United States, after Christianity and Judaism, representing the 0.8 % of the population in 2016. According to the Association of Statisticians of American Religious Bodies newsletter published in March 2017, based on data from 2010, Muslims were the largest minority religion in 392 counties out of the 3143 counties in the country. Islam in America effectively began with the arrival of African slaves. It is estimated that about 10 % of African slaves transported to the United States were Muslim. Most, however, became Christians, and the United States did not have a significant Muslim population until the arrival of immigrants from Arab and East Asian Muslim areas. According to some experts, Islam later gained a higher profile through the Nation of Islam, a religious group that appealed to black Americans after the 1940s; its prominent converts included Malcolm X and Muhammad Ali. The first Muslim elected in Congress was Keith Ellison in 2006, followed by André Carson in 2008. Research indicates that Muslims in the United States are generally more assimilated and prosperous than their counterparts in Europe. Like other subcultural and religious communities, the Islamic community has generated its own political organizations and charity organizations. The United States has perhaps the second largest Bahá'í community in the world. First mention of the faith in the U.S. was at the inaugural Parliament of World Religions, which was held at the Columbian Exposition in Chicago in 1893. In 1894, Ibrahim George Kheiralla, a Syrian Bahá'í immigrant, established a community in the U.S. He later left the main group and founded a rival movement. According to the Association of Statisticians of American Religious Bodies newsletter published March, 2017, based on data from 2010, Bahá'ís were the largest minority religion in 80 counties out of the 3143 counties in the country. Rastafarians began migrating to the United States in the 1950s, ' 60s and ' 70s from the religion 's 1930s birthplace, Jamaica. Marcus Garvey, who is considered a prophet by many Rastafarians, rose to prominence and cultivated many of his ideas in the United States. Buddhism entered the US during the 19th century with the arrival of the first immigrants from East Asia. The first Buddhist temple was established in San Francisco in 1853 by Chinese Americans. During the late 19th century Buddhist missionaries from Japan travelled to the US. During the same time period, US intellectuals started to take interest in Buddhism. The first prominent US citizen to publicly convert to Buddhism was Henry Steel Olcott in 1880 who is still honored in Sri Lanka for these efforts. An event that contributed to the strengthening of Buddhism in the US was the Parliament of the World 's Religions in 1893, which was attended by many Buddhist delegates sent from India, China, Japan, Vietnam, Thailand and Sri Lanka. The early 20th century was characterized by a continuation of tendencies that had their roots in the 19th century. The second half, by contrast, saw the emergence of new approaches, and the move of Buddhism into the mainstream and making itself a mass and social religious phenomenon. According to a 2016 study, Buddhists are approximately 1 % of the American population. 
According to the Association of Statisticians of American Religious Bodies newsletter published in March 2017, based on data from 2010, Buddhists were the largest minority religion in 186 counties out of the 3143 counties in the country. Hinduism is the fourth largest faith in the United States, representing approximately 1 % of the population in 2016. The first time Hinduism entered the U.S. is not clearly identifiable. However, large groups of Hindus have immigrated from India and other Asian countries since the enactment of the Immigration and Nationality Act of 1965. During the 1960s and 1970s Hinduism attracted widespread fascination, contributing to the development of New Age thought. During the same decades the International Society for Krishna Consciousness (a Vaishnavite Hindu reform organization) was founded in the US. In 2001, there were an estimated 766,000 Hindus in the US, about 0.2 % of the total population. According to the Association of Statisticians of American Religious Bodies newsletter published in March 2017, based on data from 2010, Hindus were the largest minority religion in 92 counties out of the 3143 counties in the country. In 2004 the Hindu American Foundation -- a national institution protecting the rights of the Hindu community in the U.S. -- was founded. American Hindus have one of the highest rates of educational attainment and household income among all religious communities, and tend to have lower divorce rates. Adherents of Jainism first arrived in the United States in the 20th century. The most significant period of Jain immigration was in the early 1970s. The United States has since become a center of the Jain diaspora. The Federation of Jain Associations in North America is an umbrella organization of local American and Canadian Jain congregations, formed to preserve, practice, and promote Jainism and the Jain way of life. Sikhism is a religion originating in the Indian subcontinent; it was introduced into the United States around the turn of the 20th century, when Sikhs started emigrating in significant numbers to work on farms in California. They were the first community to come from India to the US in large numbers. The first Sikh Gurdwara in America was built in Stockton, California, in 1912. In 2007, there were estimated to be between 250,000 and 500,000 Sikhs living in the United States, with the largest populations on the East and West Coasts and additional populations in Detroit, Chicago, and Austin. The United States also has a number of non-Punjabi converts to Sikhism. In 2004 there were an estimated 56,000 Taoists in the US. Taoism was popularized throughout the world through the writings and teachings of Lao Tzu and other Taoists, as well as the practice of Qigong, Tai Chi Chuan and other Chinese martial arts. In 2016, approximately 18.2 % of Americans declared themselves not religiously affiliated. A 2001 survey directed by Dr. Ariela Keysar for the City University of New York indicated that, amongst the more than 100 categories of response, "no religious identification '' had the greatest increase in population in both absolute and percentage terms. This category included atheists, agnostics, humanists, and others with no stated religious preferences. Figures are up from 14.3 million in 1990 to 34.2 million in 2008, representing an increase from 8 % of the total population in 1990 to 15 % in 2008. 
A nationwide Pew Research study published in 2008 put the figure of unaffiliated persons at 16.1 %, while another Pew study published in 2012 was described as placing the proportion at about 20 % overall and roughly 33 % for the 18 -- 29 - year - old demographic. In a 2006 nationwide poll, University of Minnesota researchers found that despite an increasing acceptance of religious diversity, atheists were generally distrusted by other Americans, who trusted them less than Muslims, recent immigrants and other minority groups in "sharing their vision of American society ''. They also associated atheists with undesirable attributes such as amorality, criminal behavior, rampant materialism and cultural elitism. However, the same study also reported that "The researchers also found acceptance or rejection of atheists is related not only to personal religiosity, but also to one 's exposure to diversity, education and political orientation -- with more educated, East and West Coast Americans more accepting of atheists than their Midwestern counterparts. '' Some surveys have indicated that doubts about the existence of the divine were growing quickly among Americans under 30. On March 24, 2012, American atheists sponsored the Reason Rally in Washington, D.C., followed by the American Atheist Convention in Bethesda, Maryland. Organizers called the estimated crowd of 8,000 -- 10,000 the largest - ever US gathering of atheists in one place. In the United States, Enlightenment philosophy (which itself was heavily inspired by deist ideals) played a major role in creating the principle of religious freedom, expressed in Thomas Jefferson 's letters and included in the First Amendment to the United States Constitution. American Founding Fathers, or Framers of the Constitution, who were especially noted for being influenced by this deist philosophy, include Thomas Jefferson, Benjamin Franklin, Cornelius Harnett, Gouverneur Morris, and Hugh Williamson. Their political speeches show distinct deistic influence. Other notable Founding Fathers may have been more directly deist. These include Thomas Paine, James Madison, possibly Alexander Hamilton, and Ethan Allen. Various polls have been conducted to determine Americans ' actual beliefs regarding a god. "Spiritual but not religious '' (SBNR) is a self - identified stance of spirituality that takes issue with organized religion as the sole or most valuable means of furthering spiritual growth. Spirituality places an emphasis upon the wellbeing of the "mind - body - spirit, '' so holistic activities such as tai chi, reiki, and yoga are common within the SBNR movement. In contrast to religion, spirituality has often been associated with the interior life of the individual. One fifth of the US public and a third of adults under the age of 30 are reportedly unaffiliated with any religion, but identify as being spiritual in some way. Of these religiously unaffiliated Americans, 37 % classify themselves as spiritual but not religious. Native American religions historically exhibited much diversity, and are often characterized by animism or panentheism. The membership of Native American religions in the 21st century comprises about 9,000 people. Neopaganism in the United States is represented by widely different movements and organizations. The largest Neopagan religion is Wicca, followed by Neo-Druidism. Other neopagan movements include Germanic Neopaganism, Celtic Reconstructionist Paganism, Hellenic Polytheistic Reconstructionism, and Semitic neopaganism. 
According to the American Religious Identification Survey (ARIS), there are approximately 30,000 druids in the United States. Modern Druidism arrived in North America first in the form of fraternal Druidic organizations in the nineteenth century, and orders such as the Ancient Order of Druids in America were founded as distinct American groups as early as 1912. In 1963, the Reformed Druids of North America (RDNA) was founded by students at Carleton College, Northfield, Minnesota. They adopted elements of Neopaganism into their practices, for instance celebrating the festivals of the Wheel of the Year. Wicca was advanced in North America in the 1960s by Raymond Buckland, an expatriate Briton who visited Gerald Gardner 's Isle of Man coven to gain initiation. Universal Eclectic Wicca was popularized in 1969 for a diverse membership drawing from both Dianic and British Traditional Wiccan backgrounds. Nordic Paganism is the umbrella term for polytheistic followers of the Proto - Norse period religions involving the Nordic pantheon of gods. This pantheon includes the Æsir (Odin, Thor, Loki, Sif, Heimdallr, Baldr, and Týr, to name a few) and the Vanir (Freyja, Freyr, Njörðr, Nerthus, and others). The followers of Nordic Paganism include Odinists, Tyrists, Lokians, Asatru, and practitioners of Seiðr, among others. Nordic Pagans follow the teachings of the Hávamál. This old text, along with the Prose Edda and Poetic Edda, gives the basis for Norse mythology, stories, legends, and beliefs. In popular culture, Norse mythology is well known. Fairy tale creatures such as elves, dwarves, giants, and trolls all come from Norse mythology, and can be seen in famous works such as J.R.R. Tolkien 's Lord of the Rings, one of the earlier works to bring Norse concepts to life. Comic book publisher Marvel Comics has taken a different approach, making Thor a modern superhero and Loki a villain; these comics have spawned several movies and toys. Norse mythology and the Scandinavian - Germanic way of life were examined more in - depth than ever before with the popular TV series Vikings. Though not perfect on accuracy, it comes close to how these people would have lived and worshiped in the 8th and 9th centuries. There is also a more negative side to Nordic Paganism 's popularity. Many white supremacist groups, including the Aryan Brotherhood, the Sons of Odin, and the Nazis, have used Nordic symbols and expressed Norse teachings in a skewed manner to support their belief that white men are the superior race and gender of the world. The Nazis famously used the symbol Sowilō for their SS pins. Their use of such symbols was originally tied to Guido von List. The Aryan Brotherhood and the Soldiers of Odin use the Hammer of Thor (Mjölnir) and the Odal rune, which they take to represent Odin, to mark themselves as members of the gang. This has led many prisons to ban inmates from wearing Mjölnir because of its gang affiliation. A group of churches which started in the 1830s in the United States is known under the banner of "New Thought ''. These churches share a spiritual, metaphysical and mystical predisposition and understanding of the Bible, and were strongly influenced by the Transcendentalist movement, particularly the work of Ralph Waldo Emerson. Another antecedent of this movement was Swedenborgianism, founded on the writings of Emanuel Swedenborg in 1787. 
The New Thought concept was named by Emma Curtis Hopkins ("teacher of teachers '') after Hopkins broke off from Mary Baker Eddy 's Church of Christ, Scientist. The movement had been previously known as the Mental Sciences or the Christian Sciences. The three major branches are Religious Science, Unity Church and Divine Science. Unitarian Universalists (UU 's) are among the most liberal of all religious denominations in America. The shared creed includes beliefs in inherent dignity, a common search for truth, respect for beliefs of others, compassion, and social action. They are unified by their shared search for spiritual growth and by the understanding that an individual 's theology is a result of that search and not obedience to an authoritarian requirement. UU 's have historical ties to anti-war, civil rights, and LGBT rights movements, as well as a history of providing inclusive church services for a broad spectrum of liberal Christians, liberal Jews, secular humanists, LGBT people, Jewish - Christian parents and partners, Earth - centered / Wicca practitioners, and Buddhist meditation adherents. The First Amendment guarantees both the free practice of religion and the non-establishment of religion by the federal government (later court decisions have extended that prohibition to the states). The U.S. Pledge of Allegiance was modified in 1954 to add the phrase "under God '', in order to distinguish the United States from the state atheism espoused by the Soviet Union. Various American presidents have often stated the importance of religion. On February 20, 1955, President Dwight D. Eisenhower stated that "Recognition of the Supreme Being is the first, the most basic, expression of Americanism. '' President Gerald Ford agreed with and repeated this statement in 1974. The U.S. Census does not ask about religion. Various groups have conducted surveys to determine approximate percentages of those affiliated with each religious group. Since 2008, Gallup has carried out the Gallup Daily tracking survey, an unprecedented survey of 1,000 U.S. adults each day, 350 days per year. It covers political, economic, wellbeing and demographic topics. Data are weighted to match the U.S. population according to gender, age, race, Hispanic ethnicity, education, region, population density, and phone status in order to correct for unequal selection probability. In 2016, a poll by the Public Religion Research Institute estimated that 69 % of Americans are Christians, with 45 % professing attendance at a variety of churches that could be considered Protestant, and 20 % professing Catholic beliefs. The same study says that non-Christian religions (including Judaism, Buddhism, Hinduism, and Islam) collectively make up about 7 % of the population. A 2013 survey reported that 31 % of Americans attend religious services at least weekly. It was conducted by the Public Religion Research Institute with a margin of error of 2.5. In 2006, an online Harris Poll (they stated that the magnitude of errors cannot be estimated due to sampling errors, non-response, etc.; 2,010 U.S. adults were surveyed) found that 26 % of those surveyed attended religious services "every week or more often '', 9 % went "once or twice a month '', 21 % went "a few times a year '', 3 % went "once a year '', 22 % went "less than once a year '', and 18 % never attended religious services. 
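The weighting step described above can be illustrated with a minimal sketch of post-stratification weighting, in which each respondent 's weight is the ratio of their demographic group 's known population share to that group 's share of the sample. The cells, shares, and attendance rates below are hypothetical and are not Gallup 's actual categories or procedure, which weights on many more variables.

# Minimal, hypothetical illustration of post-stratification weighting:
# weight(cell) = known population share of the cell / share of the cell in the sample.
from collections import Counter

sample = ["18-29", "18-29", "30-49", "30-49", "30-49",
          "50+", "50+", "50+", "50+", "50+"]                      # hypothetical respondents
population_share = {"18-29": 0.21, "30-49": 0.33, "50+": 0.46}    # hypothetical census shares

counts = Counter(sample)
n = len(sample)

# Compute one weight per demographic cell.
weights = {cell: population_share[cell] / (counts[cell] / n) for cell in counts}

# Weighted estimate of a survey outcome (weekly attendance), from hypothetical cell rates.
attends_weekly = {"18-29": 0.25, "30-49": 0.30, "50+": 0.45}
weighted_rate = sum(weights[c] * counts[c] * attends_weekly[c] for c in counts) / n

print(weights)        # approximately {'18-29': 1.05, '30-49': 1.10, '50+': 0.92}
print(weighted_rate)  # approximately 0.3585, versus 0.365 for the unweighted sample

In this sketch the 50+ group is overrepresented in the sample, so its weight falls below 1, while the underrepresented younger groups are weighted up; the weighted estimate therefore shifts slightly toward the younger groups ' attendance rates.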
In a 2009 Gallup International survey, 41.6 % of American citizens said that they attended a church, synagogue, or mosque once a week or almost every week. This percentage is higher than in other Western countries surveyed. Church attendance varies considerably by state and region; the figures, updated to 2014, ranged from 51 % in Utah to 17 % in Vermont. In August 2010, 67 % of Americans said religion was losing influence, compared with 59 % who said this in 2006. Majorities of white evangelical Protestants (79 %), white mainline Protestants (67 %), black Protestants (56 %), Catholics (71 %), and the religiously unaffiliated (62 %) all agreed that religion was losing influence on American life; 53 % of the total public said this was a bad thing, while just 10 % saw it as a good thing. Politicians frequently discuss their religion when campaigning, and fundamentalists and black Protestants are highly politically active. However, to keep their status as tax - exempt organizations, religious bodies must not officially endorse a candidate. Historically, Catholics were heavily Democratic before the 1970s, while mainline Protestants comprised the core of the Republican Party. Those patterns have faded away -- Catholics, for example, now split about 50 -- 50. However, white evangelicals since 1980 have made up a solidly Republican group that favors conservative candidates. Secular voters are increasingly Democratic. Only three presidential candidates for major parties have been Catholics, all for the Democratic party: Al Smith in 1928, John F. Kennedy in 1960, and John Kerry in 2004. Joe Biden is the first Catholic vice president. Joe Lieberman was the first Jewish candidate on a major party presidential ticket, as Al Gore 's running mate in the Gore -- Lieberman campaign of 2000 (although John Kerry and Barry Goldwater both had Jewish ancestry, they were practicing Christians). Bernie Sanders ran against Hillary Clinton in the Democratic primary of 2016; he was the first major Jewish candidate to compete in the presidential primary process. However, Sanders noted during the campaign that he does not actively practice any religion. In 2006 Keith Ellison of Minnesota became the first Muslim elected to Congress; when re-enacting his swearing - in for photos, he used the copy of the Qur'an once owned by Thomas Jefferson. André Carson is the second Muslim to serve in Congress. A Gallup poll released in 2007 indicated that 53 % of Americans would refuse to vote for an atheist as president, up from 48 % in 1987 and 1999. The number then began to drop, reaching a record low of 43 % in 2012 and 40 % in 2015. Mitt Romney, the Republican presidential nominee in 2012, is Mormon and a member of The Church of Jesus Christ of Latter - day Saints. He is the former governor of Massachusetts, and his father George Romney was the governor of Michigan. The Romneys were involved in Mormonism in their states and in the state of Utah. On January 3, 2013, Tulsi Gabbard became the first Hindu member of Congress, using a copy of the Bhagavad Gita while swearing - in. The Association of Religion Data Archives (ARDA) surveyed congregations for their memberships. Churches were asked for their membership numbers. Adjustments were made for those congregations that did not respond and for religious groups that reported only adult membership. ARDA estimates that most of the churches not responding were black Protestant congregations. 
Significant differences in results from other databases include the lower representation of adherents of (1) all kinds (62.7 %), (2) Christians (59.9 %), and (3) Protestants (less than 36 %), and the greater share of unaffiliated people (37.3 %). The United States government does not collect religious data in its census. The American Religious Identification Survey (ARIS) of 2008 was a random digit - dialed telephone survey of 54,461 American residential households in the contiguous United States. The 1990 sample size was 113,723; the 2001 sample size was 50,281. Adult respondents were asked the open - ended question, "What is your religion, if any? '' Interviewers did not prompt or offer a suggested list of potential answers. The religion of the spouse or partner was also asked. If the initial answer was "Protestant '' or "Christian '' further questions were asked to probe which particular denomination. About one third of the sample was asked more detailed demographic questions. The resulting figures on religious self - identification of the U.S. adult population in 1990, 2001, and 2008 are not adjusted for refusals to reply; investigators suspect refusals are possibly more representative of "no religion '' than any other group. A Pew Forum survey from 2014 examined religious affiliation among ethnic groups in the United States. People of Black ethnicity were most likely to be part of a formal religion, with 85 % being Christians. Protestant denominations make up the majority of Christians across the ethnic groups.
the bird girl statue in savannah ga location
Bird Girl - wikipedia Coordinates: 32 ° 04 ′ 41 '' N 81 ° 05 ′ 42 '' W (32.078 ° N, 81.095 ° W) Bird Girl is a sculpture made in 1936 by Sylvia Shaw Judson in Lake Forest, Illinois. It was sculpted at Ragdale, her family 's summer home, and achieved fame when it was featured on the cover of the non-fiction novel Midnight in the Garden of Good and Evil (1994). Bird Girl is cast in bronze and stands 50 inches (130 cm) tall. She is the image of a young girl wearing a simple dress and a sad or contemplative expression, with her head tilted toward her left shoulder. She stands straight, her elbows propped against her waist as she holds up two bowls out from her sides. The bowls are often described by viewers as "bird feeders ''. The sculpture was commissioned as a garden sculpture for a family in Massachusetts. A slight, eight - year - old model named Lorraine Greenman (now Lorraine Ganz) posed for the piece. Only four statues were made from the original plaster cast. The first went to the Massachusetts garden. The second was sent to Washington, D.C. and is now located in Reading, Pennsylvania. The third was purchased by a family in Lake Forest and has never relocated. The fourth and most famous statue was bought by a family in Savannah, Georgia, who named it Little Wendy and set it up at the family 's plot in Bonaventure Cemetery. It has since been relocated to Telfair Museums ' Jepson Center for the Arts, where it is on display for museum visitors. Judson donated the original plaster model to the Crow Island School in Winnetka, Illinois. The Bonaventure Cemetery statue sat virtually unnoticed until 1993, when Random House hired Savannah photographer Jack Leigh to shoot an image for the cover of John Berendt 's new book, Midnight in the Garden of Good and Evil (1994). At Berendt 's suggestion, Leigh searched the Bonaventure Cemetery for a suitable subject. He found the sculpture next to a grave on the Trosdal family plot, at the end of his second day of searching, and had to make the shot quickly as dusk approached. He reportedly spent ten hours in the darkroom adjusting the lighting, giving the photo a moonlit feel and accentuating the halo around the statue 's head. The cover image was an immediate hit, and Berendt called it "one of the strongest book covers I 've ever seen ''. The book, published in 1994, became an all - time bestseller, and soon people began flocking to Bonaventure Cemetery to see the sculpture. Due to concern about the amount of traffic at the grave site, the Trosdal family had it removed from the cemetery and later lent it to the Telfair Museums in Savannah, for public display in their Telfair Academy building. In December 2014, the statue was moved from the Telfair Academy to the Telfair Museums ' nearby Jepson Center for the Arts, where it is currently on public display. In 1995, Judson 's daughter Alice Judson Hayes (aka Alice Ryerson Hayes) had a fifth bronze statue created from a mold. That statue was given to Ragdale, an artists ' retreat in Lake Forest. Later, an authorized fiberglass replica was made from the original plaster model for use by Macy 's in their display windows; it was later moved to a museum in Savannah. Hayes holds the copyright for the Bird Girl and has actively defended it by filing lawsuits against unauthorized reproductions, especially full - sized replicas. She destroyed the mold that was used to cast the 1995 replica, although the original plaster model still exists. 
Hayes has licensed smaller - scale replicas, which have sold well. She died on October 13, 2006, passing on the copyright to her daughter, painter Francie Shaw. Warner Bros. produced an eponymous film adaptation of John Berendt 's book in 1997, directed by Clint Eastwood and featuring Kevin Spacey and John Cusack. After purchasing the rights to use the sculpture 's likeness from Hayes, the studio created a fiberglass replica. The movie incorporated shots of the Bird Girl sculpture on its posters and in the film itself. After the film was completed the replica was sent to the Cliff Dwellers Club in Chicago, Illinois. Photographer Leigh sued Warner Bros. in November 1997 for copyright infringement over their shots of the Bird Girl replica in the cemetery, which were similar to Leigh 's original cover photograph. The lower court ruled that the movie 's sequences with the statue were not infringement, but an appeals court found that the photographs used for promotional purposes, such as posters, bore significant similarities and remanded the matter back to the lower court. Warner Bros. and Leigh then settled out of court for an undisclosed amount. Sylvia Shaw Judson died in 1978. Although she did not see her Bird Girl sculpture achieve fame, she was already a renowned sculptor whose pieces have been on display in such prestigious locations as the White House, the Massachusetts State House, the Philadelphia Museum of Art, and the Whitney Museum of American Art in New York City. Jack Leigh died of colon cancer on May 19, 2004, and is buried in Bonaventure Cemetery, where he took his most famous photograph.
whose life is the movie a star is born about
A Star Is Born (2018 film) - wikipedia A Star Is Born is a 2018 American music - themed romantic drama film produced and directed by Bradley Cooper (in his directorial debut) and written by Cooper, Eric Roth and Will Fetters. A remake of the 1937 film of the same name, it stars Cooper, Lady Gaga, Andrew Dice Clay, Dave Chappelle, and Sam Elliott, and follows a hard - drinking musician (Cooper) who discovers and falls in love with a young singer (Gaga). It marks the fourth remake of the original 1937 film, after the 1954 musical, the 1976 rock musical and the 2013 Bollywood romance film. Talks of a remake of A Star Is Born began in 2011, with Clint Eastwood attached to direct and Beyoncé set to star. The film was in development hell for several years with various actors approached to co-star, including Christian Bale, Leonardo DiCaprio, Will Smith, and Tom Cruise. In March 2016, Cooper signed on to star and direct, and Lady Gaga joined the cast in August 2016. Principal photography began at the Coachella Valley Music and Arts Festival in April 2017. A Star Is Born premiered at the 75th Venice International Film Festival on August 31, 2018, and was theatrically released in the United States on October 5, 2018, by Warner Bros. The film has grossed over $254 million worldwide and received critical acclaim, with praise for Cooper, Gaga and Elliott 's performances and Cooper 's direction, as well as the screenplay, cinematography and music. Jackson Maine, a famous country music singer privately battling an alcohol and drug addiction, plays a concert in California. His main support is Bobby, his manager and significantly older half - brother. After the show, Jackson happens upon a drag bar where he witnesses a performance by Ally, a waitress and singer - songwriter. Jackson is amazed by her talent; they spend the night speaking to each other, where Ally discloses to him the troubles she has faced in pursuing a professional music career. Jackson invites Ally to his next show. Despite her initial refusal she attends and, with Jackson 's encouragement, sings on stage with him. Jackson invites Ally to go on tour with him, and they form a romantic relationship. In Arizona, Ally and Jackson visit the ranch where Jackson grew up and where his father is buried, only to discover that Bobby sold the land. Angered at his betrayal, Jackson punches Bobby, who subsequently quits as his manager. Before doing so, Bobby reveals that he did inform Jackson about the sale, but the latter was too inebriated to notice. While on tour Ally meets Rez, a record producer who offers her a contract. Although visibly bothered, Jackson still supports her decision. Rez refocuses Ally away from country music and towards pop. Jackson misses one of Ally 's performances after he passes out drunk in public; he recovers at the home of his friend Noodles, and later makes up with Ally. There he proposes to Ally with a ring made from a loop of guitar string, and they are married that same day. During Ally 's performance on Saturday Night Live, Bobby reconciles with Jackson. Later, Jackson and Ally fight after he drunkenly voices his disapproval of Ally 's new image and music, which is nominated for Grammy Awards. At the Grammys, a visibly intoxicated Jackson performs in a tribute to Roy Orbison, and Ally wins the Best New Artist award. When she goes up on stage to receive her award, Jackson follows her, where he wets himself and passes out. Ally 's father Lorenzo berates Jackson and Ally helps him sober up. 
Jackson joins a drug rehabilitation program shortly thereafter. Jackson recovers in rehab, where he discloses to his counselor that he attempted suicide as a teenager. He tearfully apologizes to Ally for his behavior, and returns home. Ally wishes to bring Jackson to perform with her on the European leg of her tour; Rez refuses, prompting Ally to cancel the remainder of the tour so she can care for Jackson. Rez later confronts Jackson, informing him of Ally 's decision to cancel her tour and accusing him of holding Ally back. That evening, Ally lies to Jackson, and tells him that her record label has cancelled her tour so she can focus on her second album. Jackson promises that he will come to her concert that night, but after Ally leaves, he hangs himself in their garage. Ally becomes inconsolable after Jackson 's death. She is visited by Bobby, who explains Jackson 's death was his own fault and not hers. Ally takes a song that Jackson had written but never performed and sings it at a tribute concert, introducing herself as Ally Maine. Additionally, Rebecca Field appears as Gail, Shangela Laquifa Wadley as the drag bar emcee, Willam Belli as drag queen Emerald, Greg Grunberg as Jackson 's driver Phil, and Ron Rifkin as Carl. Lukas Nelson & Promise of the Real appear as Jackson 's band, Eddie Griffin appears as a local preacher and Luenell appears as a cashier. Marlon Williams, Brandi Carlile, Halsey, Alec Baldwin, and Don Roy King cameo as themselves. In January 2011, it was announced that Clint Eastwood was in talks to direct Beyoncé in a third American remake of the 1937 film A Star Is Born; however, the project was delayed due to Beyoncé 's pregnancy. In April 2012, writer Will Fetters told Collider that the script was inspired by Kurt Cobain. Talks with Christian Bale, Leonardo DiCaprio, Tom Cruise, Johnny Depp, and Will Smith to play the male lead failed to come to fruition. On October 9, 2012, Beyoncé left the project, and it was reported that Bradley Cooper was in talks to star. Eastwood was interested in Esperanza Spalding to play the female lead. On March 24, 2015, Warner Bros. announced that Cooper was in final talks to make his directorial debut with the film, and potentially to star with Beyoncé, who was again in talks to join; Cooper did become the male lead. On August 16, 2016, it was reported that Lady Gaga had officially become attached and the studio had green - lit the project to begin production early 2017. It marks the fourth remake of the original 1937 film, after the 1954 musical, the 1976 rock musical and the 2013 Bollywood romance film. On November 9, 2016, it was reported that Ray Liotta was in talks to join the film in the role of the manager to Cooper 's character, though he ultimately was not involved. On March 17, 2017, Sam Elliott joined the film, with Andrew Dice Clay entering negotiations to play Lorenzo, the father of Lady Gaga 's character. Clay was selected over Robert De Niro, John Turturro and John Travolta. In April 2017, Rafi Gavron, Michael Harney, and Rebecca Field also joined the cast. Filming began on April 17, 2017. In May, Dave Chappelle was cast in the film. In April 2018, it was announced that Halsey would have a small role. After seeing him perform at Desert Trip festival, Cooper approached Lukas Nelson (son of country music singer Willie Nelson) and asked him to help work on the film. Nelson agreed and wrote several songs, which he sent to the producers. 
Nelson subsequently met Lady Gaga and began writing songs with her and she, in turn, provided backing vocals on two tracks on his self - titled 2017 album. The soundtrack, performed by Gaga and Cooper, was released on October 5, 2018, by Interscope Records. The studio announced that the album "features 19 songs in a wide range of musical styles + 15 dialogue tracks that will take you on a journey that mirrors the experience of seeing the film. '' A Star Is Born had its world premiere at the Venice Film Festival on August 31, 2018. It also screened at the Toronto International Film Festival, the San Sebastián International Film Festival, and the Zurich Film Festival in September 2018. The film was theatrically released in the United States on October 5, 2018, distributed by Warner Bros. Pictures, after initially having been set for September 28, 2018, and May 18, 2018, releases. As of October 28, 2018, A Star Is Born has grossed $148.6 million in the United States and Canada, and $106 million in other territories, for a total worldwide gross of $254.6 million, against a production budget of $36 -- 40 million. In the United States and Canada, A Star Is Born grossed $1.35 million from select Tuesday and Wednesday night screenings, and $15.8 million on its first day, including $3.2 million from Thursday night previews. It went on to debut to $42.9 million for the weekend and finished second at the box office, behind fellow newcomer Venom. The film remained in second place in its second, third and fourth weekends, grossing a respective $28 million, $19.3 million and $14.1 million. Outside North America, the film was released day - and - date with the U.S. in 31 other countries, and made $14.2 million in its opening weekend; its largest markets were the United Kingdom ($5.3 million), France ($2.1 million) and Germany ($1.9 million). On review aggregator Rotten Tomatoes, A Star Is Born holds an approval rating of 90 % based on 394 reviews, with an average rating of 8.1 / 10. The website 's critical consensus reads, "With appealing leads, deft direction, and an affecting love story, A Star Is Born is a remake done right -- and a reminder that some stories can be just as effective in the retelling. '' On Metacritic, the film has a weighted average score of 88 out of 100, based on 60 critics, indicating "universal acclaim ''. Audiences polled by CinemaScore gave the film an average grade of "A '' on an A+ to F scale, while PostTrak reported film - goers gave it a 90 % positive score. Alonso Duralde of TheWrap gave the film a positive review, saying, "Cooper and Lady Gaga are dynamite together; this is a story that lives and dies by the central relationship and the instant chemistry that must blossom between them, and these two have it in spades, '' and praised the musical numbers, describing them as "electrifying ''. Owen Gleiberman of Variety lauded Cooper 's directing, co-writing, and acting, and called the film "a transcendent Hollywood movie ''. Leah Greenblatt of Entertainment Weekly gave the film a B+, singling out Gaga 's performance, saying, "she deserves praise for her restrained, human - scale performance as a singer whose real - girl vulnerability feels miles away from the glittery meat - dress delirium of her own stage persona. '' Stephanie Zacharek of Time magazine found the film superior to its previous iterations and similarly praised Cooper 's direction, the writing, as well the performances and chemistry of Cooper and Gaga. 
She stated: "You come away feeling something for these people, flawed individuals who are trying to hold their cracked pieces of self together -- or to mend the cracks of those they love, '' also describing Gaga 's performance as a "knockout. '' In his review for the Los Angeles Times, Justin Chang called the film "remarkable, '' and praised Cooper for his fresh take on the well - worn formula of the 1937 film, as well as his direction, the performances, the writing, and the cinematography. Peter Travers of Rolling Stone gave the film 4.5 out of five stars and deemed it a "modern classic, '' hailing the performances of Cooper and Gaga, and Cooper 's direction. He found the film 's screenplay and the original songs "seamless '' and "terrific, '' and also called the film a major Oscar contender and one of the year 's best films. The Washington Post 's Ann Hornaday described the film as "lavishly delightful '' and "earthly convincing, '' and added that it "offers a suitably jaundiced glimpse of starmaking machinery at its most cynical, but also its most thrilling and gratifying. '' She similarly praised Cooper 's direction, the performances and chemistry of Cooper and Gaga, and the supporting performances, particularly those of Andrew Dice Clay and Sam Elliott. While praising the direction, acting, and writing, Michael Phillips of the Chicago Tribune argued that A Star Is Born 's formula has always been very seductive to audiences, even when it has been written poorly, and that Cooper 's few missteps include being a bit of a scene hog. Admitting that audiences love the film while he merely liked it, Phillips drew attention to a skeptical review by Lindsey Romaine of Medium.com, who criticized the story 's marginalization of the Gaga character in its handling of Cooper 's manipulative addict; Romaine wanted at least one scene in which Gaga 's character processes her own pattern of letting the addicted boyfriend get away with his behavior. Phillips argued that the skillful musicianship is part of what gets audiences to look past such flaws.
who plays james rhodes in iron man 2
Iron Man 2 - Wikipedia Iron Man 2 is a 2010 American superhero film based on the Marvel Comics character Iron Man, produced by Marvel Studios and distributed by Paramount Pictures. It is the sequel to 2008 's Iron Man, and is the third film in the Marvel Cinematic Universe. Directed by Jon Favreau and written by Justin Theroux, the film stars Robert Downey Jr., Gwyneth Paltrow, Don Cheadle, Scarlett Johansson, Sam Rockwell, Mickey Rourke, and Samuel L. Jackson. Six months after the events of Iron Man, Tony Stark is resisting calls by the United States government to hand over the Iron Man technology while also combating his declining health from the arc reactor in his chest. Meanwhile, rogue Russian scientist Ivan Vanko has developed the same technology and built weapons of his own in order to pursue a vendetta against the Stark family, in the process joining forces with Stark 's business rival, Justin Hammer. Following the successful release of Iron Man in May 2008, Marvel Studios announced and immediately set to work on producing a sequel. In July of that same year Theroux was hired to write the script, and Favreau was signed to return and direct. Downey, Paltrow and Jackson were set to reprise their roles from Iron Man, while Cheadle was brought in to replace Terrence Howard in the role of James Rhodes. In the early months of 2009, Rourke, Rockwell and Johansson filled out the supporting cast, and filming took place from April to July of that year. Like its predecessor the film was shot mostly in California, except for a key sequence in Monaco. Iron Man 2 premiered at the El Capitan Theatre on April 26, 2010, and was released internationally between April 28 and May 7 before releasing in the U.S. on May 7, 2010. The film received generally positive reviews and was commercially successful, grossing over $623.9 million at the worldwide box office. The DVD and Blu - ray were released on September 28, 2010. The third installment of the Iron Man series, Iron Man 3, was released on May 3, 2013. In Russia, the media covers Tony Stark 's disclosure of his identity as Iron Man. Ivan Vanko, whose father Anton Vanko has just died, sees this and begins building a miniature arc reactor similar to Stark 's. Six months later, Stark is a superstar and uses his Iron Man suit for peaceful means, resisting government pressure to sell his designs. He reinstitutes the Stark Expo to continue his father Howard 's legacy. The palladium core in the arc reactor that keeps Stark alive and powers the armor is slowly poisoning him, and he can not find a substitute. Growing increasingly reckless and despondent about his impending death, and choosing not to tell anyone about his condition, Stark appoints his personal assistant Pepper Potts CEO of Stark Industries, and hires Stark employee Natalie Rushman to replace her as his personal assistant. Stark competes in the Monaco Historic Grand Prix, where he is attacked in the middle of the race by Vanko, who wields electrified whips. Stark dons his Mark V armor and defeats Vanko, but the suit is severely damaged. Vanko explains his intention was to prove to the world that Iron Man is not invincible. Impressed by Vanko 's performance, Stark 's rival, Justin Hammer, fakes Vanko 's death while breaking him out of prison and asks him to build a line of armored suits to upstage Stark. During what he believes is his final birthday party, Stark gets drunk while wearing the Mark IV suit. Disgusted, U.S. 
Air Force Lieutenant Colonel James Rhodes dons Stark 's Mark II prototype armor and tries to restrain him. The fight ends in a stalemate, so Rhodes confiscates the Mark II for the U.S. Air Force. Nick Fury, director of S.H.I.E.L.D., approaches Stark, revealing "Rushman '' to be Agent Natasha Romanoff and that Howard Stark was a S.H.I.E.L.D. founder whom Fury knew personally. Fury explains that Vanko 's father jointly invented the arc reactor with Stark, but when Anton tried to sell it for profit, Stark had him deported. The Soviets sent Anton to the gulag. Fury gives Stark some of his father 's old material; a hidden message in the diorama of the 1974 Stark Expo proves to be a diagram of the structure of a new element. With the aid of his computer J.A.R.V.I.S., Stark synthesizes it. When he learns Vanko is still alive, he places the new element in his arc reactor and ends his palladium dependency. At the Expo, Hammer unveils Vanko 's armored drones, led by Rhodes in a heavily weaponized version of the Mark II armor. Stark arrives in the Mark VI armor to warn Rhodes, but Vanko remotely takes control of both the drones and Rhodes ' armor and attacks Iron Man. Hammer is arrested while Romanoff and Stark 's bodyguard Happy Hogan go after Vanko at Hammer 's factory. Vanko escapes, but Romanoff returns control of the Mark II armor to Rhodes. Stark and Rhodes together defeat Vanko and his drones. Vanko seemingly commits suicide by blowing up his suit. At a debriefing, Fury informs Stark that because of Stark 's difficult personality, S.H.I.E.L.D. intends to use him only as a consultant. Stark and Rhodes receive medals for their heroism. In a post-credits scene, S.H.I.E.L.D. agent Phil Coulson reports the discovery of a large hammer at the bottom of a crater in a desert in New Mexico. The director, Jon Favreau, reprises his role as Happy Hogan, Tony Stark 's bodyguard and chauffeur, while Clark Gregg and Leslie Bibb reprise their roles as S.H.I.E.L.D. Agent Phil Coulson and reporter Christine Everhart, respectively. John Slattery appears as Tony 's father Howard Stark and Garry Shandling appears as United States Senator Stern, who wants Stark to give Iron Man 's armor to the government. Favreau stated that Shandling 's character was named after radio personality Howard Stern. Paul Bettany again voices Stark 's computer, J.A.R.V.I.S. Olivia Munn has a small role as Chess Roberts, a reporter covering the Stark expo, Kate Mara portrays a U.S. Marshal who summons Tony to the government hearing, and Stan Lee appears as himself (but is mistaken for Larry King). Additionally, news anchor Christiane Amanpour and political commentator Bill O'Reilly play themselves in newscasts. Adam Goldstein appears as himself and the film is dedicated to his memory. Further cameos include Tesla Motors CEO Elon Musk and Oracle Corporation CEO Larry Ellison. Favreau 's son Max appears as a child wearing an Iron Man mask who Stark saves from a drone. This was retroactively made the introduction of a young Peter Parker to the MCU, as confirmed in June 2017 by eventual Spider - Man actor Tom Holland, Feige and Spider - Man: Homecoming director Jon Watts. Jon Favreau said it was originally his intent to create a film trilogy for Iron Man, with Obadiah Stane (Jeff Bridges) becoming Iron Monger during the sequels. After a meeting between Favreau and various comic book writers, including Mark Millar, Stane became the main villain in Iron Man. 
Millar argued the Mandarin, whom Favreau originally intended to fill that role, was too fantastical. Favreau concurred, deciding, "I look at Mandarin more like how in Star Wars you had the Emperor, but Darth Vader is the guy you want to see fight. Then you work your way to the time when lightning bolts are shooting out of the fingers and all that stuff could happen. But you ca n't have what happened in Return of the Jedi happen in A New Hope. You just ca n't do it. '' Favreau also discussed in interviews how the films ' version of Mandarin "allows us to incorporate the whole pantheon of villains ''. He mentioned that S.H.I.E.L.D. will continue to have a major role. During development, Favreau said the film would explore Stark 's alcoholism, but it would not be "the ' Demon in a Bottle ' version ''. While promoting the first film, Downey stated that Stark would probably develop a drinking problem as he is unable to cope with his age, the effects of revealing he is Iron Man, and Pepper getting a boyfriend. Downey later clarified that the film was not a strict adaptation of the "Demon in a Bottle '' storyline from the comic book series, but was instead about the "interim space '' between the origin and the "Demon '' story arc. Shane Black gave some advice on the script, and suggested to Favreau and Downey that they model Stark on J. Robert Oppenheimer, who became depressed with being "the destroyer of worlds '' after working on the Manhattan Project. Immediately following Iron Man 's release, Marvel Studios announced that they were developing a sequel, with an intended release date of April 30, 2010. In July 2008, after several months of negotiating, Favreau officially signed on to direct. That same month Justin Theroux signed to write the script, which would be based on a story written by Favreau and Downey. Theroux co-wrote Tropic Thunder, which Downey had starred in, and Downey recommended him to Marvel. Genndy Tartakovsky storyboarded the film, and Adi Granov returned to supervise the designs for Iron Man 's armor. In October 2008, Marvel Studios came to an agreement to film Iron Man 2, as well as their next three films, at Raleigh Studios in Manhattan Beach, California. A few days later, Don Cheadle was hired to replace Terrence Howard. On being replaced, Howard stated, "There was no explanation, apparently the contracts that we write and sign are n't worth the paper that they 're printed on sometimes. Promises are n't kept, and good faith negotiations are n't always held up. '' Entertainment Weekly stated Favreau did not enjoy working with Howard, often re-shooting and cutting his scenes; Howard 's publicist said he had a good experience playing the part, while Marvel chose not to comment. As Favreau and Theroux chose to reduce the role, Marvel came to Howard to discuss lowering his salary -- Howard was the first actor hired in Iron Man and was paid the largest salary. The publication stated they were unsure whether Howard 's representatives left the project first or if Marvel chose to stop negotiating. Theroux denied the part of the report which claimed the size of the role had fluctuated. In November 2013, Howard stated that, going into the film, the studio offered him far less than was in his three - picture contract, claiming they told him the second will be successful, "with or without you, '' and, without mentioning him by name, said Downey "took the money that was supposed to go to me and pushed me out. 
'' In January 2009, Rourke and Rockwell entered negotiations to play a pair of villains. A few days later, Rockwell confirmed he would take the role, and that his character would be Justin Hammer. Paul Bettany confirmed that he would be returning to voice J.A.R.V.I.S. Marvel entered into early talks with Emily Blunt to play the Black Widow, though she was unable to take the role due to a previous commitment to star in Gulliver 's Travels. Samuel L. Jackson confirmed that he had been in discussions to reprise the role of Nick Fury from the first film 's post-credits scene, but that contract disputes were making a deal difficult. Jackson claimed that "There was a huge kind of negotiation that broke down. I do n't know. Maybe I wo n't be Nick Fury. '' In February, Jackson and Marvel came to terms, and he was signed to play the character in up to nine films. Downey and Rourke discussed his part during a roundtable discussion with David Ansen at the 2009 Golden Globes, and Rourke met with Favreau and Theroux to discuss the role. Rourke almost dropped out due to Marvel 's initial salary offer of $250,000, but the studio raised the offer, and in March Rourke signed on. Later that same day Scarlett Johansson signed on to play the Black Widow. Her deal included options for multiple films, including potentially The Avengers. In April, Garry Shandling, Clark Gregg, and Kate Mara joined the cast. Principal photography began April 6, 2009, at the Pasadena Masonic Temple, with the working title Rasputin. The bulk of the production took place at Raleigh Studios, though other locations were also used. Scenes were filmed at Edwards Air Force Base from May 11 through May 13. The location had also been used for Iron Man, and Favreau stated that he felt the "real military assets make the movie more authentic and the topography and the beauty of the desert and flightline open the movie up ''. The Historic Grand Prix of Monaco action sequence was shot in the parking lot of Downey Studios, with sets constructed in May and filming lasting through June. Permission to film in Monaco prior to the 2009 Monaco Grand Prix had initially been awarded, but was later retracted by Bernie Ecclestone. The filmmakers shipped one Rolls - Royce Phantom there, and filmed a track sequence in which race cars were later digitally added. Tanner Foust took on the role of driving Stark 's racing car. Also in June, it was reported that John Slattery had joined the film 's cast as Howard Stark. Olivia Munn was also cast, in an unspecified role. A massive green screen was constructed at the Sepulveda Dam to film a portion of the Stark Expo exterior, with the rest either shot at an area high school or added digitally. To construct the green screen, hundreds of shipping containers were stacked, covered in plywood and plaster, and then painted green. For the conclusion of that climactic scene, which the crew dubbed the "Japanese Garden '' scene, a set was built inside Sony Studios in Los Angeles. Filming lasted 71 days, and the film 's production officially wrapped on July 18, 2009. A post-credits scene depicting the discovery of a large hammer was filmed on the set of Thor, and some of it was reused in the film. Jon Favreau revealed that the scene was filmed with anamorphic lenses to match Thor, and was directed by Kenneth Branagh, the director of Thor. In January 2010, IMAX Corporation, Marvel, and Paramount announced that the film would receive a limited release on digital IMAX screens. 
It was not shot with IMAX cameras, so it was converted into the format using the IMAX DMR technology. The film underwent reshoots in February. Olivia Munn 's original role was cut, but she was given a new role during the reshoots. Janek Sirrs was the film 's visual effects supervisor, and Industrial Light & Magic again did the majority of the effects, as it did on the first film. ILM 's visual effects supervisor on the film, Ben Snow, said their work on the film was "harder '' than their work on the first, stating that Favreau asked more of them this time around. Snow described the process of digitally creating the suits: On the first Iron Man, we tried to use the Legacy (Studios, Stan Winston 's effects company) and Stan Winston suits as much as we could. For the second one, Jon (Favreau) was confident we could create the CG suits, and the action dictated using them. So, Legacy created what we called the "football suits '' from the torso up with a chest plate and helmet. We 'd usually put in some arm pieces, but not the whole arm. In the house fight sequence, where Robert Downey Jr. staggers around tipsy, we used some of the practical suit and extended it digitally. Same thing in the Randy 's Donuts scene. But in the rest of the film, we used the CG suit entirely. And Double Negative did an all - digital suit for the Monaco chase. ILM created 527 shots for the film, using programs such as Maya. Perception worked on over 125 shots for the film. They crafted gadgets, such as Tony Stark 's transparent LG smartphone, and created the backdrops for the Stark Expo as well as the computer screen interfaces on the touch - screen coffee table and the holographic lab environment. In total, 11 visual effect studios worked on the film. A soundtrack album featuring AC / DC was released by Columbia Records on April 19, 2010, in at least three different versions: basic, special and deluxe. The basic edition includes the CD; the special edition contains a 15 - track CD, a 32 - page booklet and a DVD featuring interviews, behind - the - scenes footage, and music videos; and the deluxe includes a reproduction of one of Iron Man 's first comic book appearances. Only 2 songs on the soundtrack actually appear in the movie. Although not included on the soundtrack album the film includes songs by The Average White Band, The Clash, Queen, Daft Punk, 2Pac and Beastie Boys. The film score was released commercially as Iron Man 2: Original Motion Picture Score on July 20, 2010, featuring 25 tracks. John Debney composed the score with Tom Morello. Iron Man 2 premiered at the El Capitan Theatre in Los Angeles, California on April 26, 2010, and was released in 6,764 theaters (48 IMAX) across 54 countries between April 28 and May 7, before going into general release in the United States on May 7, 2010. In the United States, it opened at 4,380 theaters, 181 of which were IMAX. The international release date of the film was moved forward to increase interest ahead of the 2010 FIFA World Cup association football tournament. At the 2009 San Diego Comic Con, a five - minute trailer for the movie was shown. Actors portraying Stark Industries recruiters handed out business cards with an invitation to apply. A website for Stark Industries went online, with an attached graphic of a "napkin memo '' from Stark to Potts announcing that Stark Industries no longer made weapons. Another section featured an online application. 
It was confirmed that the first theatrical trailer would premiere in front of Sherlock Holmes (another Robert Downey, Jr. film). This trailer was released online on December 16, 2009. A new trailer was shown by Robert Downey, Jr. on Jimmy Kimmel Live! on March 7, 2010, after the Academy Awards. Promotional partners included Symantec, Dr Pepper, Burger King, 7 - Eleven, Audi, LG Electronics and Hershey. Author Alexander C. Irvine adapted the script into a novel, also titled Iron Man 2, which was released in April 2010. Prior to the film 's release, Marvel Comics released a four - issue comic book miniseries titled Iron Man vs. Whiplash, which introduced the film 's version of Whiplash into the Marvel Universe. A three - issue prequel miniseries titled Iron Man 2: Public Identity was released in April. An Iron Man 2 video game, written by The Invincible Iron Man scribe Matt Fraction, was released by Sega in North America on May 4, 2010. The Wii version was developed by High Voltage Software and all console versions were published by Sega, while Gameloft published the mobile game. The game 's Comic - Con trailer showed that the Crimson Dynamo was set to appear as a villain. Cheadle and Jackson voice their respective characters in the games. The trailer also revealed that A.I.M., Roxxon Energy Corporation, and Ultimo (depicted as a man named Kearson DeWitt in a large suit of armor) are enemies in the game, and that the wearer of the Crimson Dynamo armor is General Valentin Shatalov. The game received generally unfavorable reviews, with a Metacritic score of 41 out of 100 for both the PS3 and Xbox 360 versions. On September 28, 2010, the film was released on DVD and Blu - ray Disc. The film was also collected in a 10 - disc box set titled "Marvel Cinematic Universe: Phase One -- Avengers Assembled '', which includes all of the Phase One films in the Marvel Cinematic Universe; it was released on April 2, 2013. Iron Man 2 earned $312.4 million in the United States and Canada, as well as $311.5 million in other territories, for a worldwide total of $623.9 million. Since the film was included in a predetermined legacy distribution deal that was signed before the Walt Disney Company purchased Marvel, Paramount Pictures distributed the film and collected 8 % of the box office, while the remaining portion went to Disney. Iron Man 2 earned $51 million on its opening day in the United States and Canada (including $7.5 million from Thursday previews), for a total weekend gross of $128 million, which was, at the time, the fifth - highest opening weekend ever, behind The Dark Knight, Spider - Man 3, The Twilight Saga: New Moon and Pirates of the Caribbean: Dead Man 's Chest. It was also the highest opening for a 2010 film. The film yielded an average of $29,252 per theater. IMAX contributed $9.8 million, which was the highest opening weekend for a 2D IMAX film, surpassing Star Trek 's previous record of $8.5 million. Iron Man 2 was the third - highest - grossing film of 2010 in the United States and Canada, behind Toy Story 3 and Alice in Wonderland. Iron Man 2 launched in six European markets with number - one openings on Wednesday, April 28, 2010, for a total of $2.2 million. It earned $100.2 million in its first five days from 53 foreign markets, a strong average of $14,814 per site. IMAX Corporation reported grosses of $2.25 million, surpassing the previous record - holder for an IMAX 2D release, 2009 's Transformers: Revenge of the Fallen ($2.1 million).
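As a quick check of the box office figures quoted above, the domestic and international grosses do sum to the stated worldwide total, and the per - theater average follows from the opening weekend gross and the theater count given earlier; a minimal sketch in Python, using only figures taken from the text (with the weekend gross rounded to $128 million):

    us_canada = 312.4e6          # domestic gross quoted above
    international = 311.5e6      # international gross quoted above
    print(f"worldwide total: ${(us_canada + international) / 1e6:.1f} million")   # 623.9

    opening_weekend = 128e6      # rounded opening weekend gross quoted above
    theaters = 4380              # US theater count quoted earlier
    print(f"per-theater average: ${opening_weekend / theaters:,.0f}")
    # roughly $29,224 with the rounded gross; the quoted $29,252 average implies
    # an unrounded weekend gross of about $128.1 million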
It was the seventh - highest - grossing film of 2010 internationally, behind Toy Story 3, Alice in Wonderland, Harry Potter and the Deathly Hallows -- Part 1, Inception, Shrek Forever After, and The Twilight Saga: Eclipse. The review aggregator website Rotten Tomatoes reported a 73 % approval rating, with an average rating of 6.5 / 10 based on 278 reviews. The website 's consensus reads, "It is n't quite the breath of fresh air that Iron Man was, but this sequel comes close with solid performances and an action - packed plot. '' Metacritic gave the film a weighted average score of 57 out of 100, based on 40 reviews. Brian Lowry of Variety stated, "Iron Man 2 is n't as much fun as its predecessor, but by the time the smoke clears, it 'll do ''. Anthony Lane of The New Yorker said, "To find a comic - book hero who does n't agonize over his supergifts, and would defend his constitutional right to get a kick out of them, is frankly a relief ''. David Edelstein of New York Magazine wrote, "It does n't come close to the emotional heft of those two rare 2s that outclassed their ones: Superman II and Spider - Man 2. But Iron Man 2 hums along quite nicely ''. Roger Ebert gave it 3 stars out of 4, stating that "Iron Man 2 is a polished, high - octane sequel, not as good as the original but building once again on a quirky performance by Robert Downey Jr ''. Frank Lovece of Film Journal International, a one - time Marvel Comics writer, said that "In a refreshing and unexpected turn, the sequel to Iron Man does n't find a changed man. Inside the metal, imperfect humanity grows even more so, as thought - provoking questions of identity meet techno - fantasy made flesh. '' Conversely, Kirk Honeycutt of The Hollywood Reporter stated, "Everything fun and terrific about Iron Man, a mere two years ago, has vanished with its sequel. In its place, Iron Man 2 has substituted noise, confusion, multiple villains, irrelevant stunts and misguided story lines. '' After the release of Iron Man 2, The Walt Disney Studios agreed to pay Paramount at least $115 million for the worldwide distribution rights to Iron Man 3 and The Avengers. Disney, Marvel and Paramount announced a May 3, 2013 release date for Iron Man 3. Shane Black directed Iron Man 3 from a screenplay he co-wrote with Drew Pearce. Downey, Paltrow, Cheadle, and Favreau reprised their roles, while Ben Kingsley played Trevor Slattery, Guy Pearce played Aldrich Killian, and Rebecca Hall played Maya Hansen.
national institute of technology mizoram administrative block aizawl mizoram
National Institute of Technology Mizoram - Wikipedia National Institute of Technology Mizoram, also known as NIT Mizoram or NITMZ, is one of the 31 National Institutes of Technology in India. It is situated in Aizawl, Mizoram. NIT Mizoram was one of the ten new NITs established by the Ministry of Human Resource Development, Government of India, by its orders of 30 October 2009 and 3 March 2010 (order no. F. 23 - 13 - 2009 - TS - III). The institute accordingly began operating in 2010 in the state of Mizoram, with the objective of imparting education, research and training leading to B. Tech, M. Tech, M.Sc. and PhD degrees. It has been declared an Institute of National Importance by an Act of Parliament. Students are admitted through the all - India entrance examination, the Joint Entrance Examination (JEE Main). The B. Tech programmes in Computer Science & Engineering, Electronics & Communication Engineering and Electrical & Electronics Engineering were started in 2010 - 11, and the Departments of Mechanical Engineering and Civil Engineering were added in 2013 - 14. Classes began in 2010 at Visvesvaraya National Institute of Technology, Nagpur, and shifted to Aizawl in 2011; they currently operate from Dawrkawn, in the Chaltlang locality of Aizawl. The institute is easy to reach, as Aizawl is connected by air and road. The foundation stone of the NIT Mizoram Lengpui campus was laid by Kapil Sibal, Minister of Human Resource Development, on 13 October 2012. The campus was initially planned for Thenzawl by the Chief Minister of Mizoram, Pu Lalthanhawla, but was later shifted to a site near Lengpui Airport, close to Aizawl, the capital of Mizoram. The institute is currently functioning in a temporary campus in the city of Aizawl, comprising one administrative block and four academic blocks at Chaltlang, three hostel blocks at Tanhril and one hostel block at Durtlang. The permanent campus is proposed to be located at Lengpui, in Aizawl district, on a demarcated area of 190 acres (77 ha). Students are at present lodged in Saizahawla Boarding School (SBS) in Tanhril, near Mizoram University; there are two more halls of residence, one in Tanhril and the other in Durtlang. The institute 's departments include Computer Science & Engineering, Electronics & Communication Engineering, Electrical & Electronics Engineering, Mechanical Engineering and Civil Engineering. The institute organizes three fests every year: the technical festival Morphosis, the cultural festival Annunad and the sports festival Shaurya.
the largest and most recently developed part of the brain is the
Human brain - wikipedia The human brain is the central organ of the human nervous system, and with the spinal cord makes up the central nervous system. The brain consists of the cerebrum, the brainstem and the cerebellum. It controls most of the activities of the body, processing, integrating, and coordinating the information it receives from the sense organs, and making decisions as to the instructions sent to the rest of the body. The brain is contained in, and protected by, the skull bones of the head. The cerebrum is the largest part of the human brain. It is divided into two cerebral hemispheres. The cerebral cortex is an outer layer of grey matter, covering the core of white matter. The cortex is split into the neocortex and the much smaller allocortex. The neocortex is made up of six neuronal layers, while the allocortex has three or four. Each hemisphere is conventionally divided into four lobes -- the frontal, temporal, parietal, and occipital lobes. The frontal lobe is associated with executive functions including self - control, planning, reasoning, and abstract thought, while the occipital lobe is dedicated to vision. Within each lobe, cortical areas are associated with specific functions, such as the sensory, motor and association regions. Although the left and right hemispheres are broadly similar in shape and function, some functions are associated with one side, such as language in the left and visual - spatial ability in the right. The hemispheres are connected by commissural nerve tracts, the largest being the corpus callosum. The cerebrum is connected by the brainstem to the spinal cord. The brainstem consists of the midbrain, the pons, and the medulla oblongata. The cerebellum is connected to the brainstem by pairs of tracts. Within the cerebrum is the ventricular system, consisting of four interconnected ventricles in which cerebrospinal fluid is produced and circulated. Underneath the cerebral cortex are several important structures, including the thalamus, the epithalamus, the pineal gland, the hypothalamus, the pituitary gland, and the subthalamus; the limbic structures, including the amygdala and the hippocampus; the claustrum, the various nuclei of the basal ganglia; the basal forebrain structures, and the three circumventricular organs. The cells of the brain include neurons and supportive glial cells. There are more than 86 billion neurons in the brain, and a more or less equal number of other cells. Brain activity is made possible by the interconnections of neurons and their release of neurotransmitters in response to nerve impulses. Neurons form elaborate neural networks of neural pathways and circuits. The whole circuitry is driven by the process of neurotransmission. The brain is protected by the skull, suspended in cerebrospinal fluid, and isolated from the bloodstream by the blood -- brain barrier. However, the brain is still susceptible to damage, disease, and infection. Damage can be caused by trauma, or a loss of blood supply known as a stroke. The brain is susceptible to degenerative disorders, such as Parkinson 's disease, dementias including Alzheimer 's disease, and multiple sclerosis. Psychiatric conditions, including schizophrenia and clinical depression, are thought to be associated with brain dysfunctions. The brain can also be the site of tumours, both benign and malignant; these mostly originate from other sites in the body. The study of the anatomy of the brain is neuroanatomy, while the study of its function is neuroscience. 
A number of techniques are used to study the brain. Specimens from other animals, which may be examined microscopically, have traditionally provided much information. Medical imaging technologies such as functional neuroimaging, and electroencephalography (EEG) recordings, are important in studying the brain. The medical history of people with brain injury has provided insight into the function of each part of the brain. In culture, the philosophy of mind has for centuries attempted to address the question of the nature of consciousness and the mind - body problem. The pseudoscience of phrenology attempted to localise personality attributes to regions of the cortex in the 19th century. In science fiction, brain transplants are imagined in tales such as the 1942 novel Donovan 's Brain. The adult human brain weighs on average about 1.2 -- 1.4 kg (2.6 -- 3.1 lb), which is about 2 % of the total body weight, with a volume of around 1260 cm³ in men and 1130 cm³ in women, although there is substantial individual variation. Neurological differences between the sexes have not been shown to correlate in any simple way with IQ or other measures of cognitive performance. The cerebrum, consisting of the cerebral hemispheres, forms the largest part of the brain and is situated above the other brain structures. The outer region of the hemispheres, the cerebral cortex, is grey matter, consisting of cortical layers of neurons. Each hemisphere is divided into four main lobes. The brainstem, resembling a stalk, attaches to and leaves the cerebrum at the start of the midbrain area. The brainstem includes the midbrain, the pons, and the medulla oblongata. Behind the brainstem is the cerebellum (Latin: little brain). The cerebrum, brainstem, cerebellum, and spinal cord are covered by three membranes called meninges. The membranes are the tough dura mater, the middle arachnoid mater and the more delicate inner pia mater. Between the arachnoid mater and the pia mater is the subarachnoid space, which contains the cerebrospinal fluid. In the cerebral cortex, close to the basement membrane of the pia mater, is a limiting membrane called the glia limitans; this is the outermost membrane of the cortex. The living brain is very soft, having a gel - like consistency similar to soft tofu. The cortical layers of neurons constitute much of the brain 's grey matter, while the deeper subcortical regions of myelinated axons make up the white matter. The cerebrum is the largest part of the human brain, and is divided into nearly symmetrical left and right hemispheres by a deep groove, the longitudinal fissure. The outer part of the cerebrum is the cerebral cortex, made up of grey matter arranged in layers. It is 2 to 4 millimetres (0.079 to 0.157 in) thick, and deeply folded to give a convoluted appearance. Beneath the cortex is the white matter of the brain. The largest part of the cerebral cortex is the neocortex, which has six neuronal layers. The rest of the cortex is allocortex, which has three or four layers. The hemispheres are connected by five commissures that span the longitudinal fissure; the largest of these is the corpus callosum. The surface of the brain is folded into ridges (gyri) and grooves (sulci), many of which are named, usually according to their position, such as the frontal gyrus of the frontal lobe or the central sulcus separating the central regions of the hemispheres. There are many small variations in the secondary and tertiary folds.
Each hemisphere is conventionally divided into four lobes; the frontal lobe, parietal lobe, temporal lobe, and occipital lobe, named according to the skull bones that overlie them. Each lobe is associated with one or two specialised functions though there is some functional overlap between them. The cortex is mapped by divisions into about fifty different functional areas known as Brodmann 's areas. These areas are distinctly different when seen under a microscope. The cortex is divided into two main functional areas -- a motor cortex and a sensory cortex. The primary sensory areas receive signals from the sensory nerves and tracts by way of relay nuclei in the thalamus. Primary sensory areas include the visual cortex of the occipital lobe, the auditory cortex in parts of the temporal lobe and insular cortex, and the somatosensory cortex in the parietal lobe. The primary motor cortex, which sends axons down to motor neurons in the brainstem and spinal cord, occupies the rear portion of the frontal lobe, directly in front of the somatosensory area. The remaining parts of the cortex, are called the association areas. These areas receive input from the sensory areas and lower parts of the brain and are involved in the complex cognitive processes of perception, thought, and decision - making. The main functions of the frontal lobe are to control attention, abstract thinking, behavior, problem solving tasks, and physical reactions and personality. The occipital lobe is the smallest lobe; its main functions are visual reception, visual - spatial processing, movement, and colour recognition. There is a smaller occipital lobule in the lobe known as the cuneus. The temporal lobe controls auditory and visual memories, language, and some hearing and speech. The cerebrum contains the ventricles where the cerebrospinal fluid is produced and circulated. Below the corpus callosum is the septum pellucidum, a membrane that separates the lateral ventricles. Beneath the lateral ventricles is the thalamus and to the front and below this is the hypothalamus. The hypothalamus leads on to the pituitary gland. At the back of the thalamus is the brainstem. The basal ganglia, also called basal nuclei, are a set of structures deep within the hemispheres involved in behaviour and movement regulation. The largest component is the striatum, others are the globus pallidus, the substantia nigra and the subthalamic nucleus. Part of the dorsal striatum, the putamen, and the globus pallidus, lie separated from the lateral ventricles and thalamus by the internal capsule, whereas the caudate nucleus stretches around and abuts the lateral ventricles on their outer sides. Below and in front of the striatum are a number of basal forebrain structures. These include the nucleus accumbens, nucleus basalis, diagonal band of Broca, substantia innominata, and the medial septal nucleus. These structures are important in producing the neurotransmitter, acetylcholine, which is then distributed widely throughout the brain. The basal forebrain, in particular the nucleus basalis, is considered to be the major cholinergic output of the central nervous system to the striatum and neocortex. The cerebellum is divided into an anterior lobe, a posterior lobe, and the flocculonodular lobe. The anterior and posterior lobes are connected in the middle by the vermis. The cerebellum has a much thinner outer cortex that is narrowly furrowed horizontally. Viewed from underneath between the two lobes is the third lobe the flocculonodular lobe. 
The cerebellum rests at the back of the cranial cavity, lying beneath the occipital lobes, and is separated from these by the cerebellar tentorium, a sheet of fibre. It is connected to the midbrain of the brainstem by the superior cerebellar peduncles, to the pons by the middle cerebellar peduncles, and to the medulla by the inferior cerebellar peduncles. The cerebellum consists of an inner medulla of white matter and an outer cortex of richly folded grey matter. The cerebellum 's anterior and posterior lobes appear to play a role in the coordination and smoothing of complex motor movements, and the flocculonodular lobe in the maintenance of balance although debate exists as to its cognitive, behavioral and motor functions. The brainstem lies beneath the cerebrum and consists of the midbrain, pons and medulla. It lies in the back part of the skull, resting on the part of the base known as the clivus, and ends at the foramen magnum, a large opening in the occipital bone. The brainstem continues below this as the spinal cord, protected by the vertebral column. Ten of the twelve pairs of cranial nerves emerge directly from the brainstem. The brainstem also contains nuclei of many cranial and peripheral nerves, as well as nuclei involved in the regulation of many essential processes including breathing, control of eye movements and balance. The reticular formation, a network of nuclei of ill - defined formation, is present within and along the length of the brainstem. Many nerve tracts, which transmit information to and from the cerebral cortex to the rest of the body, pass through the brainstem. The human brain is primarily composed of neurons, glial cells, neural stem cells, and blood vessels. Types of neuron include interneurons, pyramidal cells including Betz cells, motor neurons (upper and lower motor neurons), and cerebellar Purkinje cells. Betz cells are the largest cells (by size of cell body) in the nervous system. The adult human brain is estimated to contain 86 ± 8 billion neurons, with a roughly equal number (85 ± 10 billion) of non-neuronal cells. Out of these neurons, 16 billion (19 %) are located in the cerebral cortex, and 69 billion (80 %) are in the cerebellum. Types of glial cell are astrocytes (including Bergmann glia), oligodendrocytes, ependymal cells (including tanycytes), radial glial cells and microglia. Astrocytes are the largest of the glial cells. They are stellate cells with many processes radiating from their cell bodies. Some of these processes end as perivascular end - feet on capillary walls. The glia limitans of the cortex is made up of astrocyte foot processes that serve in part to contain the cells of the brain. Mast cells are white blood cells that interact in the neuroimmune system in the brain. Mast cells in the central nervous system are present in a number of brain structures and in the meninges; they mediate neuroimmune responses in inflammatory conditions and help to maintain the blood -- brain barrier, particularly in brain regions where the barrier is absent. Across systems, mast cells serve as the main effector cell through which pathogens can affect the gut -- brain axis. Some 400 genes are shown to be brain - specific. In all neurons ELAVL3 is expressed, and in pyramidal neurons NRGN and REEP2 are also expressed. GAD1 essential for the biosynthesis of GABA is expressed in interneurons. Proteins expressed in glial cells are astrocyte markers GFAP, and S100B. Myelin basic protein and the transcription factor OLIG2 are expressed in oligodendrocytes. 
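The percentages quoted above for the distribution of neurons follow directly from the counts given; a minimal check in Python, using only the figures stated in this section:

    total_neurons = 86e9   # estimated total, as quoted above
    cortex = 16e9          # neurons in the cerebral cortex
    cerebellum = 69e9      # neurons in the cerebellum

    print(f"cortex:     {cortex / total_neurons:.1%}")      # about 18.6%, rounded to 19% in the text
    print(f"cerebellum: {cerebellum / total_neurons:.1%}")  # about 80.2%, rounded to 80% in the text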
Cerebrospinal fluid is a clear, colourless transcellular fluid that circulates around the brain in the subarachnoid space, in the ventricular system, and in the central canal of the spinal cord. It also fills some gaps in the subarachnoid space, known as subarachnoid cisterns. The four ventricles, two lateral, a third, and a fourth ventricle, all contain choroid plexus that produces cerebrospinal fluid. The third ventricle lies in the midline and is connected to the lateral ventricles. A single duct, the cerebral aqueduct, connects the third ventricle to the fourth ventricle, which lies between the pons and the cerebellum. Three separate openings, the middle and two lateral apertures, drain the cerebrospinal fluid from the fourth ventricle to the cisterna magna, one of the major cisterns. From here, cerebrospinal fluid circulates around the brain and spinal cord in the subarachnoid space, between the arachnoid mater and pia mater. At any one time, there is about 150 mL of cerebrospinal fluid -- most within the subarachnoid space. It is constantly being regenerated and absorbed, and is replaced about once every 5 -- 6 hours. In other parts of the body, circulation in the lymphatic system clears extracellular waste products from the cell tissue. For the tissue of the brain, such a system has not yet been identified. However, the presence of a glymphatic system has been proposed. Newer studies (2015) from two laboratories have shown the presence of meningeal lymphatic vessels running alongside the blood vessels, and these have been shown to possess lymph valves and to be more extensive at the base of the brain, where they exit with the cranial nerves. The internal carotid arteries supply oxygenated blood to the front of the brain and the vertebral arteries supply blood to the back of the brain. These two circulations join together in the circle of Willis, a ring of connected arteries that lies in the interpeduncular cistern between the midbrain and pons. The internal carotid arteries are branches of the common carotid arteries. They enter the cranium through the carotid canal, travel through the cavernous sinus and enter the subarachnoid space. They then enter the circle of Willis, where two branches, the anterior cerebral arteries, emerge. These branches travel forward and then upward along the longitudinal fissure, and supply the front and midline parts of the brain. One or more small anterior communicating arteries join the two anterior cerebral arteries shortly after they emerge as branches. The internal carotid arteries continue forward as the middle cerebral arteries. They travel sideways along the sphenoid bone of the eye socket, then upwards through the insula cortex, where final branches arise. The middle cerebral arteries send branches along their length. The vertebral arteries emerge as branches of the left and right subclavian arteries. They travel upward through the transverse foramina -- spaces in the cervical vertebrae -- and then emerge as two vessels, one on the left and one on the right of the medulla. They give off one of the three cerebellar branches. The vertebral arteries join in front of the middle part of the medulla to form the larger basilar artery, which sends multiple branches to supply the medulla and pons, and the two other anterior and superior cerebellar branches. Finally, the basilar artery divides into the two posterior cerebral arteries.
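A rough sense of the turnover implied by these figures (about 150 mL of cerebrospinal fluid, replaced about once every 5 -- 6 hours) can be obtained with a back - of - the - envelope calculation; this is only an order - of - magnitude sketch of the quoted numbers, not a measured production rate:

    csf_volume_ml = 150            # approximate total CSF volume quoted above
    for turnover_hours in (5, 6):  # "replaced about once every 5-6 hours"
        daily_ml = csf_volume_ml * 24 / turnover_hours
        print(f"turnover every {turnover_hours} h -> roughly {daily_ml:.0f} mL produced and reabsorbed per day")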
These travel outwards, around the superior cerebellar peduncles, and along the top of the cerebellar tentorium, where they send branches to supply the temporal and occipital lobes. Each posterior cerebral artery sends a small posterior communicating artery to join with the internal carotid arteries. Cerebral veins drain deoxygenated blood from the brain. The brain has two main networks of veins: an exterior or superficial network on the surface of the cerebrum, which has three branches, and an interior network. These two networks communicate via anastomosing (joining) veins. The veins of the brain drain into larger cavities, the dural venous sinuses, usually situated between the dura mater and the covering of the skull. Blood from the cerebellum and midbrain drains into the great cerebral vein. Blood from the medulla and pons of the brainstem has a variable pattern of drainage, either into the spinal veins or into adjacent cerebral veins. The blood in the deep part of the brain drains through a venous plexus into the cavernous sinus at the front, the superior and inferior petrosal sinuses at the sides, and the inferior sagittal sinus at the back. Blood drains from the outer brain into the large superior sagittal sinus, which rests in the midline on top of the brain. Blood from here joins with blood from the straight sinus at the confluence of sinuses. Blood from here drains into the left and right transverse sinuses. These then drain into the sigmoid sinuses, which receive blood from the cavernous sinus and superior and inferior petrosal sinuses. The sigmoid sinuses drain into the large internal jugular veins. The larger arteries throughout the brain supply blood to smaller capillaries. These smallest of blood vessels in the brain are lined with cells joined by tight junctions, and so fluids do not seep in or leak out to the same degree as they do in other capillaries, thereby creating the blood -- brain barrier. Pericytes play a major role in the formation of the tight junctions. The barrier is less permeable to larger molecules, but is still permeable to water, carbon dioxide, oxygen, and most fat - soluble substances (including anaesthetics and alcohol). The blood -- brain barrier is not present in areas of the brain that may need to respond to changes in body fluids, such as the pineal gland, area postrema, and some areas of the hypothalamus. There is a similar blood -- cerebrospinal fluid barrier, which serves the same purpose as the blood -- brain barrier, but facilitates the transport of different substances into the brain owing to the distinct structural characteristics of the two barrier systems. At the beginning of the third week of development, the embryonic ectoderm forms a thickened strip called the neural plate. By the fourth week of development, the neural plate has widened to give a broad cephalic end, a less broad middle part and a narrow caudal end. These swellings represent the beginnings of the forebrain, midbrain and hindbrain. Neural crest cells (derived from the ectoderm) populate the lateral edges of the plate at the neural folds. In the fourth week, in the neurulation stage, the neural plate folds and closes to form the neural tube, bringing together the neural crest cells at the neural crest. The neural crest runs the length of the tube with cranial neural crest cells at the cephalic end and caudal neural crest cells at the tail. Cells detach from the crest and migrate in a craniocaudal (head to tail) wave inside the tube.
Cells at the cephalic end give rise to the brain, and cells at the caudal end give rise to the spinal cord. The tube flexes as it grows, forming the crescent - shaped cerebral hemispheres at the head. The cerebral hemispheres first appear on day 32. Early in the fourth week, the cephalic part bends sharply forward in a cephalic flexure. This flexed part becomes the forebrain (prosencephalon); the adjoining curving part becomes the midbrain (mesencephalon), and the part caudal to the flexure becomes the hindbrain (rhombencephalon). These three areas are formed as swellings known as the primitive vesicles. In the fifth week of development, five brain vesicles have formed. The forebrain separates into two vesicles -- an anterior telencephalon and a posterior diencephalon. The telencephalon gives rise to the cerebral cortex, basal ganglia, and related structures. The diencephalon gives rise to the thalamus and hypothalamus. The hindbrain also splits into two areas -- the metencephalon and the myelencephalon. The metencephalon gives rise to the cerebellum and pons. The myelencephalon gives rise to the medulla oblongata. Also during the fifth week, the brain divides into repeating segments called neuromeres. In the hindbrain, these are known as rhombomeres. A characteristic of the brain is gyrification (wrinkling of the cortex). In the womb, the cortex starts off smooth but begins to form fissures that begin to mark out the different lobes of the brain. Scientists do not have a clear answer as to why the cortex later wrinkles and folds, but the wrinkling and folding is associated with intelligence and neurological disorders. The fissures form as a result of the growing hemispheres that increase in size due to a sudden growth in cells of the grey matter. The underlying white matter does not grow at the same rate and the hemispheres are crowded into the small cranial vault. The first cleft to appear in the fourth month is the lateral cerebral fossa. The expanding caudal end of the hemisphere has to curve over in a forward direction to fit into the restricted space. This covers the fossa and turns it into a much deeper groove known as the lateral sulcus, and this marks out the temporal lobe. By the sixth month, other sulci have formed that demarcate the frontal, parietal, and occipital lobes. A gene present in the human genome (ArhGAP11B) may play a major role in gyrification and encephalisation. The motor system of the brain is responsible for the generation and control of movement. Generated movements pass from the brain through nerves to motor neurons in the body, which control the action of muscles. The corticospinal tract carries movements from the brain, through the spinal cord, to the torso and limbs. The cranial nerves carry movements related to the eyes, mouth and face. Gross movement -- such as locomotion and the movement of arms and legs -- is generated in the motor cortex, which is divided into three parts. The primary motor cortex, found in the precentral gyrus, has sections dedicated to the movement of different body parts. These movements are supported and regulated by two other areas, lying anterior to the primary motor cortex: the premotor area and the supplementary motor area. The hands and mouth have a much larger area dedicated to them than other body parts, allowing finer movement; this has been visualised in a motor cortical homunculus.
Impulses generated from the motor cortex travel along the corticospinal tract along the front of the medulla and cross over (decussate) at the medullary pyramids. These then travel down the spinal cord, with most connecting to interneurons, in turn connecting to lower motor neurons within the grey matter that then transmit the impulse to move to muscles themselves. The cerebellum and basal ganglia, play a role in fine, complex and coordinated muscle movements. Connections between the cortex and the basal ganglia control muscle tone, posture and movement initiation, and are referred to as the extrapyramidal system. The sensory nervous system is involved with the reception and processing of sensory information. This information is received through the cranial nerves, through tracts in the spinal cord, and directly at centres of the brain exposed to the blood. The brain also receives and interprets information from the special senses (vision, smell, hearing, and taste). Mixed motor and sensory signals are also integrated. From the skin, the brain receives information about fine touch, pressure, pain, vibration and temperature. From the joints, the brain receives information about joint position. The sensory cortex is found just near the motor cortex, and, like the motor cortex, has areas related to sensation from different body parts. Sensation collected by a sensory receptor on the skin is changed to a nerve signal, that is passed up a series of neurons through tracts in the spinal cord. The posterior column -- medial lemniscus pathway contains information about fine touch, vibration and position of joints. Neurons travel up the back part of the spinal cord to the back part of the medulla, where they connect with "second order '' neurons that immediately swap sides. These neurons then travel upwards into the ventrobasal complex in the thalamus where they connect with "third order '' neurons, and travel up to the sensory cortex. The spinothalamic tract carries information about pain, temperature, and gross touch. Neurons travel up the spinal cord and connect with second - order neurons in the reticular formation of the brainstem for pain and temperature, and also at the ventrobasal complex of the medulla for gross touch. Vision is generated by light that hits the retina of the eye. Photoreceptors in the retina transduce the sensory stimulus of light into an electrical nerve signal that is sent to the visual cortex in the occipital lobe. Vision from the left visual field is received on the right side of each retina (and vice versa) and passes through the optic nerve until some information changes sides, so that all information about one side of the visual field passes through tracts in the opposite side of the brain. The nerves reach the brain at the lateral geniculate nucleus, and travel through the optic radiation to reach the visual cortex. Hearing and balance are both generated in the inner ear. The movement of liquids within the inner ear is generated by motion (for balance) and transmitted vibrations generated by the ossicles (for sound). This creates a nerve signal that passes through the vestibulocochlear nerve. From here, it passes through to the cochlear nuclei, the superior olivary nucleus, the medial geniculate nucleus, and finally the auditory radiation to the auditory cortex. The sense of smell is generated by receptor cells in the epithelium of the olfactory mucosa in the nasal cavity. 
This information passes through a relatively permeable part of the skull to the olfactory nerve. This nerve transmits to the neural circuitry of the olfactory bulb, from which information is passed to the olfactory cortex. Taste is generated from receptors on the tongue and passed along the facial and glossopharyngeal nerves into the solitary tract in the brainstem. Some taste information is also passed from the pharynx into this area via the vagus nerve. Information is then passed from here through the thalamus into the gustatory cortex. Autonomic functions of the brain include the regulation, or rhythmic control, of the heart rate and rate of breathing, and the maintenance of homeostasis. Blood pressure and heart rate are influenced by the vasomotor centre of the medulla, which causes arteries and veins to be somewhat constricted at rest. It does this by influencing the sympathetic and parasympathetic nervous systems via the vagus nerve. Information about blood pressure is generated by baroreceptors in the wall of the aortic arch, and passed to the brain along the afferent fibres of the vagus nerve. Information about pressure changes in the carotid sinus comes from baroreceptors in the wall of the carotid sinus, located near the division of the carotid artery, and this is passed via a nerve joining with the glossopharyngeal nerve. This information travels up to the solitary nucleus in the medulla. Signals from here influence the vasomotor centre to adjust vein and artery constriction accordingly. The brain controls the rate of breathing, mainly by respiratory centres in the medulla and pons. The respiratory centres control respiration by generating motor signals that are passed down the spinal cord, along the phrenic nerve to the diaphragm and other muscles of respiration. This is a mixed nerve that carries sensory information back to the centres. There are four respiratory centres, three with a more clearly defined function, and an apneustic centre with a less clear function. In the medulla, a dorsal respiratory group causes the desire to breathe in and receives sensory information directly from the body. Also in the medulla, the ventral respiratory group influences breathing out during exertion. In the pons, the pneumotaxic centre influences the duration of each breath, and the apneustic centre seems to have an influence on inhalation. The respiratory centres directly sense blood carbon dioxide and pH. Information about blood oxygen, carbon dioxide and pH levels is also sensed on the walls of arteries in the peripheral chemoreceptors of the aortic and carotid bodies. This information is passed via the vagus and glossopharyngeal nerves to the respiratory centres. High carbon dioxide, an acidic pH, or low oxygen stimulate the respiratory centres. The desire to breathe in is also affected by pulmonary stretch receptors in the lungs which, when activated, prevent the lungs from overinflating by transmitting information to the respiratory centres via the vagus nerve. The hypothalamus, in the diencephalon, is involved in regulating many functions of the body. Functions include neuroendocrine regulation, regulation of the circadian rhythm, control of the autonomic nervous system, and the regulation of fluid and food intake. The circadian rhythm is controlled by two main cell groups in the hypothalamus. The anterior hypothalamus includes the suprachiasmatic nucleus and the ventrolateral preoptic nucleus which, through gene expression cycles, generate a roughly 24 - hour circadian clock.
In the circadian day, an ultradian rhythm takes control of the sleeping pattern. Sleep is an essential requirement for the body and brain and allows the closing down and resting of the body 's systems. There are also findings that suggest that the daily build - up of toxins in the brain is removed during sleep. Whilst awake, the brain consumes a fifth of the body 's total energy. Sleep necessarily reduces this use and gives time for the restoration of energy - giving ATP. The effects of sleep deprivation show the absolute need for sleep. The lateral hypothalamus contains orexinergic neurons that control appetite and arousal through their projections to the ascending reticular activating system. The hypothalamus controls the pituitary gland through the release of peptides such as oxytocin and vasopressin, as well as dopamine, into the median eminence. Through its autonomic projections, the hypothalamus is involved in regulating functions such as blood pressure, heart rate, breathing, sweating, and other homeostatic mechanisms. The hypothalamus also plays a role in thermal regulation, and, when stimulated by the immune system, is capable of generating a fever. The hypothalamus is influenced by the kidneys -- when blood pressure falls, the renin released by the kidneys stimulates a need to drink. The hypothalamus also regulates food intake through autonomic signals, and hormone release by the digestive system. While language functions were traditionally thought to be localized to Wernicke 's area and Broca 's area, it is now mostly accepted that a wider network of cortical regions contributes to language functions. The study of how language is represented, processed, and acquired by the brain is called neurolinguistics, which is a large multidisciplinary field drawing from cognitive neuroscience, cognitive linguistics, and psycholinguistics. The cerebrum has a contralateral organisation with each hemisphere of the brain interacting primarily with one half of the body: the left side of the brain interacts with the right side of the body, and vice versa. The developmental cause for this is uncertain. Motor connections from the brain to the spinal cord, and sensory connections from the spinal cord to the brain, both cross sides in the brainstem. Visual input follows a more complex rule: the optic nerves from the two eyes come together at a point called the optic chiasm, and half of the fibres from each nerve split off to join the other. The result is that connections from the left half of the retina, in both eyes, go to the left side of the brain, whereas connections from the right half of the retina go to the right side of the brain. Because each half of the retina receives light coming from the opposite half of the visual field, the functional consequence is that visual input from the left side of the world goes to the right side of the brain, and vice versa. Thus, the right side of the brain receives somatosensory input from the left side of the body, and visual input from the left side of the visual field. The left and right sides of the brain appear symmetrical, but they function asymmetrically. For example, the counterpart of the left - hemisphere motor area controlling the right hand is the right - hemisphere area controlling the left hand. There are, however, several important exceptions, involving language and spatial cognition. In most people, the left hemisphere is dominant for language.
If a key language area in the left hemisphere is damaged, it can leave the victim unable to speak or understand, whereas equivalent damage to the right hemisphere would cause only minor impairment to language skills. A substantial part of current understanding of the interactions between the two hemispheres has come from the study of "split - brain patients '' -- people who underwent surgical transection of the corpus callosum in an attempt to reduce the severity of epileptic seizures. These patients do not show unusual behavior that is immediately obvious, but in some cases can behave almost like two different people in the same body, with the right hand taking an action and then the left hand undoing it. These patients, when briefly shown a picture on the right side of the point of visual fixation, are able to describe it verbally, but when the picture is shown on the left, are unable to describe it, but may be able to give an indication with the left hand of the nature of the object shown. Emotions are generally defined as two - step multicomponent processes involving elicitation, followed by psychological feelings, appraisal, expression, autonomic responses, and action tendencies. Attempts to localize basic emotions to certain brain regions have been controversial, with some research finding no evidence for specific locations corresponding to emotions, and instead circuitry involved in general emotional processes. The amygdala, orbitofrontal cortex, mid and anterior insula cortex and lateral prefrontal cortex, appeared to be involved in generating the emotions, while weaker evidence was found for the ventral tegmental area, ventral pallidum and nucleus accumbens in incentive salience. Others, however, have found evidence of activation of specific regions, such as the basal ganglia in happiness, the subcallosal cingulate cortex in sadness, and amygdala in fear. The brain is responsible for cognition. The brain gives rise to countless cognitive processes that constitute cognition as a whole; however, higher cognitive function is derived from the set of executive functions, which are a group of cognitive processes that allow the cognitive control of behavior: selecting and successfully monitoring behaviors that facilitate the attainment of chosen goals. Executive functions include the ability to filter information and tune out irrelevant stimuli with attentional control and cognitive inhibition, the ability to process and manipulate information held in working memory, the ability to think about multiple concepts simultaneously and switch tasks with cognitive flexibility, the ability to inhibit impulses and prepotent responses with inhibitory control, and the ability to determine the relevance of information or appropriateness of an action. Higher order executive functions require the simultaneous use of multiple basic executive functions and include planning and fluid intelligence (i.e., reasoning and problem solving). The prefrontal cortex plays a significant role in mediating executive functions. Neuroimaging during neuropsychological tests of executive function, such as the stroop test and working memory tests, have found that cortical maturation of the prefrontal cortex correlates with executive function in children. Planning involves activation of the dorsolateral prefrontal cortex (DLPFC), anterior cingulate cortex, angular prefrontal cortex, right prefrontal cortex, and supramarginal gyrus. 
Working memory manipulation involves the DLPFC, inferior frontal gyrus, and areas of the parietal cortex. Inhibitory control involves multiple areas of the prefrontal cortex as well as the caudate nucleus and subthalamic nucleus. Task shifting does n't involve specific regions of the brain, but instead involves multiple regions of the prefrontal cortex and parietal lobe. Brain activity is made possible by the interconnections of neurons that are linked together to reach their targets. A neuron consists of a cell body, axon, and dendrites. Dendrites are often extensive branches that receive information in the form of signals from the axon terminals of other neurons. The signals received may cause the neuron to initiate an action potential (an electrochemical signal or nerve impulse) which is sent along its axon to the axon terminal, to connect with the dendrites or with the cell body of another neuron. An action potential is initiated at the initial segment of an axon, which contains a complex of proteins. When an action potential, reaches the axon terminal it triggers the release of a neurotransmitter at a synapse that propagates a signal that acts on the target cell. These chemical neurotransmitters include dopamine, serotonin, GABA, glutamate, and acetylcholine. GABA is the major inhibitory neurotransmitter in the brain, and glutamate is the major excitatory neurotransmitter. Neurons link at synapses to form pathways and elaborate neural networks, and the activity between them is driven by the process of neurotransmission. The brain consumes up to 20 % of the energy used by the human body, more than any other organ. In humans, blood glucose is the primary source of energy for most cells and is critical for normal function in a number of tissues, including the brain. The human brain consumes approximately 60 % of blood glucose in fasted, sedentary individuals. Brain metabolism normally relies upon blood glucose as an energy source, but during times of low glucose (such as fasting, endurance exercise, or limited carbohydrate intake), the brain uses ketone bodies for fuel with a smaller need for glucose. The brain can also utilize lactate during exercise. The brain stores glucose in the form of glycogen, albeit in significantly smaller amounts than that found in the liver or skeletal muscle. Long - chain fatty acids can not cross the blood -- brain barrier, but the liver can break these down to produce ketone bodies. However, short - chain fatty acids (e.g., butyric acid, propionic acid, and acetic acid) and the medium - chain fatty acids, octanoic acid and heptanoic acid, can cross the blood -- brain barrier and be metabolized by brain cells. Although the human brain represents only 2 % of the body weight, it receives 15 % of the cardiac output, 20 % of total body oxygen consumption, and 25 % of total body glucose utilization. The brain mostly uses glucose for energy, and deprivation of glucose, as can happen in hypoglycemia, can result in loss of consciousness. The energy consumption of the brain does not vary greatly over time, but active regions of the cortex consume somewhat more energy than inactive regions: this fact forms the basis for the functional brain imaging methods PET and fMRI. These functional imaging techniques provide a three - dimensional image of metabolic activity. 
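To put the energy figures above in rough perspective, a short sketch converts the quoted 20 % share into an absolute power figure; the 2,000 kcal per day total energy budget is an assumed typical value, not a number from this article:

    daily_intake_kcal = 2000   # assumed typical adult energy budget (not from the article)
    brain_fraction = 0.20      # "up to 20% of the energy used by the human body"

    brain_kcal_per_day = daily_intake_kcal * brain_fraction
    brain_watts = brain_kcal_per_day * 4184 / 86400   # kcal/day -> joules/day -> watts
    print(f"about {brain_kcal_per_day:.0f} kcal per day, or roughly {brain_watts:.0f} W of continuous power")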
The function of sleep is not fully understood; however, there is evidence that sleep enhances the clearance of metabolic waste products, some of which are potentially neurotoxic, from the brain and may also permit repair. Evidence suggests that the increased clearance of metabolic waste during sleep occurs via increased functioning of the glymphatic system. Sleep may also have an effect on cognitive function by weakening unnecessary connections. The brain is not fully understood, and research is ongoing. Neuroscientists, along with researchers from allied disciplines, study how the human brain works. The boundaries between the specialties of neuroscience, neurology and other disciplines such as psychiatry have faded as they are all influenced by basic research in neuroscience. Neuroscience research has expanded considerably in recent decades. The "Decade of the Brain '', an initiative of the United States Government in the 1990s, is considered to have marked much of this increase in research, and was followed in 2013 by the BRAIN Initiative. The Human Connectome Project was a five - year study launched in 2009 to analyse the anatomical and functional connections of parts of the brain, and has provided much data. Information about the structure and function of the human brain comes from a variety of experimental methods, including animals and humans. Information about brain trauma and stroke has provided information about the function of parts of the brain and the effects of brain damage. Neuroimaging is used to visualise the brain and record brain activity. Electrophysiology is used to measure, record and monitor the electrical activity of the cortex. Measurements may be of local field potentials of cortical areas, or of the activity of a single neuron. An electroencephalogram can record the electrical activity of the cortex using electrodes placed non-invasively on the scalp. Invasive measures include electrocorticography, which uses electrodes placed directly on the exposed surface of the brain. This method is used in cortical stimulation mapping, used in the study of the relationship between cortical areas and their systemic function. By using much smaller microelectrodes, single - unit recordings can be made from a single neuron that give a high spatial resolution and high temporal resolution. This has enabled the linking of brain activity to behaviour, and the creation of neuronal maps. Functional neuroimaging techniques show changes in brain activity that relate to the function of specific brain areas. One technique is functional magnetic resonance imaging (fMRI) which has the advantages over earlier methods of SPECT and PET of not needing the use of radioactive materials and of offering a higher resolution. Another technique is functional near - infrared spectroscopy. These methods rely on the haemodynamic response that shows changes in brain activity in relation to changes in blood flow, useful in mapping functions to brain areas. Resting state fMRI looks at the interaction of brain regions whilst the brain is not performing a specific task. This is also used to show the default mode network. Any electrical current generates a magnetic field; neural oscillations induce weak magnetic fields, and in functional magnetoencephalography the current produced can show localised brain function in high resolution. Tractography uses MRI and image analysis to create 3D images of the nerve tracts of the brain. Connectograms give a graphical representation of the neural connections of the brain. 
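As an illustration of the kind of analysis applied to the electroencephalography recordings mentioned above, the sketch below estimates the power in the alpha band (8 -- 12 Hz) of a synthetic signal using Welch 's method; the sampling rate, signal amplitudes and band limits are illustrative assumptions, not values taken from the article:

    import numpy as np
    from scipy.signal import welch

    fs = 250.0                          # assumed sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)        # 10 seconds of synthetic data
    # Synthetic "EEG": a 10 Hz alpha-band oscillation buried in noise
    rng = np.random.default_rng(0)
    eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * rng.standard_normal(t.size)

    f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))      # power spectral density
    alpha = (f >= 8) & (f <= 12)
    alpha_power = np.trapz(psd[alpha], f[alpha])         # integrate the PSD over 8-12 Hz
    print(f"estimated alpha-band power: {alpha_power:.3e} V^2")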
Differences in brain structure can be measured in some disorders, notably schizophrenia and dementia. Different biological approaches using imaging have given more insight into, for example, the disorders of depression and obsessive - compulsive disorder. A key source of information about the function of brain regions is the effects of damage to them. Advances in neuroimaging have enabled objective insights into mental disorders, leading to faster diagnosis, more accurate prognosis, and better monitoring. Bioinformatics is a field of study that includes the creation and advancement of databases, and of computational and statistical techniques, which can be used in studies of the human brain, particularly in the areas of gene and protein expression. Bioinformatics, together with studies in genomics and functional genomics, has generated the need for DNA annotation and transcriptome technology, identifying genes and their location and function. GeneCards is a major database. As of 2017, just under 20,000 protein - coding genes are seen to be expressed in humans, and some 400 of these genes are brain - specific. The data that has been provided on gene expression in the brain has fuelled further research into a number of disorders. The long - term use of alcohol, for example, has been shown to alter gene expression in the brain, with cell - type specific changes that may relate to alcohol use disorder. These changes have been noted in the synaptic transcriptome in the prefrontal cortex, and are seen as a factor causing the drive to alcohol dependence, and also to other substance abuses. Other related studies have also shown evidence of synaptic alterations and their loss in the ageing brain. Changes in gene expression alter the levels of proteins in various pathways, and this has been shown to be evident in synaptic contact dysfunction or loss. This dysfunction has been seen to affect many structures of the brain and has a marked effect on inhibitory neurons, resulting in a decreased level of neurotransmission and subsequent cognitive decline and disease. Brain damage, or disease of the brain, can manifest in a wide variety of ways. Traumatic brain injury, for example in contact sport, after a fall, or in traffic or work accidents, can be associated with both immediate and longer - term problems. Immediate problems may include bleeding within the skull, which can compress the brain tissue or damage its blood supply, as well as skull fractures, injury to a particular area, deafness, and concussion. In addition to the site of injury, the opposite side of the brain may be affected, termed a contrecoup injury. Longer - term issues that may develop include post-traumatic stress, hydrocephalus, and chronic traumatic encephalopathy. Neurodegenerative diseases result in progressive loss of different aspects of brain function, and worsen with age. Common examples include dementias such as Alzheimer 's disease, alcoholic dementia or vascular dementia; Parkinson 's disease; and other rarer infectious, genetic, or metabolic causes such as Huntington 's disease, motor neuron diseases, HIV dementia, syphilis - related dementia and Wilson 's disease. Neurodegenerative diseases can affect different parts of the brain, and can affect movement, memory, and cognition. The brain, although protected by the blood -- brain barrier, can be affected by infections including viruses, bacteria and fungi. Infection may be of the meninges (meningitis), the brain matter (encephalitis), or within the brain matter (such as a cerebral abscess).
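Returning to the gene - expression figures discussed above (some 400 of roughly 20,000 protein - coding genes being brain - specific), a filter of this kind is typically run over a tissue - by - gene expression table; the sketch below uses a tiny, entirely hypothetical table and an illustrative enrichment rule, not the actual criteria of GeneCards or any other database:

    import pandas as pd

    # Hypothetical expression values in arbitrary units; only the gene names are real
    expr = pd.DataFrame({
        "gene":   ["ELAVL3", "NRGN", "GFAP", "ALB", "MBP"],
        "brain":  [52.0, 48.0, 95.0, 0.1, 120.0],
        "liver":  [0.2, 0.1, 0.5, 800.0, 0.1],
        "muscle": [0.1, 0.3, 0.4, 1.0, 0.2],
    })

    # Illustrative "brain-enriched" rule: brain expression at least five times
    # the highest expression seen in any other tissue
    other_max = expr[["liver", "muscle"]].max(axis=1)
    brain_enriched = expr[expr["brain"] >= 5 * other_max]
    print(brain_enriched["gene"].tolist())   # ['ELAVL3', 'NRGN', 'GFAP', 'MBP']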
Rare prion diseases, including Creutzfeldt -- Jakob disease and its variant, and kuru, may also affect the brain. The most common cancers in the brain come from elsewhere in the body -- most commonly the lung, breast and skin. Cancers of brain tissue can also occur, and originate from any tissue in and around the brain. Meningioma, cancer of the meninges around the brain, is more common than cancers of brain tissue. Cancers within the brain may cause symptoms related to their size or position, including headache and nausea, or the gradual development of focal symptoms such as difficulty seeing, swallowing or talking, or a change of mood. Cancers are in general investigated through the use of CT scans and MRI scans. A variety of other tests, including blood tests and lumbar puncture, may be used to investigate the cause of the cancer and to evaluate its type and stage. The corticosteroid dexamethasone is often given to decrease the swelling of brain tissue around a tumour. Surgery may be considered; however, given the complex nature of many tumours, or depending on tumour stage or type, radiotherapy or chemotherapy may be more suitable. Mental disorders, such as major depressive disorder, schizophrenia, bipolar disorder, post-traumatic stress disorder, attention deficit hyperactivity disorder, obsessive - compulsive disorder, Tourette syndrome, and addiction, are known to relate to the functioning of the brain. Treatment for mental disorders may include psychotherapy, psychiatry, social intervention and personal recovery work or cognitive behavioural therapy; the underlying issues and associated prognoses vary significantly between individuals. Epileptic seizures are thought to relate to abnormal electrical activity. Seizure activity can manifest as an absence of consciousness, as focal effects such as limb movement or impediments of speech, or be generalised in nature. Status epilepticus refers to a seizure or series of seizures that have not terminated within 30 minutes, although this definition has recently been revised. Seizures have a large number of causes; however, many seizures occur without a definitive cause being found. In a person with epilepsy, risk factors for further seizures may include sleeplessness, drug and alcohol intake, and stress. Seizures may be assessed using blood tests, EEG and various medical imaging techniques, based on the medical history and examination findings. In addition to treating an underlying cause and reducing exposure to risk factors, anticonvulsant medications can play a role in preventing further seizures. Some brain disorders such as Tay -- Sachs disease are congenital, and linked to genetic and chromosomal mutations. A rare group of congenital cephalic disorders known as lissencephaly is characterised by the lack of, or inadequacy of, cortical folding. Normal development of the brain can be affected during pregnancy by nutritional deficiencies, teratogens, infectious diseases, and by the use of recreational drugs and alcohol. A stroke is a decrease in blood supply to an area of the brain, causing cell death and brain injury. This can lead to a wide range of symptoms, including the "FAST '' symptoms of facial droop, arm weakness, and speech difficulties (including with speaking and finding words or forming sentences). Symptoms relate to the function of the affected area of the brain and can point to the likely site and cause of the stroke. 
Difficulties with movement, speech, or sight usually relate to the cerebrum, whereas imbalance, double vision, vertigo and symptoms affecting more than one side of the body usually relate to the brainstem or cerebellum. Most strokes result from loss of blood supply, typically because of an embolus, rupture of a fatty plaque or narrowing of small arteries. Strokes can also result from bleeding within the brain. Transient ischaemic attacks (TIAs) are strokes in which symptoms resolve within 24 hours. Investigation into the stroke will involve a medical examination (including a neurological examination) and the taking of a medical history, focusing on the duration of the symptoms and risk factors (including high blood pressure, atrial fibrillation, and smoking). Further investigation is needed in younger patients. An ECG and biotelemetry may be conducted to identify atrial fibrillation; an ultrasound can investigate narrowing of the carotid arteries; an echocardiogram can be used to look for clots within the heart, diseases of the heart valves or the presence of a patent foramen ovale. Blood tests are routinely done as part of the workup, including diabetes tests and a lipid profile. Some treatments for stroke are time - critical. These include clot dissolution or surgical removal of a clot for ischaemic strokes, and decompression for haemorrhagic strokes. As stroke is time - critical, hospital and even pre-hospital care of stroke involves expedited investigations -- usually a CT scan to investigate for a haemorrhagic stroke and a CT or MR angiogram to evaluate arteries that supply the brain. MRI scans, which are not as widely available, may be able to demonstrate the affected area of the brain more accurately, particularly with ischaemic stroke. After a stroke, a person may be admitted to a stroke unit, and treatments may be directed at preventing future strokes, including ongoing antiplatelet or anticoagulant medication (such as aspirin or clopidogrel), antihypertensives, and lipid - lowering drugs. A multidisciplinary team, including speech pathologists, physiotherapists, occupational therapists, and psychologists, plays a large role in supporting a person affected by a stroke and their rehabilitation. Brain death refers to an irreversible total loss of brain function. This is characterised by coma, loss of reflexes, and apnoea; however, the declaration of brain death varies geographically and is not always accepted. In some countries there is also a defined syndrome of brainstem death. Declaration of brain death can have profound implications as the declaration, under the principle of medical futility, will be associated with the withdrawal of life support, and as those with brain death often have organs suitable for organ donation. The process is often made more difficult by poor communication with patients ' families. When brain death is suspected, reversible differential diagnoses, such as hypothermia - induced coma and cognitive suppression related to electrolyte disturbances, neurological conditions or drugs, need to be excluded. Testing for reflexes can be of help in the decision, as can the absence of response and breathing. Clinical observations, including a total lack of responsiveness, a known diagnosis, and neural imaging evidence, may all play a role in the decision to pronounce brain death. Neuroanthropology is the study of the relationship between culture and the brain. It explores how the brain gives rise to culture, and how culture influences brain development. 
Cultural differences and their relation to brain development and structure are researched in different fields. The philosophy of the mind studies such issues as the problem of understanding consciousness and the mind -- body problem. The relationship between the brain and the mind is a significant challenge both philosophically and scientifically. This is because of the difficulty in explaining how mental activities, such as thoughts and emotions, can be implemented by physical structures such as neurons and synapses, or by any other type of physical mechanism. This difficulty was expressed by Gottfried Leibniz in the analogy known as Leibniz 's Mill: One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Doubt about the possibility of a mechanistic explanation of thought drove René Descartes, and most other philosophers along with him, to dualism: the belief that the mind is to some degree independent of the brain. There has always, however, been a strong argument in the opposite direction. There is clear empirical evidence that physical manipulations of, or injuries to, the brain (for example by drugs or by lesions, respectively) can affect the mind in potent and intimate ways. In the 19th century, the case of Phineas Gage, a railway worker who was injured by a stout iron rod passing through his brain, convinced both researchers and the public that cognitive functions were localised in the brain. Following this line of thinking, a large body of empirical evidence for a close relationship between brain activity and mental activity has led most neuroscientists and contemporary philosophers to be materialists, believing that mental phenomena are ultimately the result of, or reducible to, physical phenomena. The size of the brain and a person 's intelligence are not strongly related. Studies tend to indicate small to moderate correlations (averaging around 0.3 to 0.4) between brain volume and IQ. The most consistent associations are observed within the frontal, temporal, and parietal lobes, the hippocampi, and the cerebellum, but these only account for a relatively small amount of variance in IQ, which itself has only a partial relationship to general intelligence and real - world performance. Other animals, including whales and elephants have larger brains than humans. However, when the brain - to - body mass ratio is taken into account, the human brain is almost twice as large as that of a bottlenose dolphin, and three times as large as that of a chimpanzee. However, a high ratio does not of itself demonstrate intelligence: very small animals have high ratios and the treeshrew has the largest quotient of any mammal. Research has disproved some common misconceptions about the brain. These include both ancient and modern myths. It is not true that neurons are not replaced after the age of two; nor that only ten per cent of the brain is used. Popular culture has also oversimplified the lateralisation of the brain, suggesting that functions are completely specific to one side of the brain or the other. 
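To make the brain - to - body mass ratio comparison above concrete, the short Python sketch below computes the ratio for a few species. The mass figures are rough, commonly cited approximations chosen purely for illustration (they are assumptions of this sketch, not values taken from the article), so the exact percentages should not be read as authoritative.

# Illustrative brain-to-body mass ratios.
# The masses below are rough approximations assumed for this sketch,
# not figures taken from the article.
species_masses_kg = {
    # name: (approximate brain mass in kg, approximate body mass in kg)
    "human": (1.4, 65.0),
    "bottlenose dolphin": (1.6, 150.0),
    "chimpanzee": (0.4, 50.0),
}

for name, (brain_kg, body_kg) in species_masses_kg.items():
    ratio = brain_kg / body_kg
    print(f"{name}: brain/body mass ratio ~ {ratio:.4f} ({ratio * 100:.1f}% of body mass)")

# With these rough figures the human ratio (~2.2%) comes out roughly twice the
# dolphin's (~1.1%) and close to three times the chimpanzee's (~0.8%), in line
# with the comparison in the text. A high ratio by itself is not a measure of
# intelligence: very small animals such as the treeshrew have even higher ratios.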
Akio Mori coined the term game brain for the poorly supported claim that spending long periods playing video games harmed the brain 's pre-frontal region and the expression of emotion and creativity. Historically, the brain featured in popular culture through phrenology, a pseudoscience that assigned personality attributes to different regions of the cortex. The cortex remains important in popular culture as covered in books and satire. The brain features in science fiction, with themes such as brain transplants and cyborgs (beings with features like partly artificial brains). The 1942 science fiction novel Donovan 's Brain (adapted three times for cinema) tells the tale of an isolated brain kept alive in vitro, gradually taken over by a malign intelligence. The Edwin Smith Papyrus, an ancient Egyptian medical treatise written in the 17th century BC, contains the earliest recorded reference to the brain. The hieroglyph for brain occurs eight times in this papyrus, which describes the symptoms, diagnosis, and prognosis of two traumatic injuries to the head. The papyrus mentions the external surface of the brain, the effects of injury (including seizures and aphasia), the meninges, and cerebrospinal fluid. In the fifth century BC, Alcmaeon of Croton in Magna Graecia first considered the brain to be the seat of the mind. Also in the fifth century BC in Athens, Hippocrates believed the brain to be the seat of intelligence. Aristotle, in his biology, initially believed the heart to be the seat of intelligence and saw the brain as a cooling mechanism for the blood. He reasoned that humans are more rational than the beasts because, among other reasons, they have a larger brain to cool their hot - bloodedness. Aristotle did describe the meninges and distinguished between the cerebrum and cerebellum. Herophilus of Chalcedon in the fourth and third centuries BC distinguished the cerebrum and the cerebellum, and provided the first clear description of the ventricles; with Erasistratus of Ceos, he experimented on living brains. Their works are now mostly lost, and we know about their achievements mostly through secondary sources. Some of their discoveries had to be re-discovered a millennium after their deaths. The anatomist and physician Galen, in the second century AD during the time of the Roman Empire, dissected the brains of sheep, monkeys, dogs, and pigs. He concluded that, as the cerebellum was denser than the cerebrum, it must control the muscles, while, as the cerebrum was soft, it must be where the senses were processed. Galen further theorized that the brain functioned by movement of animal spirits through the ventricles. In 1316, Mondino de Luzzi 's Anathomia began the modern study of brain anatomy. Niccolò Massa discovered in 1536 that the ventricles were filled with fluid. Archiangelo Piccolomini of Rome was the first to distinguish between the cerebrum and cerebral cortex. In 1543 Andreas Vesalius published his seven - volume De humani corporis fabrica. The seventh book covered the brain and eye, with detailed images of the ventricles, cranial nerves, pituitary gland, meninges, structures of the eye, the vascular supply to the brain and spinal cord, and an image of the peripheral nerves. Vesalius rejected the common belief that the ventricles were responsible for brain function, arguing that many animals have a similar ventricular system to humans, but no true intelligence. René Descartes proposed the theory of dualism to tackle the issue of the brain 's relation to the mind. 
After describing the brain mechanisms responsible for circulating cerebrospinal fluid, he suggested that the pineal gland was where the mind interacted with the body. This dualism likely provided impetus for later anatomists to further explore the relationship between the anatomical and functional aspects of brain anatomy. Thomas Willis is considered a second pioneer in the study of neurology and brain science. In 1664 he published Cerebri Anatome (Latin: Anatomy of the brain), followed by Cerebral Pathology in 1667. In these he described the structure of the cerebellum, the ventricles, the cerebral hemispheres, the brainstem, and the cranial nerves, studied the brain 's blood supply, and proposed functions associated with different areas of the brain. The circle of Willis was named after his investigations into the blood supply of the brain, and he was the first to use the word "neurology. '' Willis removed the brain from the body when examining it, and rejected the commonly held view that the cortex only consisted of blood vessels and the view of the last two millennia that the cortex was only incidentally important. In the 19th century, Emil du Bois - Reymond and Hermann von Helmholtz, following the work of their teacher Johannes Peter Müller, demonstrated the electrical impulses that pass along nerves and, contrary to Müller 's view, showed that such impulses could be observed and measured. Richard Caton in 1875 demonstrated electrical impulses in the cerebral hemispheres of rabbits and monkeys. In the 1820s, Jean Pierre Flourens pioneered the experimental method of damaging specific parts of animal brains, describing the effects on movement and behaviour. Studies of the brain became more sophisticated with the use of the microscope and the development of a silver staining method by Camillo Golgi during the 1880s. This method was able to show the intricate structures of single neurons. It was used by Santiago Ramón y Cajal and led to the formation of the neuron doctrine, the then - revolutionary hypothesis that the neuron is the functional unit of the brain. Cajal used microscopy to uncover many cell types, and proposed functions for the cells he saw. For this, Golgi and Cajal are considered the founders of twentieth century neuroscience, both sharing the Nobel Prize in 1906 for their studies and discoveries in this field. Charles Sherrington published his influential 1906 work The Integrative Action of the Nervous System, examining the function of reflexes, evolutionary development of the nervous system, functional specialisation of the brain, and layout and cellular function of the central nervous system. John Farquhar Fulton founded the Journal of Neurophysiology and in 1938 published the first comprehensive textbook on the physiology of the nervous system. Neuroscience during the twentieth century began to be recognized as a distinct unified academic discipline, with David Rioch, Francis O. Schmitt, and Stephen Kuffler playing critical roles in establishing the field. Rioch originated the integration of basic anatomical and physiological research with clinical psychiatry at the Walter Reed Army Institute of Research, starting in the 1950s. During the same period, Schmitt established the Neuroscience Research Program, an inter-university and international organisation, bringing together biology, medicine, and the psychological and behavioural sciences. The word neuroscience itself arises from this program. 
Paul Broca associated regions of the brain with specific functions, in particular language in Broca 's area, following work on brain - damaged patients. John Hughlings Jackson described the function of the motor cortex by watching the progression of epileptic seizures through the body. Carl Wernicke described a region associated with language comprehension and production. Korbinian Brodmann divided regions of the brain based on the appearance of cells. By 1950, Sherrington, Papez, and MacLean had identified many of the brainstem and limbic system functions. The capacity of the brain to re-organise and change with age, and a recognised critical developmental period, were attributed to neuroplasticity, pioneered by Margaret Kennard, who experimented on monkeys during the 1930s and 1940s. Harvey Cushing (1869 -- 1939) is recognised as the first proficient brain surgeon in the world. In 1937, Walter Dandy began the practice of vascular neurosurgery by performing the first surgical clipping of an intracranial aneurysm. The human brain has many properties that are common to all vertebrate brains, and shares many features common to all mammalian brains, most notably a six - layered cerebral cortex and a set of associated structures, including the hippocampus and amygdala. The cortex is proportionally larger in larger mammals and in humans than in many other animals. Humans have more association cortex, and sensory and motor parts, than other mammals such as the rat and the cat. As a primate brain, the human brain has a much larger cerebral cortex, in proportion to body size, than most mammals, and a highly developed visual system. As a hominid brain, the human brain is substantially enlarged even in comparison to the brain of a typical monkey. The sequence of human evolution from Australopithecus (four million years ago) to Homo sapiens (modern man) was marked by a steady increase in brain size. As brain size increased, this altered the size and shape of the skull, from about 600 cm³ in Homo habilis to an average of about 1520 cm³ in Homo neanderthalensis. Differences in DNA, gene expression, and gene -- environment interactions help explain the differences between the function of the human brain and that of other primates. 
when does the new captain america movie come out
List of Marvel Cinematic Universe films - wikipedia The Marvel Cinematic Universe (MCU) films are an American series of superhero films, based on characters that appear in publications by Marvel Comics. The films have been in production since 2007, and in that time Marvel Studios has produced 19 films, with 13 more in various stages of production. The series collectively has grossed over $16.8 billion at the global box office, making it the highest - grossing film franchise of all time. Kevin Feige has produced every film in the Marvel Cinematic Universe. Avi Arad served as a producer on the two 2008 releases, Gale Anne Hurd also produced The Incredible Hulk and Amy Pascal produced the Spider - Man films. The films are written and directed by a variety of individuals and feature large, often ensemble, casts. Many of the actors, including Robert Downey Jr., Chris Evans, Chris Hemsworth, Samuel L. Jackson, and Scarlett Johansson signed contracts to star in numerous films. The first film in the Marvel Cinematic Universe was Iron Man (2008), which was distributed by Paramount Pictures. Paramount also distributed Iron Man 2 (2010), Thor (2011) and Captain America: The First Avenger (2011), while Universal Pictures distributed The Incredible Hulk (2008). Walt Disney Studios Motion Pictures began distributing the films with the 2012 crossover film The Avengers, which concluded Phase One of the franchise. Phase Two includes Iron Man 3 (2013), Thor: The Dark World (2013), Captain America: The Winter Soldier (2014), Guardians of the Galaxy (2014), Avengers: Age of Ultron (2015), and Ant - Man (2015). Captain America: Civil War (2016) is the first film in the franchise 's Phase Three, and is followed by Doctor Strange (2016), Guardians of the Galaxy Vol. 2 (2017), Spider - Man: Homecoming (2017), Thor: Ragnarok (2017), Black Panther (2018), and Avengers: Infinity War (2018), with Ant - Man and the Wasp (2018), Captain Marvel (2019), and an untitled Avengers film (2019) still scheduled for the phase. Sony Pictures distributes the Spider - Man films, which they continue to own, finance, and have final creative control over. A sequel to Spider - Man: Homecoming has been scheduled for 2019, alongside Guardians of the Galaxy Vol. 3 in 2020, with an additional two untitled films also scheduled for 2020, three untitled films scheduled for 2021, and three untitled films scheduled for 2022. Feige has indicated that Marvel may abandon the phase grouping after the conclusion of Phase Three. Billionaire industrialist Tony Stark builds himself a suit of armor after he is taken captive by a terrorist organization. Free from his captors, he decides to upgrade and don his armor in order to hunt down weapons that were sold under the table. In April 2006, Marvel hired Jon Favreau to direct Iron Man, with the writing teams of Art Marcum and Matt Holloway and Mark Fergus and Hawk Ostby writing competing scripts. Favreau consolidated both into one script, which was then polished by John August. Robert Downey, Jr. was cast in the title role in September 2006, after growing out a goatee and working out to convince the filmmakers he was right for the part. Principal photography began on March 12, 2007, with the first few weeks spent on Stark 's captivity in Afghanistan, which was filmed in Inyo County, California. Production also occurred on the former Hughes Company soundstages in Playa Vista, Los Angeles, California, with additional filming at Edwards Air Force Base and Caesars Palace in Las Vegas, Nevada. 
Iron Man premiered at the Greater Union theater in George Street, Sydney, on April 14, 2008, and was released internationally on April 30, and in North America on May 2. The film ends with a post-credits scene featuring Samuel L. Jackson as Nick Fury, who approaches Stark regarding the "Avenger Initiative ''. Favreau said that he included the scene as "a little tip of the hat for the fans... a way to sort of tee up The Avengers. '' Jackson was only on set for a day, with a skeletal crew to avoid the news of his cameo leaking. Captain America 's shield is also visible in the background of a scene; it was added by an ILM artist as a joke, and Favreau decided to leave it in the film. After being exposed to gamma radiation that causes him to transform into the monstrous Hulk, scientist Bruce Banner goes on the run and isolates himself from his love, Betty Ross. Hunted by the military, Banner seeks to cure himself and prevent his condition from being weaponized. In January 2006, Marvel reclaimed the film rights for the Hulk character from Universal Pictures after Universal failed to meet a deadline to develop a sequel to director Ang Lee 's 2003 film Hulk. Universal retained distribution rights for future Hulk films. Instead of moving forward with a sequel, Marvel hired Louis Leterrier to direct The Incredible Hulk, a reboot. Leterrier initially turned down the job out of respect for Lee, but later reconsidered and signed on. The script was written by Zak Penn, who drafted a treatment for the 2003 film. In April 2006, Edward Norton entered negotiations to portray Bruce Banner and rewrite Penn 's script, although Penn received sole credit for the screenplay. Production began on July 9, 2007 and filming primarily took place in Toronto, with additional filming in New York City and Rio de Janeiro. The film premiered at the Gibson Amphitheatre on June 8, 2008, and was released on June 13. The film takes place simultaneously with the events of Iron Man 2 and Thor, the former of which is set six months after the events of Iron Man. Downey briefly reprised his role from Iron Man as Tony Stark in a cameo appearance at the end of the film. Downey said that the filmmakers "were just cross-pollinating our superheroes. It happens to be a scene where I basically approach (actor William Hurt 's character General Ross), and we may be considering going into some sort of limited partnership together. The great thing is he -- and I do n't want to give too much away -- but he 's in disrepair at the time I find him. It was really fun seeing him play this really powerful character who 's half in the bag. '' In addition, Captain America is briefly seen frozen in ice in an alternate opening of the film, included in the DVD release. After Tony Stark reveals himself to be Iron Man, the U.S. government demands he hand over his technology. Meanwhile, a rival industrialist and a Russian scientist conspire to use his own technology against him. Immediately following the successful release of Iron Man in May 2008, Marvel Studios announced it was developing a sequel, Iron Man 2. Favreau returned as director and Justin Theroux was hired to write the screenplay, which would be based on an original story by Favreau and Downey. In October 2008, Downey signed a new four - picture deal, that retroactively included the first film, to reprise his role and Don Cheadle was hired to replace Terrence Howard as James Rhodes. 
Jackson signed on to reprise his role as Nick Fury from the Iron Man post-credits sequence in up to nine films, and Scarlett Johansson was cast as the Black Widow, as part of a multi-film commitment. Principal photography began April 6, 2009, at the Pasadena Masonic Temple in Pasadena, California. The majority of filming took place at Raleigh Studios in Manhattan Beach, California. Other locations included Edwards Air Force Base, Monaco, and the Sepulveda Dam. Iron Man 2 premiered at the El Capitan Theatre in Los Angeles, California on April 26, 2010, and was released internationally between April 28 and May 7 before releasing in North America on May 7. The film is set six months after the events of Iron Man, and takes place simultaneously with the events of The Incredible Hulk and Thor. The filmmakers continued to refer to other Marvel films by again including Captain America 's shield. Favreau explained, "We introduced Captain America 's shield briefly in one shot in the last film. So now it really was in his room, so we had to figure out how to deal with the reality that the shield was in his workshop. '' A scene toward the end of Iron Man 2 in a S.H.I.E.L.D. safe house contains several Easter eggs, ranging from footage from The Incredible Hulk displayed on a monitor to pointers on a map indicating several locales related to other Marvel films, including one pointing toward a region of Africa in reference to the Black Panther. A young Peter Parker appears as the child wearing an Iron Man mask whom Stark saves from a drone; the appearance was confirmed in June 2017 by Spider - Man actor Tom Holland, Kevin Feige and Spider - Man: Homecoming director Jon Watts. The film 's post-credits scene showed the discovery of Thor 's hammer in a crater. Thor, crown prince of Asgard, is banished to Earth and stripped of his powers after he reignites a dormant war. As his brother, Loki, plots to take the throne for himself, Thor must prove himself worthy and reclaim his hammer Mjölnir. Mark Protosevich was hired to develop a script for Thor in April 2006, after the rights were acquired from Sony Pictures. In August 2007 Marvel hired Matthew Vaughn to direct the film, however he exited the project in May 2008. In September 2008, Kenneth Branagh entered into negotiations to replace Vaughn. In May 2009, Chris Hemsworth was in negotiations to portray the titular character, and Tom Hiddleston was set to play his brother, Loki. Both actors were contracted to star in several films. Marvel hired the writing team of Ashley Edward Miller and Zack Stentz to write a new script for the film, which was then rewritten by Don Payne. Production began on January 11, 2010 in Los Angeles, California, before moving to Galisteo, New Mexico in March. Thor had its world premiere on April 17, 2011 at the Event Cinemas theatre in George Street, Sydney and a U.S. premiere on May 2 at the El Capitan Theatre in Los Angeles, California. The film was released internationally from April 21 to 30, and on May 6 in North America. The film takes place simultaneously with the events of The Incredible Hulk and Iron Man 2, the latter of which is set six months after the events of Iron Man. Clark Gregg, who appeared in Iron Man and Iron Man 2 as S.H.I.E.L.D. agent Phil Coulson, reprised the role in Thor. About his role in Thor he stated, "Agent Coulson was one of the guys who was n't really in the comic books, and he (had) a very kind of small role in Iron Man. 
And I was just very lucky that they chose to expand that character and (chose) to put him more into the universe of it. '' After signing on to appear as Clint Barton / Hawkeye in The Avengers, Jeremy Renner made a cameo appearance as the character during a scene in Thor. Branagh said that they "were always going to have a guy in a basket above the action where Thor breaks in the S.H.I.E.L.D. camp '', and that he was thrilled when the producers told him they wanted to use Renner 's Hawkeye for that role. The film ends with a post-credits scene featuring Loki, watching as Erik Selvig and Nick Fury discuss the Tesseract. The scene was directed by Joss Whedon, who directed The Avengers. Stellan Skarsgård, who played Selvig, said the scene was not included when he first read the screenplay for Thor, and that he was sent pages for the scene after agreeing to appear in The Avengers. In 1942, Steve Rogers is deemed physically unfit to enlist in the U.S. Army and fight the Nazis in World War II. Recruited for a secret military operation, he is physically transformed into a super-soldier dubbed Captain America and must battle the Red Skull, head of a Nazi science division known as Hydra. In April 2006, Marvel hired David Self to write the script for a Captain America film. Joe Johnston signed on to direct in November 2008, and Christopher Markus & Stephen McFeely were hired to rewrite the script. In March 2010, Chris Evans was cast as Captain America and Hugo Weaving was cast as the Red Skull. Production began on June 28, 2010 in the United Kingdom, with locations in London, Caerwent, Manchester and Liverpool. The film premiered on July 19, 2011, at the El Capitan Theatre in Los Angeles, California, and was released in North America on July 22, and in international markets starting July 27. The Tesseract from the Thor post-credits scene appears as a MacGuffin in Captain America: The First Avenger. In the film, Dominic Cooper portrays a young Howard Stark, the father of Tony Stark, who hosts an early version of the Stark Expo, the fair Tony hosts in Iron Man 2. The final scene of the film includes a brief appearance by Jackson 's Nick Fury followed by a teaser trailer for Marvel 's The Avengers after the credits. Nick Fury, the director of S.H.I.E.L.D., gathers the superheroes Iron Man, Thor, Captain America, the Hulk, Black Widow and Hawkeye to fight Thor 's brother Loki, who plots to subjugate the Earth. Zak Penn, who wrote The Incredible Hulk, was hired to write a script for The Avengers in June 2007. In April 2010, Joss Whedon closed a deal to direct the film, and to rework Penn 's script. Marvel announced that Edward Norton would not be reprising the role of Bruce Banner / Hulk, and in July 2010, Mark Ruffalo was cast in his place. Downey, Evans, Hemsworth, Johansson, Renner, Hiddleston and Jackson reprised their respective roles from previous films. Principal photography began in April 2011 in Albuquerque, New Mexico, before moving to Cleveland, Ohio in August, and New York City in September. The premiere was held on April 11, 2012 at the El Capitan Theatre in Los Angeles, California, and the film was released in North America on May 4. Gwyneth Paltrow, who portrayed Pepper Potts in Iron Man and Iron Man 2, was included in the film at Downey 's insistence. 
Prior to this, Whedon had not intended the film to include supporting characters from the heroes ' individual films, commenting, "You need to separate the characters from their support systems in order to create the isolation you need for a team. '' Avi Arad said that Sony Pictures and Disney discussed incorporating the OsCorp Tower from The Amazing Spider - Man into the climax of The Avengers, but Feige said that "the deal was never close to happening. '' The supervillain Thanos appears in a mid-credits scene, portrayed by Damion Poitier. Tony Stark faces a powerful enemy, the Mandarin, who attacks and destroys his mansion. Left to his own devices and battling posttraumatic stress disorder, Stark struggles to get to the bottom of a series of mysterious explosions. In late 2010, Marvel and Disney announced that they were developing a third Iron Man film. In February 2011, Marvel hired Shane Black to direct Iron Man 3. Black co-wrote the film 's script with Drew Pearce. Downey, Paltrow, and Cheadle reprised their roles from Iron Man 2, while Guy Pearce and Ben Kingsley joined the cast as Aldrich Killian and Trevor Slattery, respectively. Filming began in May 2012, in North Carolina. Additional filming took place in southern Florida, China, and Los Angeles. Iron Man 3 premiered at Le Grand Rex in Paris, France on April 14, 2013 and at the El Capitan Theatre in Los Angeles, California on April 24. The film was released internationally on April 25, and in the United States on May 3. The film is set in December 2013, after the events of The Avengers. In the film Tony Stark experiences PTSD - like symptoms following the Battle of New York in The Avengers. Black explained, "that 's an anxiety response to feeling inferior to The Avengers, but also to being humbled by sights he can not possibly begin to understand or reconcile with the realities he 's used to... There 's a line in the movie about ' ever since that big guy with the hammer fell out of the sky, the rules have changed '. That 's what we 're dealing with here. '' Bruce Banner appears in a post-credits scene, with Ruffalo reprising the role. About the scene, Ruffalo said "They were about to wrap the movie and I saw Robert (Downey, Jr.) at the Academy Awards... and he said, ' What do you think about coming and doing a day? ' I said, ' Are you kidding me? Bang, let 's do it! ' We sort of spitballed that scene, then I came in and we shot for a couple of hours and laughed. '' Thor reunites with astrophysicist Jane Foster as a series of portals, linking worlds at random, begin to appear. He discovers that Malekith and his army of Dark Elves have returned after thousands of years, and they seek a powerful weapon known as the Aether. Thor must join forces with his now - imprisoned brother Loki to stop them. A sequel to Thor was first announced in June 2011, with Hemsworth reprising his role as Thor. Hiddleston confirmed he would return as Loki in September, and Alan Taylor signed on to direct the film in December. The film 's title was announced as Thor: The Dark World in July 2012 at the San Diego Comic - Con International, and Christopher Eccleston was cast as Malekith a month later. Production started in September 2012 in Bourne Wood, Surrey, with additional filming taking place in Iceland and London. The film premiered at the Odeon Leicester Square in London on October 22, 2013. It was internationally released on October 30, 2013 and on November 8, 2013 in North America. The film is set one year after the events of The Avengers. 
Evans briefly makes a cameo appearance in the film as Captain America when Loki shapeshifts into him while mocking Thor. Hiddleston wore the Captain America costume while standing in for Evans, before Evans came to shoot the scene. Hiddleston said, "I did an impression of Loki in the Captain America costume, and then they showed Chris (Evans) my performance on tape. It 's him doing an impression of me doing an impression of him. And it 's brilliant. '' James Gunn, the director of Guardians of the Galaxy, directed the mid-credits scene which featured the Collector, played by Benicio del Toro. Asked about shooting the scene, Gunn said, "I got the script that morning, and I did it in two hours at the end of a day of second unit shooting (for Guardians of the Galaxy). '' Steve Rogers, now working with S.H.I.E.L.D., teams up with Natasha Romanoff / Black Widow and Sam Wilson / Falcon to expose a deep conspiracy which involves a mysterious assassin known only as the Winter Soldier. A sequel to 2011 's Captain America: The First Avenger was announced in April 2012. Anthony and Joe Russo were hired to direct in June, and in July it was officially titled Captain America: The Winter Soldier. Evans and Jackson were set to reprise their respective roles as Captain America and Nick Fury, and Johansson would again play the Black Widow. Sebastian Stan, who portrayed Bucky Barnes in Captain America: The First Avenger, returned as the Winter Soldier. Production started in April 2013 in Manhattan Beach, California, and filming also took place in Washington, D.C. and Cleveland, Ohio. The film premiered in Los Angeles on March 13, 2014. Captain America: The Winter Soldier was released internationally on March 26 and in North America on April 4. The film is set two years after the events of The Avengers. Stephen Strange, the alter - ego of the Marvel superhero Doctor Strange, is mentioned by name in the film by the character Jasper Sitwell. A remodeled Stark Tower from The Avengers, now known as Avengers Tower, also makes an appearance in the film. Whedon directed a post-credits scene featuring Baron Wolfgang von Strucker (Thomas Kretschmann), List (Henry Goodman), Quicksilver (Aaron Taylor - Johnson), and the Scarlet Witch (Elizabeth Olsen), who appear in Avengers: Age of Ultron. The revelation in the film that S.H.I.E.L.D. had been infiltrated by Hydra informed the final six episodes of the first season of Agents of S.H.I.E.L.D., a television series set in the MCU. Peter Quill / Star - Lord and a group of misfits, including Gamora, Rocket, Drax the Destroyer and Groot, fight to keep a powerful orb from the clutches of the villainous Ronan. Nicole Perlman began writing a screenplay in 2009. Marvel Studios announced it was developing a Guardians of the Galaxy film in July 2012. The film is directed by James Gunn, based on his and Perlman 's screenplay. In February 2013, Chris Pratt was cast in the lead role, as Peter Quill / Star - Lord. The film was shot at Shepperton Studios and in London from July to October 2013, and post-production work was completed on July 7, 2014. The film premiered on July 21, 2014 in Hollywood. Guardians of the Galaxy was released in the United Kingdom on July 31, 2014, and in North America on August 1. The film is set in 2014. Josh Brolin provides the voice and performance capture for Thanos, the supervillain who appeared in The Avengers mid-credits scene. Gunn noted that the film would be connected to Avengers: Infinity War. 
Several other objects of significance appear in the Collector 's museum, including a Chitauri from The Avengers and a Dark Elf from Thor: The Dark World, among other characters. About their appearances Gunn said, "There 's a lot of stuff in the Collector 's Museum. And for me, it was mostly just really fun. As a Marvel fan, giving the actual fans something that they can freeze frame on their Blu - Ray at home and just kind of pick out everything that 's in there. So there are, I mean, seriously all those boxes have something interesting in them, so it 's pretty fun. '' Ronan 's race, the Kree, were first introduced in the Agents of S.H.I.E.L.D. episode "T.A.H.I.T.I. ''. Captain America, Iron Man, Thor, the Hulk, Black Widow, and Hawkeye must work together as the Avengers to defeat Ultron, a technological enemy bent on human extinction, while encountering the powerful twins Pietro and Wanda Maximoff, as well as the new entity Vision. A sequel to The Avengers was announced by Disney in May 2012, shortly after the first film 's release. In August 2012, Joss Whedon was signed to return as writer and director. In June 2013, Downey signed a deal to reprise the role of Iron Man for the second and third Avengers films. On July 20, 2013, at San Diego Comic - Con International, Whedon announced that the subtitle of the film would be Age of Ultron. In August 2013, James Spader was announced to portray Ultron. Second unit filming began on February 11, 2014 in Johannesburg, South Africa. Principal photography began in March 2014 at Shepperton Studios in Surrey, England, with additional footage filmed at Fort Bard and various other locations in the Aosta Valley region of Italy, and Seoul, South Korea. Filming was completed on August 6, 2014. Avengers: Age of Ultron had its world premiere in Los Angeles on April 13, 2015, and was released internationally beginning April 22, and on May 1 in North America. The film confirms that the gem in Loki 's scepter is an Infinity Stone, specifically the Mind Stone, and Brolin reappears as Thanos in the mid-credit scene wielding an Infinity Gauntlet. It also features references to Vibranium and Wakanda, both connections to Black Panther, introducing both to the universe ahead of Black Panther 's solo film. Additionally, Andy Serkis portrays Ulysses Klaue in the film, traditionally a Black Panther antagonist. Thief Scott Lang must aid his mentor Dr. Hank Pym in safeguarding the mystery of the Ant - Man technology, which allows its user to decrease in size but increase in strength, from various menaces and plot a heist to defend the Earth. Ant - Man is directed by Peyton Reed with a screenplay written by Edgar Wright & Joe Cornish and Adam McKay & Paul Rudd, from a story by Wright & Cornish, that includes both Scott Lang and Hank Pym. Edgar Wright was initially slated to direct and write the film, but left the project in May 2014 due to creative differences. In January 2013, Feige stated that Ant - Man would be the first film in Phase Three of the Marvel Cinematic Universe. However, in October 2014, it was revealed that the film would be the last film of Phase Two. Pre-production started in October 2013, and principal photography took place from August to December 2014, in San Francisco, Fayette County, Georgia at Pinewood Atlanta, and Downtown Atlanta. In December 2013, Rudd was cast as Ant - Man, followed in January 2014 with the casting of Michael Douglas as Pym and the confirmation of Rudd as Lang. 
Ant - Man had its world premiere in Los Angeles on June 29, 2015, and was released in France on July 14, and in North America on July 17. The film is set several months after the events of Avengers: Age of Ultron. Scott Lang attempts to infiltrate the new Avengers headquarters in Upstate New York featured in Age of Ultron, and confronts Sam Wilson / Falcon, played by Anthony Mackie. McKay and Rudd decided to add Falcon to Ant - Man after watching Captain America: The Winter Soldier. The Russo brothers filmed the post-credit scene, which was footage from Captain America: Civil War, and features Mackie as Falcon, Chris Evans as Steve Rogers / Captain America, and Sebastian Stan as Bucky Barnes / Winter Soldier. The Avengers fracture into two opposing teams, one led by Captain America and another by Iron Man, after extensive collateral damage prompts politicians to pass an act regulating superhuman activity with government oversight and accountability; at the same time, the team faces a new enemy, Helmut Zemo, who seeks revenge upon the Avengers. By January 2014, Anthony and Joe Russo had signed on to return to direct a third Captain America installment, which they confirmed in March 2014, with Chris Evans returning as Captain America, Feige returning to produce, and Christopher Markus & Stephen McFeely writing the screenplay. In October 2014, the title was officially announced as Captain America: Civil War along with the reveal that Downey would appear in the film as Tony Stark / Iron Man. The film is an adaptation of the "Civil War '' storyline from the comics. It is also the first film of Phase Three. Filming began in April 2015 at Pinewood Atlanta, and concluded in August 2015. Captain America: Civil War had its premiere in Hollywood on April 12, 2016, was released internationally beginning April 27, and was released on May 6 in North America. The film is set one year after the events of Avengers: Age of Ultron. Captain America: Civil War introduces Tom Holland as Peter Parker / Spider - Man and Chadwick Boseman as T'Challa / Black Panther to the MCU, who appear in solo films in 2017 and 2018, respectively. William Hurt reprises his role as Thunderbolt Ross from The Incredible Hulk, and is now the US Secretary of State. For the mid-credits scene, in which Black Panther offers Captain America and Bucky Barnes asylum in Wakanda, Joe and Anthony Russo received input from Black Panther director Ryan Coogler on the look and design of Wakanda. After Stephen Strange, the world 's top neurosurgeon, is involved in a car accident that ruins his career, he sets out on a journey of healing, where he encounters the Ancient One, who teaches Strange to use the Mystic Arts and to defend the Earth from mystical threats. In June 2010, Thomas Dean Donnelly and Joshua Oppenheimer were hired to write the screenplay for a film starring the character Doctor Strange. In January 2013, Kevin Feige confirmed that Doctor Strange would be a part of Marvel 's Phase Three slate of films. In June 2014, Scott Derrickson was hired to direct. In December 2014, Benedict Cumberbatch was cast in the eponymous role, and Jon Spaihts was confirmed to rewrite the script. In December 2015, C. Robert Cargill revealed he was a co-writer on the film, and the following April, revealed that Derrickson also wrote the script. Pre-production began in June 2014, with filming beginning in November 2015 in Nepal, before moving to Longcross Studios in the UK later in the month. Filming concluded in New York City in April 2016. 
Doctor Strange had its premiere in Hong Kong on October 13, 2016, and was released in the United Kingdom on October 25, 2016, and on November 4 in the United States. Derrickson stated that the events of the film take "roughly '' a year, ending "up to date with the rest of the MCU ''. Doctor Strange introduces the Eye of Agamotto, a mystical relic that can manipulate time and is revealed to be an Infinity Stone at the end of the film, specifically the Time Stone. The film 's mid-credits scene features a cameo appearance by Hemsworth as Thor, meeting with Strange, which was footage from Thor: Ragnarok. The scene was directed by Ragnarok director Taika Waititi. The Guardians of the Galaxy travel throughout the cosmos and struggle to keep their newfound family together while helping Peter Quill learn more about his true parentage and facing against new enemies. In July 2014, Guardians of the Galaxy co-writer Nicole Perlman confirmed that Gunn would return to write and direct the sequel. Chris Pratt returns for the sequel as Peter Quill / Star - Lord, along with the other Guardians from the first film as well as additional cast members. They are joined by Pom Klementieff as Mantis, and Kurt Russell as Ego. In June 2015, the film 's title was revealed as Guardians of the Galaxy Vol. 2. Filming began in February 2016 at Pinewood Atlanta, and concluded in June 2016. Guardians of the Galaxy Vol. 2 premiered in Tokyo on April 10, 2017, and was released on May 5, 2017. The film is set two - to - three months after the events of Guardians of the Galaxy, in 2014. One of the film 's post-credit sequences hints at the introduction of Adam Warlock, after Gunn originally intended for Warlock to make a full appearance in Vol. 2. He noted that Warlock could appear in future Guardians films, and is considered "a pretty important part '' of the cosmic side of the Marvel Cinematic Universe. The Grandmaster, played by Jeff Goldblum, is seen dancing in the end credits, before his appearance in Thor: Ragnarok. Peter Parker tries to balance being the hero Spider - Man with his high school life under guidance of Tony Stark as he deals with the threat of the Vulture. On February 9, 2015, Sony Pictures and Marvel announced that Sony would be releasing a Spider - Man film co-produced by Marvel Studios president Feige and Amy Pascal, with Sony Pictures continuing to own, finance, distribute, and have final creative control of the Spider - Man films. In April 2015, Feige confirmed the character would be Peter Parker and added that Marvel had been working to add Spider - Man to the MCU since at least October 2014, when they announced their full slate of Phase Three films, saying, "Marvel does n't announce anything officially until it 's set in stone. So we went forward with that Plan A in October, with the Plan B being, if (the deal) were to happen with Sony, how it would all shift. We 've been thinking about (the Spider - Man film) as long as we 've been thinking about Phase Three. '' In June 2015, Tom Holland was cast in the role of Spider - Man and Jon Watts was hired to direct the film, and the next month, John Francis Daley & Jonathan Goldstein were hired to write the screenplay. Additional screenwriters include Watts & Christopher Ford and Chris McKenna & Erik Sommers. In April 2016, the title was revealed to be Spider - Man: Homecoming. Production began in June 2016 at Pinewood Atlanta, and concluded in October 2016. 
Spider - Man: Homecoming premiered on June 28, 2017 in Hollywood, and was released in the United Kingdom on July 5, and the United States on July 7, 2017. The film is set several months after the events of Captain America: Civil War, which is eight years after the events of The Avengers. In April 2016, Feige confirmed that characters from previous MCU films would appear in the film, with Robert Downey Jr. confirmed to reprise his role as Tony Stark / Iron Man shortly thereafter. Favreau, Paltrow, and Evans also reprise their roles as Happy Hogan, Pepper Potts, and Steve Rogers / Captain America, respectively. The clean - up crew Damage Control appear in the film (after previously being referenced in Iron Man and Agents of S.H.I.E.L.D.) ahead of an intended television series about them. Various weaponry and artifacts from previous films are referenced throughout the film that Toomes and his crew repurpose for their weapons. In Parker 's high school, one of his classes has a lesson about the Sokovia Accords, and portraits of Bruce Banner, Howard Stark and Abraham Erskine are seen within the school. Thor, trapped on another world without Mjölnir, must survive a gladiatorial duel against the Hulk and return to Asgard in time to stop the villainous Hela and the impending Ragnarök. In January 2014, Marvel announced that a third Thor film was in development, with Craig Kyle and Christopher Yost writing the screenplay, and was officially announced as Thor: Ragnarok in October 2014. By October 2015, Taika Waititi entered in negotiations to direct Thor: Ragnarok. In December 2015, Stephany Folsom was hired to rewrite the script. A year later, in January 2017, it was revealed that Eric Pearson wrote the screenplay, with Kyle, Yost and Folsom receiving story credit. Pearson, Kyle and Yost would ultimately receive screenwriting credit for the film. Hemsworth, Hiddleston, Idris Elba and Anthony Hopkins reprise their roles as Thor, Loki, Heimdall and Odin, respectively, and are joined by Cate Blanchett as Hela. Production began in July 2016 in Australia at Village Roadshow Studios, and wrapped in late October 2016. Thor: Ragnarok premiered in Los Angeles on October 10, 2017, began its international release on October 24, 2017 in the United Kingdom, and was released on November 3, 2017 in the United States. The film is set four years after the events of Thor: The Dark World, two years after the events of Avengers: Age of Ultron, and around the same time period as Captain America: Civil War and Spider - Man: Homecoming. Producer Brad Winderbaum noted that "Things happen on top of each other now in Phase Three. They 're not as interlocked as they were in Phase One. '' Mark Ruffalo and Benedict Cumberbatch appear in the film as Bruce Banner / Hulk and Doctor Stephen Strange, respectively. The film reveals that the Infinity Gauntlet first seen in Odin 's vault in Thor was a fake, while also introducing Thanos ' ship Sanctuary II in a post-credits scene. T'Challa returns home as sovereign of the nation of Wakanda only to find his dual role of king and protector challenged by a long - time adversary in a conflict that has global consequences. Documentary filmmaker Mark Bailey was hired to write a script for Black Panther in January 2011. In October 2014, the film was announced and Chadwick Boseman was revealed to be portraying T'Challa / Black Panther. In January 2016, Ryan Coogler was announced as director, and the following month, Joe Robert Cole was confirmed as the film 's screenwriter. 
In April 2016, Feige confirmed that Coogler was a co-screenwriter. Filming began in January 2017 at EUE / Screen Gems Studios in Atlanta, and concluded in April 2017. Black Panther premiered in Los Angeles on January 29, 2018, began its international release on February 13, 2018, and was released in the United States on February 16, 2018. The film also had a "cross-nation release '' in Africa, a first for a Disney film. The film is set one week after the events of Captain America: Civil War. Florence Kasumba, Serkis, Martin Freeman, and John Kani reprise their roles as Ayo, Ulysses Klaue, Everett K. Ross and T'Chaka respectively from previous MCU films. The film 's post-credits scene features a cameo appearance by Sebastian Stan, reprising his role as Bucky Barnes. The Avengers join forces with the Guardians of the Galaxy to try to stop Thanos from collecting all of the Infinity Stones. The film was announced in October 2014 as Avengers: Infinity War -- Part 1. In April 2015, Marvel announced that Anthony and Joe Russo would direct the film and in May, that Christopher Markus & Stephen McFeely would write the screenplay. In July 2016, Marvel revealed the title would be shortened to simply Avengers: Infinity War. Brolin reprises his role as Thanos, and is part of an ensemble cast featuring many actors who have appeared in other MCU films. Filming for Infinity War began in January 2017 in Atlanta, and lasted until July 2017. Additional filming also took place in Scotland. Avengers: Infinity War premiered in Los Angeles on April 23, 2018. It was released worldwide on April 27, 2018, with a few debuts beginning as early as April 25 in a handful of countries. The film is set two years after the events of Captain America: Civil War. Marvel had been planting the seeds for Infinity War since its early films, by introducing the Infinity Stones as MacGuffins: the Tesseract / Space Stone in Captain America: The First Avenger, Loki 's Scepter / Mind Stone in The Avengers, the Aether / Reality Stone in Thor: The Dark World, the Orb / Power Stone in Guardians of the Galaxy, and the Eye of Agamotto / Time Stone in Doctor Strange. Additionally, Thanos is shown holding an empty Infinity Gauntlet in Avengers: Age of Ultron. The Red Skull from Captain America: The First Avenger appears in the film, played by Ross Marquand instead of Hugo Weaving, and is the keeper of the final Infinity Stone, the Soul Stone. The post-credits scene features Nick Fury transmitting a distress signal on a device, which has the insignia of Captain Marvel. In June 2012, Marvel announced a 10 - disc box set titled "Marvel Cinematic Universe: Phase One -- Avengers Assembled '', for release on September 25, 2012. The box set includes all six of the Phase One films -- Iron Man, The Incredible Hulk, Iron Man 2, Thor, Captain America: The First Avenger, and Marvel 's The Avengers -- on Blu - ray and Blu - ray 3D, in a replica of Nick Fury 's briefcase from The Avengers. In August 2012, luggage company Rimowa GmbH, who developed the briefcase for The Avengers, filed suit against Marvel Studios and Buena Vista Home Entertainment in U.S. federal court, complaining that "Marvel did not obtain any license or authorization from Rimowa to make replica copies of the cases for any purpose. '' The set was delayed to early 2013 for the packaging to be redesigned. The box set, with a redesigned case, was released on April 2, 2013. 
In addition, the box set included a featurette on the then - upcoming Phase Two films, showing footage and concept art, as well as previously unreleased deleted scenes from all of the Phase One films. In July 2015, Marvel announced a 13 - disc box set titled "Marvel Cinematic Universe: Phase Two Collection '', for release on December 8, 2015, exclusive to Amazon.com. The box set includes all six of the Phase Two films -- Iron Man 3, Thor: The Dark World, Captain America: The Winter Soldier, Guardians of the Galaxy, Avengers: Age of Ultron, and Ant - Man -- on Blu - ray, Blu - ray 3D and a digital copy, in a replica of the Orb from Guardians of the Galaxy, plus a bonus disc and exclusive memorabilia. Material on the bonus disc includes all of the Marvel One - Shots with commentary, deleted scenes and pre-production creative features for each of the films, featurettes on the making of the post-credit scenes for the films, and first looks at Captain America: Civil War, Doctor Strange, and Guardians of the Galaxy Vol. 2. Scott Lang tries to balance his home life with his responsibilities as Ant - Man, when Hope van Dyne and Hank Pym present him with a new mission, requiring him to team up with Van Dyne as the Wasp. Ant - Man and the Wasp was announced in October 2015. Peyton Reed confirmed that he would return to direct in November 2015, and that Paul Rudd and Evangeline Lilly would reprise their roles as Scott Lang / Ant - Man and Hope van Dyne / Wasp, respectively. In December 2015, Andrew Barrer, Gabriel Ferrari, and Rudd were confirmed to write the screenplay, with Chris McKenna and Erik Sommers revealed to have also contributed to the script in August 2017. In February 2017, Michael Douglas confirmed he would reprise his role as Hank Pym in the film. Filming began in August 2017 in Atlanta with additional filming in San Francisco, and ended in November 2017. Ant - Man and the Wasp is scheduled to be released on July 6, 2018. The film is set two years after the events of Captain America: Civil War and before the events of Avengers: Infinity War. Carol Danvers becomes Captain Marvel, one of the galaxy 's strongest heroes, after the Earth is caught in the center of an intergalactic conflict between two alien worlds. In May 2013, The Hollywood Reporter reported that Marvel had a working script for Ms. Marvel. In October 2014, Marvel announced the film would be titled Captain Marvel and feature Carol Danvers. In April 2015, Nicole Perlman and Meg LeFauve were announced as screenwriters. At the 2016 San Diego Comic - Con, Brie Larson was confirmed to play the role of Carol Danvers. In April 2017, Anna Boden and Ryan Fleck were hired to direct. That August, Geneva Robertson - Dworet was revealed to be taking over as the film 's screenwriter, replacing Perlman and LeFauve. Liz Flahive, Carly Mensch, Boden and Ryan Fleck are also credited as screenwriters on the film. Location filming occurred in January 2018, while principal photography began in March 2018 in Los Angeles, and is scheduled to last until May 2018. The film is scheduled to be released on March 8, 2019. The film is set in the 1990s. Jackson, Djimon Hounsou, Lee Pace, and Clark Gregg reprise their roles as Nick Fury, Korath, Ronan the Accuser, and Phil Coulson, respectively, while the Skrull species are introduced to the MCU. The film was announced in October 2014 as Avengers: Infinity War -- Part 2. 
In April 2015, it was revealed that Anthony and Joe Russo would direct the film and in May, that Christopher Markus and Stephen McFeely would write the screenplay. In July 2016, Marvel revealed the title would be changed, being known simply at that time as the Untitled Avengers film. Brolin reprises his role as Thanos, and is part of an ensemble cast featuring many actors who have appeared in other MCU films. Filming began in August 2017 in Atlanta, and ended in January 2018. The film is scheduled to be released on May 3, 2019. In December 2016, Sony Pictures scheduled a sequel to Spider - Man: Homecoming for release on July 5, 2019. A year later, Watts was confirmed to be returning to direct the film. Chris McKenna and Erik Sommers, two of the writers of the first film, return to write the script. Filming is scheduled to begin in early July 2018, in London, and will last until December 2018. The film is set after the untitled Avengers film. In April 2016, Kevin Feige stated that "Guardians 3 is (one film that 's) up there '' being considered for release beyond 2019. In March 2017, Gunn stated that a third Guardians film would happen "for sure '', and the following month confirmed he would return to write and direct Guardians of the Galaxy Vol. 3. Filming is set to begin in January 2019. The film is scheduled to be released in 2020. The film is set after the untitled Avengers film. Disney has scheduled multiple release dates for untitled Marvel Studios films. These include: May 1, July 31, and November 6, 2020; May 7, July 30, and November 5, 2021; and February 18, May 6, and July 29, 2022. In October 2016, Feige said it was a combination of knowing what films would occupy the 2020 dates and allowing some flexibility, saying, "We know what (films) we 'd like them to be for 2020. Over the years, where we 're aiming we 've been lucky enough that it 's usually been the same thing but we always leave ourselves the opportunity to bob and weave and adapt if we have to. But we know where we 're headed for 2020 and we have ideas and we 're beginning to solidify the years beyond that. '' Feige and Marvel have additional storylines planned through 2028, resulting in 20 films "on the docket that are completely different from anything that 's come before -- intentionally. '' In October 2014, in terms of Phase Four films, Feige said, for the time being, "We 're not going to talk specifically about the story of any of those films, the plot of any of those films, what happens to any of the characters in any of those films. In fact, even to talk about any of those characters -- who will be involved in those movies -- will be a bit of a spoiler as to what may or may not happen to them in earlier movies. '' In April 2016, Feige added, "We 're only working on what 's been announced through the end of 2019. And it is still a big chess board for 2020 and beyond. '' A year later, Feige noted, "We have an idea (of what the MCU looks like post-Infinity War), and it 's gon na be very, very different '', but cautioned that Marvel would not be "actively discussing anything past untitled Avengers '' besides dating the sequel to Spider - Man: Homecoming and that James Gunn would be working on Guardians of the Galaxy Vol. 3. He also was not sure if Marvel would continue to group the films in phases once Phase Three concluded, that "it might be a new thing ''. Feige noted Marvel hoped to reveal additional films after the release of the untitled Avengers film. A second sequel to Homecoming is also planned. 
In February 2014, Feige stated that after exploring Black Widow 's past in Age of Ultron, he would like to see it explored further in a solo film. Marvel has done some development work for the potential film, including a "pretty in depth '' treatment by Nicole Perlman, and by May 2016, Feige stated that Marvel was "creatively and emotionally '' committed to creating the solo film. By January 2018, Jac Schaeffer was hired to write the script for the potential film. By the end of April, Marvel had met with directors Deniz Gamze Ergüven, Chloé Zhao, and Amma Asante for the project. By April 2018, Marvel had met with multiple screenwriters, to craft a film based on the Eternals, with a focus on the character Sersi. Feige stated that a film based on the group was "one of many many many things that we are actively beginning to have creative discussions about to see if we believe in them enough to put them on a slate. '' A month later, Matthew and Ryan Firpo were hired to write the script for the project. In May 2013, The Hollywood Reporter reported that Marvel had a working script for Blade. In July 2015, Wesley Snipes, who played Blade in three films before the character 's rights reverted to Marvel, stated that he had discussions with Marvel to reprise the role. A film based on the Runaways went through a number of iterations. Brian K. Vaughan was originally hired to write a screenplay based on the property in May 2008. In April 2010, Marvel hired Peter Sollett to direct the film, and Drew Pearce was hired to write a script in May. The following October, development on the film was put on hold, with Pearce revealing in September 2013 that the Runaways film had been shelved in favor of The Avengers, with the earliest it could release being Phase Three. In October 2014, after announcing all of Marvel 's Phase Three films without Runaways, Feige stated the project was "still an awesome script that exists in our script vault '', adding, "We 'd love to do something with Runaways some day. In our television and future film discussions, it 's always one that we talk about, because we have a solid draft there. But again, we ca n't make them all. '' In August 2016, Marvel Television announced Marvel 's Runaways from the streaming service Hulu, with the series receiving a full season order in May 2017. It premiered in November 2017. In April 2013, Feige mentioned the Inhumans as a property out of which he was "confident '' a film would be made. Inhumans as a concept would first be introduced to the MCU in 2014 through the second season of the television series Agents of S.H.I.E.L.D. By August 2014, the studio was ready to move forward in development with the film, with a screenplay written by Joe Robert Cole. In October 2014, the film was announced for Phase Three and scheduled for release July 2019. By October 2015, Cole was no longer involved with the film and any potential drafts that he may have written would not be used. In April 2016, Inhumans was removed from the release schedule, and would no longer be a part of Phase Three. In July 2016, Feige said Inhumans would "certainly '' be a part of the discussion regarding the film ideas for 2020 and 2021, adding the following November that he was still optimistic the film could be released in Phase Four. In November 2016, Marvel Television announced the series Marvel 's Inhumans, which premiered on ABC in September 2017, after the first two episodes were screened in IMAX. The series was not intended to be a reworking of the film. 
ABC canceled Inhumans after one season in May 2018.
who are the top selling rock bands of all time
List of best - selling Music artists - wikipedia This list includes music artists with claims of 75 million or more record sales. The artists in the following tables are listed with both their claimed sales figure along with their total of certified units and are ranked in descending order, with the artist with the highest amount of claimed sales at the top. If two or more artists have the same claimed sales, they are then ranked by certified units. The claimed sales figure and the total of certified units (for each country) within the provided sources include sales of albums, singles, compilation - albums, music videos as well as downloads of singles and full - length albums. Sales figures, such as those from Soundscan, which are sometimes published by Billboard magazine, have not been included in the certified units column. As of 2017, based on both sales claims and certified units, The Beatles are considered the highest - selling band. Elvis Presley is considered the highest - selling individual artist based on sales claims and Rihanna is the highest - selling individual artist based on certified units. All artists included on this list, which have begun charting on official albums or singles charts have their available claimed figures supported by at least 20 % in certified units. That is why Cliff Richard, Diana Ross, Scorpions, Charles Aznavour, Bing Crosby, Gloria Estefan, Deep Purple, Iron Maiden, Tom Jones, The Jackson 5, Dionne Warwick, the Spice Girls, Luciano Pavarotti, Dolly Parton, Ozzy Osbourne, Andrea Bocelli and others have not been included on this list. The more recent the artist, the higher the required percentage of certified units, so artists such as Rihanna, Taylor Swift, Flo Rida, Katy Perry, Justin Bieber, Adele, Lady Gaga and Bruno Mars are expected to have their claimed figures supported by over 75 % in certified units. The certified units are sourced from available online databases of local music industry associations. All certified units are converted from Gold / Platinum / Diamond certification awards based on criteria provided by certifying bodies. The certified sales percentage varies according to the first year that an artist appeared in the charts. The requirements of certified sales are designed to avoid inflated sales figures, which are frequently practiced by record companies for promotional purposes. The claimed figures are referenced from online articles created by highly reliable sources. For clarity, the sources used, say the term "records '' (singles, albums, videos) and not "albums ''. However, if all available sources for an artist or band say "albums '', such sources are only used if the certified album units of the said artist meet the required percentage amount. This list uses claimed figures that are closest to artists ' available certified units: inflated claimed figures that meet the required certified units amount but are unrealistically high, are not used. The claimed figures are upgraded only when there is a significant progress in artists ' certified sales. In other words, the available certified sales for each artist should get relatively closer to already listed claimed figure in order for higher figures to replace the listed ones. The certified sales of the newer artists may sometimes be higher than their listed claimed figures. This is because Recording Industry Association of America and almost all other certifying bodies count streaming towards Gold and Platinum thresholds required for Digital Single Award certification. 
For this reason, some singles and even albums get over certified by hundreds of thousands of units. The over certified figures, however, are often in millions of units for RIAA certifications; one such example is Rihanna 's single "We Found Love '', which is certified at nine times Platinum by the RIAA, yet at the time of the certification it had sold 5.4 million downloads. The certified sales for some artists / bands who have multi-disc albums can be higher than their listed claimed figures due to the Recording Industry Association of America (RIAA) counting each unit within a set as one unit toward certification. The artists listed are: The Beatles, Elvis Presley, Michael Jackson, Madonna, Elton John, Led Zeppelin, Pink Floyd, Rihanna, Mariah Carey, Celine Dion, AC / DC, Whitney Houston, Queen, The Rolling Stones, Taylor Swift, Garth Brooks, Eminem, Eagles, U2, Bruno Mars, Kanye West, Katy Perry, Justin Bieber, Adele, Lady Gaga, Metallica, Jay Z, Bon Jovi, B'z, Beyoncé, Coldplay, Shania Twain, Flo Rida, Ayumi Hamasaki, Van Halen, Linkin Park, Journey, The Black Eyed Peas, Kenny G, Tupac Shakur, Usher and Enya. 
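The inclusion rule described above -- a claimed sales figure must be backed by a minimum share of certified units, with the minimum rising for more recently charting acts -- can be expressed as a simple threshold check. The Python sketch below is purely illustrative: the article gives only the 20 % floor for older acts and the roughly 75 % expectation for recent ones, so the year anchors and the linear ramp between them are assumptions, not the list editors ' actual schedule.

# Minimal sketch of the inclusion rule described above: an artist's claimed
# sales must be backed by a minimum share of certified units, and that minimum
# rises the more recently the artist first charted. The 1990/2010 anchors and
# the linear ramp between them are illustrative assumptions only.

def required_certified_share(first_chart_year: int) -> float:
    """Return the minimum certified-units share required for inclusion."""
    if first_chart_year <= 1990:
        return 0.20
    if first_chart_year >= 2010:
        return 0.75
    # Linear interpolation between the two anchor points (assumption).
    return 0.20 + (first_chart_year - 1990) * (0.75 - 0.20) / (2010 - 1990)

def qualifies(claimed_sales: float, certified_units: float, first_chart_year: int) -> bool:
    """Check whether a claimed figure is adequately supported by certifications."""
    return certified_units >= required_certified_share(first_chart_year) * claimed_sales

# Example: a claim of 200 million records by an act that first charted in 1985
# would need at least 40 million certified units to be listed.
print(qualifies(200_000_000, 45_000_000, 1985))  # True
print(qualifies(120_000_000, 20_000_000, 2012))  # False: would need >= 90 million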
was leonardo dicaprio nominated for an oscar for the great gatsby
List of Awards and nominations received by Leonardo DiCaprio - wikipedia American actor Leonardo DiCaprio has won 54 awards from 161 nominations, and was named runner - up for 4 of those nominations. He has been nominated for six Academy Awards, four British Academy Film Awards and nine Screen Actors Guild Awards, winning one of each award from them and three Golden Globe Awards from eleven nominations. DiCaprio received three Young Artist Award nominations for his roles in television shows during the early 1990s -- the soap opera Santa Barbara (1990), the dramedy Parenthood (1990), and the sitcom Growing Pains (1991). This was followed by his film debut in the direct - to - video feature Critters 3 (1991). He played a mentally challenged boy in the drama What 's Eating Gilbert Grape (1993), a role that earned him nominations for the Academy Award and Golden Globe Award for Best Supporting Actor. Three years later, he appeared in Romeo + Juliet, for which he earned a Best Actor award from the Berlin International Film Festival. DiCaprio featured opposite Kate Winslet in the romantic drama Titanic (1997), the highest - grossing film to that point. For the film, he garnered the MTV Movie Award for Best Male Performance and his first Golden Globe Award for Best Actor nomination. For a role in The Beach, he was nominated in two categories at the 2000 Teen Choice Awards, including Film -- Choice Actor. However, the film also earned him a Golden Raspberry Award for Worst Actor nomination. DiCaprio was cast in the role of con - artist Frank Abagnale, Jr. in the crime drama Catch Me If You Can, and starred in the historical drama Gangs of New York -- films that earned him two nominations at the 2003 MTV Movie Awards. DiCaprio was nominated for his first Academy Award, BAFTA Award and Critics ' Choice Movie Award for Best Actor for his role as Howard Hughes in the biographical drama The Aviator (2004); he also won a Golden Globe Award in the same category. For his next appearances -- the crime drama The Departed (2006), the war thriller Blood Diamond (2006), the drama Revolutionary Road (2008) and the biographical drama J. Edgar (2011) -- he garnered Golden Globe Award for Best Actor -- Motion Picture Drama nominations. DiCaprio earned nominations for the Saturn Award for Best Actor for his roles in the psychological thriller Shutter Island (2010) and the science fiction thriller Inception (2010). He co-produced and played stockbroker Jordan Belfort in The Wolf of Wall Street (2013), a role that earned him the Golden Globe Award for Best Actor -- Motion Picture Musical or Comedy. The film was nominated for several Academy Awards, including Best Picture and Best Actor, although it failed to win in any category. He won the Golden Globe Award, BAFTA Award, and Academy Award for Best Actor for his portrayal of Hugh Glass in the 2015 film The Revenant. The Australian Academy of Cinema and Television Arts Awards are presented annually by the Australian Academy of Cinema and Television Arts (AACTA) to recognize and honor achievements in the film and television industry. DiCaprio has received two awards from four nominations. The Academy Awards, or "Oscars '', are a set of awards given annually for excellence of cinematic achievements. The awards, organized by the Academy of Motion Picture Arts and Sciences (AMPAS), were first held in 1929 at the Hollywood Roosevelt Hotel. DiCaprio has received one award from six nominations. 
A non-profit organization, the Alliance of Women Film Journalists was founded in 2006 to support women 's work in the film industry. DiCaprio has received one award from two nominations. The Austin Film Critics Association, founded in 2005 by Cole Dabney and Bobby McCurdy, is an organization of professional film critics. DiCaprio has received one award from two nominations. The Australian Film Critics Association is a film critic organization, formed in 1996. DiCaprio has been nominated once. Founded in West Berlin in 1951, the Berlin International Film Festival is celebrated annually in February. DiCaprio has been awarded once. First awarded in 1995, the Blockbuster Entertainment Awards ceremony was held annually and ended in 2001. DiCaprio has been awarded twice. Formed in 1981, the Boston Society of Film Critics is an organization to award "commendations to the best of the year 's films and filmmakers ''. DiCaprio has been awarded once. The British Academy Film Award is an annual award show presented by the British Academy of Film and Television Arts. The awards were founded in 1947 as The British Film Academy, by David Lean, Alexander Korda, Carol Reed, Charles Laughton, Roger Manvell and others. DiCaprio has received one award from four nominations. The Chicago Film Critics Association was founded in 1990 by Sharon LeMaire and Sue Kiner. DiCaprio has won two awards from four nominations. The Critics ' Choice Movie Awards have been presented annually since 1995 by the Broadcast Film Critics Association for outstanding achievements in the film industry. DiCaprio has received two awards from seven nominations. The Dallas -- Fort Worth Film Critics Association is an organization of radio, television, and internet journalists from Dallas -- Fort Worth - based publications. DiCaprio has received one award from five nominations. Founded in 2007, the Detroit Film Critics Society is a film critic organization based in Detroit, Michigan. DiCaprio has been nominated three times. The Dorian Awards are presented by the Gay and Lesbian Entertainment Critics Association (GALECA). DiCaprio has received one award from three nominations. The Dublin Film Critics ' Circle is an Irish film critic association that have held awards since 2006. DiCaprio has received one award from two nominations. The Empire Awards is a British awards ceremony held annually to recognize cinematic achievements. DiCaprio has received four nominations. The Film Critics Circle of Australia is a group of film critics. In an annual event, the Circle gives awards to people in the film industry. DiCaprio has been nominated once. The Golden Globe Award is an accolade bestowed by the 93 members of the Hollywood Foreign Press Association (HFPA) recognizing excellence in film and television, both domestic and foreign. DiCaprio has received three awards from eleven nominations. The Golden Raspberry Awards are awarded in recognition of the worst in film. DiCaprio has received one award from two nominations. The Hollywood Film Awards are held annually to recognize talent in the film industry. DiCaprio has received one award. The Irish Film & Television Academy Awards are presented annually to award best in films and television. DiCaprio has received two awards from five nominations. The London Film Critics Circle, founded in 1913, is an association for working British critics. DiCaprio has received three nominations. Founded in 1975, the Los Angeles Film Critics Association is an organization of critics who present awards each January. 
DiCaprio has received one award. The MTV Movie Awards is an annual award show presented by MTV to honor outstanding achievements in films. Founded in 1992, the winners of the awards are decided online by the audience. DiCaprio has received four awards from fifteen nominations. The National Board of Review was founded in 1909 in New York City to award "film, domestic and foreign, as both art and entertainment ''. DiCaprio has received four awards. The National Society of Film Critics is a film critic organization, founded in 1966. DiCaprio has been the runner - up for one award. The New York Film Critics Circle is an organization of critics founded in 1935. They present awards annually in December. DiCaprio has been the runner - up for one award. The Nickelodeon Kids ' Choice Awards, also known as the Kids Choice Awards (KCAs), is an annual awards show that airs on the Nickelodeon cable channel that honors the year 's biggest television, movie, and music acts, as voted by Nickelodeon viewers. DiCaprio has received one award. The Online Film Critics Society Awards is an annual event of an international professional association of film critics to award achievements in the film industry. DiCaprio has received three nominations. Founded in 1989 in Palm Springs, California, the Palm Springs International Film Festival is held annually in January. DiCaprio has been awarded once. The People 's Choice Awards is an American awards show recognizing the people and the work of popular culture. The show has been held annually since 1975 and is voted on by the general public. DiCaprio has received one award from five nominations. The Rembrandt Awards is an annual event held to award filmmakers and actors in several categories. DiCaprio has received one award. The San Diego Film Critics Society, an organization of film reviewers in San Diego, was founded in 1996 to award films. DiCaprio has received one award from two nominations. Founded in 1986, the Santa Barbara International Film Festival is an eleven - day film festival held in Santa Barbara, California. DiCaprio has received two awards. The Satellite Awards are a set of annual awards given by the International Press Academy. DiCaprio has received three awards from nine nominations. The Scream Awards are held annually to recognize films in the horror, science fiction, and fantasy genre. DiCaprio has received one award from two nominations. The Saturn Awards are presented annually by the Academy of Science Fiction, Fantasy and Horror Films to honor science fiction, fantasy, and horror films, television and home video. DiCaprio has been nominated three times. The Screen Actors Guild Awards are organized by the Screen Actors Guild ‐ American Federation of Television and Radio Artists. First awarded in 1995, the awards aim to recognize excellent achievements in film and television. DiCaprio has received one award from nine nominations. The St. Louis Gateway Film Critics Association, founded in 2004, is an organization of film critics operating in Greater St. Louis. DiCaprio received one award from four nominations. The Teen Choice Awards is an annual awards show that airs on the Fox Network. The awards honor the year 's biggest achievements in music, movies, sports, television, fashion and other categories, voted by teen viewers aged 13 to 19. DiCaprio has received three awards from ten nominations. The Vancouver Film Critics Circle was founded by David Spaner and Ian Caddell in 2000. DiCaprio has received two nominations. 
The Visual Effects Society is an annual event to recognize "outstanding visual effects artistry and innovation worldwide in film, animation, television, commercials and video games and the VFX supervisors, VFX producers and hands - on - the - keys artists who bring this work to life ''. DiCaprio has received one nomination. The Washington D.C. Area Film Critics Association is a group of film critics in Washington, founded in 2002. DiCaprio has received one award from five nominations. Presented by Young Artist Association, a non-profit organization, the Young Artist Awards are held annually to honor young performers. DiCaprio has received three nominations.
did slovakia qualify for world cup play offs
2018 FIFA World Cup qualification -- UEFA group F - Wikipedia The 2018 FIFA World Cup qualification UEFA Group F was one of the nine UEFA groups for 2018 FIFA World Cup qualification. The group consisted of six teams: England, Slovakia, Scotland, Slovenia, Lithuania, and Malta. The draw for the first round (group stage) was held as part of the 2018 FIFA World Cup Preliminary Draw on 25 July 2015, starting 18: 00 MSK (UTC + 3), at the Konstantinovsky Palace in Strelna, Saint Petersburg, Russia. The group winners, England, qualified directly for the 2018 FIFA World Cup. The group runners - up, Slovakia, were eliminated as the worst runners - up and so did not advance to the play - offs. The fixture list was confirmed by UEFA on 26 July 2015, the day following the draw, with kick - off times listed by UEFA in CET / CEST. The fixtures were: Lithuania v Slovenia, Slovakia v England, Malta v Scotland, England v Malta, Scotland v Lithuania, Slovenia v Slovakia, Lithuania v Malta, Slovakia v Scotland, Slovenia v England, England v Scotland, Malta v Slovenia, Slovakia v Lithuania, England v Lithuania, Malta v Slovakia, Scotland v Slovenia, Scotland v England, Slovenia v Malta, Lithuania v Slovakia, Lithuania v Scotland, Malta v England, Slovakia v Slovenia, England v Slovakia, Scotland v Malta, Slovenia v Lithuania, England v Slovenia, Malta v Lithuania, Scotland v Slovakia, Lithuania v England, Slovakia v Malta, and Slovenia v Scotland. There were 74 goals scored in 30 matches, for an average of 2.47 goals per match. A player was automatically suspended for the next match after committing certain offences, and a number of such suspensions were served during the qualifying matches. 
when was the first plane used in ww1
Aviation in World war I - wikipedia World War I was the first major conflict involving the large - scale use of aircraft. Tethered observation balloons had already been employed in several wars, and would be used extensively for artillery spotting. Germany employed Zeppelins for reconnaissance over the North Sea and Baltic and also for strategic bombing raids over Britain and the Eastern Front. Aeroplanes were just coming into military use at the outset of the war. Initially, they were used mostly for reconnaissance. Pilots and engineers learned from experience, leading to the development of many specialized types, including fighters, bombers, and trench strafers. Ace fighter pilots were portrayed as modern knights, and many became popular heroes. The war also saw the appointment of high - ranking officers to direct the belligerent nations ' air war efforts. While the impact of aircraft on the course of the war was mainly tactical rather than strategic, most important being direct cooperation with ground forces (especially ranging and correcting artillery fire), the first steps in the strategic roles of aircraft in future wars was also foreshadowed. At the 1911 meeting of the Institute of International Law in Madrid, legislation was proposed to limit the use of airplanes to reconnaissance missions and banning them from being used as platforms for weapons. This legislation was rooted in a fear that airplanes would be used to attack undefended cities, violating Article 69 of the Den Hague Reglement (the set of international laws governing warfare). At the start of the war, there was some debate over the usefulness of aircraft in warfare. Many senior officers, in particular, remained sceptical. However the initial campaigns of 1914 proved that cavalry could no longer provide the reconnaissance expected by their generals, in the face of the greatly increased firepower of twentieth century armies, and it was quickly realised that aircraft could at least locate the enemy, even if early air reconnaissance was hampered by the newness of the techniques involved. Early skepticism and low expectations quickly turned to unrealistic demands beyond the capabilities of the primitive aircraft available. Even so, air reconnaissance played a critical role in the "war of movement '' of 1914, especially in helping the Allies halt the German invasion of France. On 22 August 1914, British Captain L.E.O. Charlton and Lieutenant V.H.N. Wadham reported German General Alexander von Kluck 's army was preparing to surround the BEF, contradicting all other intelligence. The British High Command took note of the report and started to withdraw toward Mons, saving the lives of 100,000 soldiers. Later, during the First Battle of the Marne, observation aircraft discovered weak points and exposed flanks in the German lines, allowing the allies to take advantage of them. In Germany the great successes of the early Zeppelin airships had largely overshadowed the importance of heavier - than - air aircraft. Out of a paper strength of about 230 aircraft belonging to the army in August 1914 only 180 or so were of any use. The French military aviation exercises of 1911, 1912, and 1913 had pioneered cooperation with the cavalry (reconnaissance) and artillery (spotting), but the momentum was if anything slacking. Great Britain had "started late '' and initially relied largely on the French aircraft industry, especially for aircraft engines. 
The initial British contribution to the total allied airwar effort in August 1914 (of about 184 aircraft) was three squadrons with about 30 serviceable machines. By the end of the war, Great Britain had formed the world 's first air force to be independent of either army or naval control, the Royal Air Force. The American army and navy air services were far behind; even in 1917, when the United States entered the war, they were to be almost totally dependent on the French and British aircraft industries for combat aircraft. The Germans ' great air "coup '' of 1914 (at least according to contemporary propaganda) was at the Battle of Tannenberg in East Prussia, where an unexpected Russian attack was reported by Leutnants Canter and Mertens, resulting in the Russians ' being forced to withdraw. By the end of 1914 the line between the Germans and the Allies stretched from the North Sea to the Alps. The initial "war of movement '' largely ceased, and the front became static. Three main functions of short range reconnaissance squadrons had emerged by March 1915. The first was photographic reconnaissance: building up a complete mosaic map of the enemy trench system. The first air cameras used glass plates. (Kodak cellulose film had been invented, but did not at this stage have sufficient resolution). Artillery "spotting '' enabled the ranging of artillery on targets invisible to the gunners. Radio telephony was not yet practical from an aircraft, so communication was a problem. By March 1915, a two - seater on "artillery observation '' duties was typically equipped with a primitive radio transmitter transmitting using Morse code, but had no receiver. The artillery battery signalled to the aircraft by laying strips of white cloth on the ground in prearranged patterns. Observation duties were shared with the tethered balloons, which could communicate directly with their batteries by field telephone, but were far less flexible in locating targets and reporting the fall of shot. "Contact patrol '' work attempted to follow the course of a battle by communicating with advancing infantry while flying over the battlefield. The technology of the period did not permit radio contact, while methods of signalling were necessarily crude, including dropping messages from the aircraft. Soldiers were initially reluctant to reveal their positions to aircraft, as they (the soldiers) found distinguishing between friend and foe problematic. Reconnaissance flying, like all kinds, was a hazardous business. In April 1917, the worst month for the entire war for the RFC, the average life expectancy of a British pilot on the Western Front was 69 flying hours. Typical 1914 aircraft could carry only very small bomb loads -- the bombs themselves, and their stowage, were still very elementary, and effective bomb sights were still to be developed. Nonetheless the beginnings of strategic and tactical bombing date from the earliest days of the war. Notable are the raids by the RNAS on the German airship sheds at Düsseldorf, Cologne and Friedrichshafen in September, October and November 1914, as well as the formation of the Brieftauben Abteilung Ostende (or "Ostend carrier pigeon detachment '', cover name for the first German strategic bombing unit), which mounted the first token raid over the English Channel in December. As Dickson had predicted, initially air combat was extremely rare, and definitely subordinate to reconnaissance. 
There are even stories of the crews of rival reconnaissance aircraft exchanging nothing more belligerent than smiles and waves. This soon progressed to throwing grenades and other objects -- even grappling hooks. The first aircraft brought down by another was an Austrian reconnaissance aircraft rammed on 8 September 1914 by the Russian pilot Pyotr Nesterov in Galicia on the Eastern Front. Both planes crashed as a result of the attack, killing all occupants. Eventually pilots began firing handheld firearms at enemy aircraft; however, pistols were too inaccurate and single - shot rifles too unlikely to score a hit. On October 5, 1914, French pilot Louis Quenault opened fire on a German aircraft with a machine gun for the first time, and the era of air combat was under way as more and more aircraft were fitted with machine guns. 
As early as 1912, designers at the British firm Vickers were experimenting with machine gun carrying aircraft. The first concrete result was the Vickers Experimental Fighting Biplane 1, which featured at the 1913 Aero Show in London and appeared in developed form as the FB. 5 in February 1915. This pioneering fighter, like the Royal Aircraft Factory F.E. 2b and the Airco DH. 1, was a pusher type. These had the engine and propeller behind the pilot, facing backward, rather than at the front of the aircraft as in a tractor configuration design. This provided an optimal machine gun position, from which the gun could be fired directly forward without an obstructing propeller, and reloaded and cleared in flight. An important drawback was that pusher designs tended to have inferior performance to tractor types with the same engine power, because of the extra drag created by the struts and rigging necessary to carry the tail unit. The F.E. 2d, a more powerful version of the F.E. 2b, remained a formidable opponent well into 1917, when pusher fighters were already obsolete; they were simply too slow to catch their quarry. 
The forward firing gun of a pusher "gun carrier '' provided some offensive capability, while the mounting of a machine gun firing to the rear from a two - seater tractor aircraft gave defensive capability. There was an obvious need for some means to fire a machine gun forward from a tractor aircraft, especially from one of the small, light "scout '' aircraft, adapted from pre-war racers, that were to perform most air combat duties for the rest of the war. It would seem most natural to place the gun between the pilot and the propeller, firing in the direct line of flight, so that the gun could be aimed by "aiming the aircraft ''. It was also important that the breech of the weapon be readily accessible to the pilot, so that he could clear the jams and stoppages to which early machine guns were prone. However, this presented an obvious problem: a percentage of bullets fired "free '' through a revolving propeller will strike the blades, with predictable results. Early experiments with synchronised machine guns had been carried out in several countries before the war. Franz Schneider, then working for Nieuport in France but later working for L.V.G. in Germany, patented a synchronisation gear on 15 July 1913. An early Russian gear was designed by a Lieutenant Poplavko; the Edwards brothers in England designed the first British example, and the Morane - Saulnier company were also working on the problem in 1914. 
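The scale of the blade - strike problem mentioned above can be illustrated with a little geometry. The Python sketch below is not drawn from the article: the two - bladed propeller, the blade chord and the distance of the gun 's line of fire from the hub are assumed figures chosen only to make the arithmetic concrete, and the bullet 's transit time across the blade is ignored.

import math

# Illustrative estimate of the fraction of unsynchronised rounds that would
# strike a spinning propeller blade. For a gun whose line of fire crosses the
# propeller disc at radius r, each of the n blades blocks an arc of roughly
# (chord / r) radians at that radius, so a round fired at a random instant is
# blocked with probability n * chord / (2 * pi * r). The numbers below are
# assumptions, not measurements of any particular wartime installation.

def blade_strike_probability(n_blades: int, chord_m: float, gun_radius_m: float) -> float:
    blocked_arc = n_blades * (chord_m / gun_radius_m)  # total blocked angle in radians
    return blocked_arc / (2 * math.pi)

p = blade_strike_probability(n_blades=2, chord_m=0.15, gun_radius_m=0.5)
print(f"~{p:.1%} of freely fired rounds would hit a blade")  # ~9.5%

On those assumed figures, roughly one round in ten would strike a blade, which is why deflector wedges and, ultimately, synchronisation gears were pursued.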
All these early experiments failed to attract official attention, partly due to official inertia and partly due to the terrifying results of failures of these early synchronising gears, which included dangerously ricocheting bullets as well as disintegrating propellers. The Lewis gun, used on many early Allied aircraft, proved next to impossible to successfully synchronise due to its open bolt firing cycle. In an open bolt firing cycle, it is impossible to predict the exact time any given round will fire, a problematic characteristic in a weapon one is attempting to fire between the spinning blades of a propeller. Photographs of fuselage - mounted Lewis guns aimed directly ahead on RNAS aircraft, and looking as if they "should '' be synchronised -- as with some of their Bristol Scouts -- were probably in fact free firing, hardly a satisfactory solution. The Maxim guns used by both the Allies (as the Vickers) and Germany (as the Parabellum MG 14 and Spandau lMG 08) had a closed bolt firing cycle that started with a bullet already in the breech and the breech closed, so the firing of the bullet was the next step in the cycle. This meant that the exact instant the round would be fired could be predicted, making these weapons considerably easier to synchronise. The standard French light machine gun, the Hotchkiss, was also most unamenable to synchronisation due to rounds "hanging fire ''. The Morane - Saulnier company designed a "safety backup '' in the form of "deflector blades '' (metal wedges), complete with metal tiebars extending outwards from the propeller hub for bracing, fitted to the rear surfaces of a propeller at the radial point where they would be struck by a bullet. Roland Garros tried out this system in a Morane - Saulnier L in April 1915. He managed to score several kills, although it proved to be an inadequate and dangerous solution. Garros eventually was forced by engine failure (possibly caused by the repeated strain on his aircraft 's crankshaft of the "deflected '' bullets striking his propeller) to land behind enemy lines, and he and his secret weapon were captured by the Germans. Famously, the German High Command passed Garros ' captured Morane to the Fokker company -- who already produced Morane type monoplanes for the German Air Service -- with orders to copy the design. The deflector system was totally unsuitable for the steel - jacketed German ammunition so that the Fokker engineers were forced to revisit the synchronisation idea (perhaps infringing Schneider 's patent), crafting the Stangensteuerung system by the spring of 1915, used on the examples of their pioneering Eindecker fighter. Crude as these little monoplanes were, they produced a period of German air superiority, known as the "Fokker Scourge '' by the Allies. The psychological effect exceeded the material -- the Allies had up to now been more or less unchallenged in the air, and the vulnerability of their older reconnaissance aircraft, especially the British B.E. 2 and French Farman pushers, came as a very nasty shock. Another method used at this time to fire a machine gun forward from a tractor design was to mount the gun to fire above the propeller arc. This required the gun to be mounted on the top wing of biplanes and be mounted on complicated drag - inducing structures in monoplanes. Reaching the gun so that drums or belts could be changed, or jams cleared, presented problems even when the gun could be mounted relatively close to the pilot. 
Eventually the excellent Foster mounting became more or less the standard way of mounting a Lewis gun in this position in the R.F.C.: this allowed the gun to slide backward for drum changing, and also to be fired at an upward angle, a very effective way of attacking an enemy from the "blind spot '' under its tail. This type of mounting was still only possible for a biplane with a top wing positioned near the apex of the propeller 's arc -- it put considerable strain on the fragile wing structures of the period, and it was less rigid than a gun mounting on the fuselage, producing a greater "scatter '' of bullets, especially at anything but very short range. The earliest versions of the Bristol Scout to see aerial combat duty in 1915, the Scout C, had Lewis gun mounts in RNAS service that sometimes were elevated above the propeller arc, and sometimes (in an apparently reckless manner) firing directly through the propeller arc without synchronisation. During the spring and summer of 1915, Captain Lanoe Hawker of the Royal Flying Corps, however, had mounted his Lewis gun just forward of the cockpit to fire forwards and outwards, on the left side of his aircraft 's fuselage at about a 30 ° horizontal angle. On 25 July 1915 Captain Hawker flew his Scout C, bearing RFC serial number 1611 against several two - seat German observation aircraft of the Fliegertruppe, and managed to defeat three of them in aerial engagements to earn the first Victoria Cross awarded to a British fighter pilot, while engaged against enemy fixed - wing aircraft. The first purpose - designed fighter aircraft included the British Vickers F.B. 5, and machine guns were also fitted to several French types, such as the Morane - Saulnier L and N. Initially the German Air Service lagged behind the Allies in this respect, but this was soon to change dramatically. In July 1915 the Fokker E.I, the first aircraft to enter service with a "synchronisation gear '' which enabled a machine gun to fire through the arc of the propeller without striking its blades, became operational. This gave an important advantage over other contemporary fighter aircraft. This aircraft and its immediate successors, collectively known as the Eindecker (German for "monoplane '') -- for the first time supplied an effective equivalent to Allied fighters. Two German military aviators, Leutnants Otto Parschau and Kurt Wintgens, worked for the Fokker firm during the spring of 1915, demonstrating the revolutionary feature of the forward - firing synchronised machine gun to the embryonic force of Fliegertruppe pilots of the German Empire. The first successful engagement involving a synchronised - gun - armed aircraft occurred on the afternoon of July 1, 1915, to the east of Lunéville, France when Leutnant Kurt Wintgens, one of the pilots selected by Fokker to demonstrate the small series of five Eindecker prototype aircraft, forced down a French Morane - Saulnier "Parasol '' two seat observation monoplane behind Allied lines with his Fokker M. 5K / MG Eindecker production prototype aircraft, carrying the IdFlieg military serial number "E. 5 / 15 ''. Some 200 shots from the synchronised Parabellum MG14 machine gun on Wintgens ' aircraft had hit the Gnome Lambda rotary engine of the Morane Parasol, forcing it to land safely in Allied territory. By late 1915 the Germans had achieved air superiority, rendering Allied access to the vital intelligence derived from continual aerial reconnaissance more dangerous to acquire. 
In particular the defencelessness of Allied reconnaissance types was exposed. The first German "ace '' pilots, notably Max Immelmann, had begun their careers. The number of actual Allied casualties involved was for various reasons very small compared with the intensive air fighting of 1917 -- 18. The deployment of the Eindeckers was less than overwhelming: the new type was issued in ones and twos to existing reconnaissance squadrons, and it was to be nearly a year before the Germans were to follow the British in establishing specialist fighter squadrons. The Eindecker was also, in spite of its advanced armament, by no means an outstanding aircraft, being closely based on the pre-war Morane - Saulnier H, although it did feature a steel tubing fuselage framework (a characteristic of all Fokker wartime aircraft designs) instead of the wooden fuselage components of the French aircraft. Nonetheless, the impact on morale of the fact that the Germans were effectively fighting back in the air created a major scandal in the British parliament and press. The ascendancy of the Eindecker also contributed to the surprise the Germans were able to achieve at the start of the Battle of Verdun because the French reconnaissance aircraft failed to provide their usual cover of the German positions. Fortunately for the Allies, two new British fighters that were a match for the Fokker, the two - seat F.E. 2b and the single - seat D.H. 2, were already in production. These were both pushers, and could fire forwards without gun synchronisation. The F.E. 2b reached the front in September 1915, and the D.H. 2 in the following February. On the French front, the tiny Nieuport 11, a tractor biplane with a forward firing gun mounted on the top wing outside the arc of the propeller, also proved more than a match for the German fighter when it entered service in January 1916. With these new types the Allies re-established air superiority in time for the Battle of the Somme, and the "Fokker Scourge '' was over. The Fokker E. III, Airco DH - 2 and Nieuport 11 were the very first in a long line of single seat fighter aircraft used by both sides during the war. Very quickly it became clear the primary role of fighters would be attacking enemy two - seaters, which were becoming increasingly important as sources of reconnaissance and artillery observation, while also escorting and defending friendly two - seaters from enemy fighters. Fighters were also used to attack enemy observation balloons, strafe enemy ground targets, and defend friendly airspace from enemy bombers. Almost all the fighters in service with both sides, with the exception of the Fokkers ' steel - tube fuselaged airframes, continued to use wood as the basic structural material, with fabric - covered wings relying on external wire bracing. However, the first practical all - metal aircraft was produced by Hugo Junkers, who also used a cantilever wing structure with a metal covering. The first flight tests of the initial flight demonstrator of this technology, the Junkers J 1 monoplane, took place at the end of 1915 heralding the future of aircraft structural design. When the battle of Verdun began on 21 February 1916, air superiority initially enabled the Germans to establish a blockade (luftsperre) on the French air squadrons. 
However the French were already arming their specialist fighter squadrons, the Escadrilles de chasse, with the Nieuport 11, and with a new offensive strategy they quickly overcame the luftsperre, establishing air superiority over the battle by April. In the meantime, in the aftermath of the Fokker Scourge, the need for a larger, better equipped RFC became obvious, and the process of raising many new British squadrons was started. In the short term, creating new units was easier than producing aircraft to equip them, and training pilots to man them. When the Battle of the Somme started in July 1916, most ordinary RFC squadrons were still equipped with the BE. 2c, which had already proved an easy target for the Fokker Eindecker. New types such as the Sopwith 11⁄2 Strutter had to be transferred from production intended for the RNAS. Even more seriously, replacement pilots were being sent to France with pitifully few flying hours. Nonetheless, air superiority and an "offensive '' strategy facilitated the greatly increased involvement of the RFC in the battle itself, in what was known at the time as "trench strafing '' -- in modern terms, close support. For the rest of the war, this became a regular routine, with both attacking and defending infantry in a land battle being constantly liable to attack by machine guns and light bombs from the air. At this time, counter fire from the ground was far less effective than it became later, when the necessary techniques of deflection shooting had been mastered. The first step towards specialist fighter - only aviation units within the German military was the establishment of the so - called Kampfeinsitzer Kommando (single - seat battle unit, abbreviated as "KEK '') formations by Inspektor - Major Friedrich Stempel in February 1916. These were based around Eindeckers and other new fighter designs emerging, like the Pfalz E-series monoplanes, that were being detached from their former Feldflieger Abteilung units during the winter of 1915 -- 16 and brought together in pairs and quartets at particularly strategic locations, as "KEK '' units were formed at Habsheim, Vaux, Avillers, Jametz, and Cunel, as well as other strategic locations along the Western Front to act as Luftwachtdienst (aerial guard force) units, consisting only of fighters. In a pioneering move in March 1916, German master aerial tactician Oswald Boelcke came up with the idea of having "forward observers '' located close to the front lines to spot Allied aircraft approaching the front, to avoid wear and tear on the trio of Fokker Eindecker scout aircraft he had based with his own "KEK '' unit based at Sivry - sur - Meuse, just north of Verdun. By April 1916, the air superiority established by the Eindecker pilots and maintained by their use within the KEK formations had long evaporated as the Halberstadt D. II began to be phased in as Germany 's first biplane fighter design, with the first Fokker D - series biplane fighters joining the Halberstadts, and a target was set to establish 37 new squadrons in the next 12 months -- entirely equipped with single seat fighters, and manned by specially selected and trained pilots, to counter the Allied fighter squadrons already experiencing considerable success, as operated by the Royal Flying Corps and the French Aéronautique Militaire. The small numbers of questionably built Fokker D. 
IIIs posted to the Front pioneered the mounting of twin lMG 08s before 1916 's end, as the building numbers of the similarly armed, and much more formidable new twin - gun Albatros D.Is were well on the way to establishing the German air superiority marking the first half of 1917. Allied air superiority was maintained during the height of both battles, and the increased effectiveness of Allied air activity proved disturbing to the German Army 's top - level Oberste Heeresleitung command staff. A complete reorganisation of the Fliegertruppen des deutschen Kaiserreiches into what became officially known as the Luftstreitkräfte followed and had generally been completed by October 1916. This reorganisation eventually produced the German strategic bombing squadrons that were to produce such consternation in England in 1917 and 1918, and the specialist close support squadrons (Schlachtstaffeln) that gave the British infantry such trouble at Cambrai and during the German Spring offensive of 1918. Its most famous and dramatic effect, however, involved the raising of specialist fighter squadrons or Jagdstaffeln -- a full year after similar units had become part of the RFC and the French Aéronautique Militaire. Initially these units were equipped with the Halberstadt D. II (Germany 's first biplane fighter), the Fokker D.I and D. II, along with the last few surviving Eindeckers, all three biplane design types using a single lMG 08, before the Fokker D. III and Albatros D.I twin - gun types arrived at the Front. The first half of 1917 was a successful period for the jagdstaffeln and the much larger RFC suffered significantly higher casualties than their opponents. While new Allied fighters such as the Sopwith Pup, Sopwith Triplane, and SPAD S. VII were coming into service, at this stage their numbers were small, and suffered from inferior firepower: all three were armed with just a single synchronised Vickers machine gun. On the other hand, the jagdstaffeln were in the process of replacing their early motley array of equipment with Albatros D - series aircraft, armed with twin synchronised MG08s. The D.I and D. II of late 1916 were succeeded by the new Albatros D. III, which was, in spite of structural difficulties, "the best fighting scout on the Western Front '' at the time. Meanwhile, most RFC two - seater squadrons still flew the BE. 2e, a very minor improvement on the BE. 2c, and still fundamentally unsuited to air - to - air combat. This culminated in the rout of April 1917, known as "Bloody April ''. The RFC (Royal Flying Corps) suffered particularly severe losses, although Trenchard 's policy of "offensive patrol '', which placed most combat flying on the German side of the lines, was maintained. During the last half of 1917, the British Sopwith Camel and S.E. 5a and the French SPAD S. XIII, all fitted with two forward firing machine guns, became available in numbers. The ordinary two seater squadrons in the RFC received the R.E. 8 or the F.K. 8, not outstanding warplanes, but far less vulnerable than the BE. 2e they replaced. The F.E. 2d at last received a worthy replacement in the Bristol F. 2b. On the other hand, the latest Albatros, the D.V, proved to be a disappointment, as was the Pfalz D. III. The exotic Fokker Dr. I was plagued, like the Albatros, with structural problems. By the end of the year the air superiority pendulum had swung once more in the Allies ' favour. 
The surrender of the Russians and the Treaty of Brest - Litovsk in March 1918, and the resulting release of troops from the Eastern Front gave the Germans a "last chance '' of winning the war before the Americans could become effectively involved. This resulted in the last great German offensive of the war, the "Spring Offensive '', which opened on 21 March. The main attack fell on the British front on the assumption that defeat of the British army would result in the surrender of the mutiny - weakened French. In the air, the battle was marked by the carefully coordinated use of the Schlachtstaffeln or "battle flights '', equipped with the light CL class two seaters built by the Halberstadt and Hannover firms, that had proved so effective in the German counter-attack in early October 's Battle of Cambrai. The new German fighter aircraft, notably the Fokker D. VII, that might have revived German air superiority in time for this battle had not however reached the Jagdstaffeln in sufficient numbers, despite its own premier on the Western Front in the mid-Spring of 1918. As with several offensives on both sides, thorough planning and preparation led to initial success, and in fact to deeper penetration than had been achieved by either side since 1914. Many British airfields had to be abandoned to the advancing Germans in a new war of movement. Losses of aircraft and their crew were very heavy on both sides -- especially to light anti-aircraft fire. However, by the time of the death of Manfred von Richthofen, the famed Red Baron, on 21 April, the great offensive had largely stalled. The new German fighters had still not arrived, and the British still held general air superiority. The month of April 1918 began with the consolidation of the separate British RFC and RNAS air services into the Royal Air Force, the first independent air arm not subordinate to its national army or navy. By the end of April the new Fokker, Pfalz and Roland fighters had finally begun to replace the obsolescent equipment of the Jagdstaffeln, but this did not proceed with as much dispatch as it might have, due to increasing shortages of supplies on the side of the Central Powers, and many of the Jastas still flew Albatros D types at the time of the armistice. The rotary engined Fokker D. VIII and Siemens - Schuckert D. IV, as well as surviving Fokker Triplanes, suffered from poor reliability and shortened engine life due to the Voltol - based oil that was used to replace scarce castor oil -- captured and salvaged Allied aircraft (especially Sopwith Camels) were scrounged, not only for engines and equipment, but even for their lubricants. Nonetheless, by September casualties in the RFC had reached the highest level since "Bloody April '' -- and the Allies were maintaining air superiority by weight of numbers rather than technical superiority. 1918, especially the second half of the year, also saw the United States increasingly involved. While American volunteers had been flying in Allied squadrons since the early years of the war, not until 1918 did all - American squadrons begin active operations. Technically America had fallen well behind the European powers in aviation, and no American designed types saw action, with the exception of the Curtiss flying boats. At first, the Americans were largely supplied with second - rate and obsolete aircraft, such as the Nieuport 28, Sopwith 11⁄2 Strutter, and Dorand AR. 2 types, and inexperienced American airmen stood little chance against their seasoned opponents. 
As numbers grew and equipment improved with the introduction of the twin - gun SPAD XIII as well as the Sopwith Camel and even the S.E. 5a into American service near the war 's end, the Americans came to hold their own in the air, although casualties were heavy, as indeed were those of the French and British, in the last desperate fighting of the war. One of the French two - seat reconnaissance aircraft used by the French and the USAAS, the Salmson 2. A2, was among the World War I - era aircraft to pioneer the use of "fixed '' radial engines in military aviation -- the liquid - cooled radials designed by Georges Canton and Pierre Unné that powered the 2. A2 were among the first "fixed '' radial aircraft powerplants ever designed, and were manufactured by the parent Société des Moteurs Salmson aircraft and automobile manufacturing firm from 1908 to 1920. By war 's end, the impact of aerial missions on the ground war was in retrospect mainly tactical -- strategic bombing, in particular, was still very rudimentary indeed. This was partly due to its restricted funding and use, as it was, after all, a new technology. On the other hand, the artillery, which had perhaps the greatest effect of any military arm in this war, owed much of its devastating effectiveness to the availability of aerial photography and aerial "spotting '' by balloon and aircraft. By 1917 weather bad enough to restrict flying was considered as good as "putting the gunner 's eyes out ''. Some, such as then - Brigadier General Billy Mitchell, commander of all American air combat units in France, claimed, "(T)he only damage that has come to (Germany) has been through the air ''. Mitchell was famously controversial in his view that the future of war was not on the ground or at sea, but in the air. During the course of the war, German aircraft losses amounted to 27,637 by all causes, while Entente losses numbered over 88,613 (52,640 for France and 35,973 for Great Britain). Though aircraft still functioned as vehicles of observation, increasingly they were used as weapons in themselves. Dog fights erupted in the skies over the front lines, and aircraft went down in flames. From this air - to - air combat, the need grew for better aircraft and gun armament. Aside from machine guns, air - to - air rockets were also used, such as the Le Prieur rocket against balloons and airships. Recoilless rifles and autocannons were also attempted, but they pushed early fighters to unsafe limits while bringing negligible returns; the German Becker 20mm autocannon was fitted to a few twin - engined Luftstreitkräfte G - series medium bombers for offensive needs, and to at least one late - war Kaiserliche Marine zeppelin for defense, while the uniquely armed SPAD S. XII single - seat fighter carried one Vickers machine gun and a special, hand - operated semi-automatic 37mm gun firing through a hollow propeller shaft. Another innovation was air - to - air bombing if a fighter had been fortunate enough to climb higher than an airship; the Ranken dart was designed just for this opportunity. This need for improvement was not limited to air - to - air combat. On the ground, methods developed before the war were being used to deter enemy aircraft from observation and bombing. Anti-aircraft artillery rounds were fired into the air and exploded into clouds of smoke and fragmentation, called "Archie '' by the British. 
Anti-aircraft artillery defenses were increasingly used around observation balloons, which became frequent targets of enemy fighters equipped with special incendiary bullets. Because balloons were so flammable, due to the hydrogen used to inflate them, observers were given parachutes, enabling them to jump to safety. Ironically, only a few aeroplane aircrew had this option, due in part to a mistaken belief that parachutes inhibited aggressiveness, and in part to their significant weight. During a bombing raid over Kragujevac on 30 September 1915, Private Radoje Ljutovac of the Serbian Army shot down one of the three attacking aircraft. Ljutovac used a slightly modified Turkish cannon captured some years previously. This was the first time that a military aeroplane was shot down with ground - to - air artillery fire, and it thus marked a crucial moment in anti-aircraft warfare. As the stalemate developed on the ground, with both sides unable to advance even a few hundred yards without a major battle and thousands of casualties, aircraft became greatly valued for their role in gathering intelligence on enemy positions and bombing the enemy 's supplies behind the trench lines. Large aircraft with a pilot and an observer were used to scout enemy positions and bomb their supply bases. Because they were large and slow, these aircraft made easy targets for enemy fighter aircraft. As a result, both sides used fighter aircraft both to attack the enemy 's two - seat aircraft and to protect their own while carrying out their missions. While the two - seat bombers and reconnaissance aircraft were slow and vulnerable, they were not defenseless. Two - seaters had the advantage of both forward - and rearward - firing guns. Typically, the pilot controlled fixed guns behind the propeller, similar to guns in a fighter aircraft, while the observer controlled one with which he could cover the arc behind the aircraft. A tactic used by enemy fighter aircraft to avoid fire from the rear gunner was to attack from slightly below the rear of two - seaters, as the tail gunner was unable to fire below the aircraft. However, two - seaters could counter this tactic by going into a dive at high speed. Pursuing a diving two - seater was hazardous for a fighter pilot, as it would place the fighter directly in the rear gunner 's line of fire; several high scoring aces of the war were shot down by "lowly '' two - seaters, including Raoul Lufbery, Erwin Böhme, and Robert Little. The first aerial bombardment of civilians occurred during World War I. In the opening weeks of the war Zeppelins bombed Liege, Antwerp and Warsaw, and other cities including Paris and Bucharest were targeted. In January 1915 the Germans began a bombing campaign against England that was to last until 1918, initially using airships. There were 19 raids in 1915, in which 37 tons of bombs were dropped, killing 181 people and injuring 455. Raids continued in 1916. London was accidentally bombed in May, and in July the Kaiser allowed directed raids against urban centres. There were 23 airship raids in 1916, in which 125 tons of ordnance were dropped, killing 293 people and injuring 691. Gradually British air defenses improved. In 1917 and 1918 there were only eleven Zeppelin raids against England, and the final raid occurred on 5 August 1918, resulting in the death of Peter Strasser, commander of the German Naval Airship Department. By the end of the war, 54 airship raids had been undertaken, in which 557 people were killed and 1,358 injured. 
The Zeppelin raids were complemented by the Gotha G bombers from 1917, which were the first heavier - than - air bombers to be used for strategic bombing, and by a small force of five Zeppelin - Staaken R.VI "giant '' four - engined bombers from late September 1917 through to mid-May 1918. Twenty - eight Gotha twin - engined bombers were lost on the raids over England, with no losses for the Zeppelin - Staaken giants. It has been argued that the raids were effective far beyond material damage in diverting and hampering wartime production, and in diverting twelve squadrons and over 17,000 men to air defenses. Calculations of the ratio of dead to the weight of bombs dropped had a profound effect on the attitudes of the British government and population in the interwar years, who believed that "The bomber will always get through ''. Manned observation balloons floating high above the trenches were used as stationary reconnaissance points on the front lines, reporting enemy troop positions and directing artillery fire. Balloons commonly had a crew of two equipped with parachutes: upon an enemy air attack on the flammable balloon, the crew would parachute to safety. Recognized for their value as observer platforms, observation balloons were important targets of enemy aircraft. To defend against air attack, they were heavily protected by large concentrations of antiaircraft guns and patrolled by friendly aircraft. Blimps and balloons contributed to the stalemate of the trench warfare of World War I, and their significant reconnaissance value helped drive air - to - air combat for air superiority. To encourage pilots to attack enemy balloons, both sides counted downing an enemy balloon as an "air - to - air '' kill, with the same value as shooting down an enemy aircraft. Some pilots, known as balloon busters, became particularly distinguished by their prowess at shooting down enemy balloons. The premier balloon busting ace was Willy Coppens: 35 of his 37 victories were enemy balloons. As pioneer aviators invented air - to - air combat, the contending sides developed various methods of tracking aerial casualties and victories. Aviators with five or more aerial victories confirmed by their parent air service were dubbed "aces '', and their numbers burgeoned until, by war 's end, there were over 1,800 aces.
do i need a visa to visit france from trinidad and tobago
Visa requirements for Trinidad and Tobago citizens - wikipedia Visa requirements for Trinidad and Tobago citizens are administrative entry restrictions by the authorities of other states placed on citizens of Trinidad and Tobago. As of 18 January 2018, Trinidad and Tobago citizens had visa - free or visa on arrival access to 133 countries and territories, ranking the Trinidad and Tobago passport 33rd in terms of travel freedom according to the Henley Passport Index. As a member of Caricom, Trinidad and Tobago passport holders have access to freedom of movement in all Caricom states. Trinidad and Tobago signed a mutual visa waiver agreement with Schengen Area countries on 28 May 2015, allowing citizens to travel visa - free to all Schengen states as well as associated countries and some territories.
which player is most dominant in establishing american foreign policy
Foreign policy of the United States - Wikipedia The foreign policy of the United States is its interactions with foreign nations and how it sets standards of interaction for the organizations, corporations and citizens of the United States. The officially stated goals of the foreign policy of the United States, including all the Bureaus and Offices in the United States Department of State, as mentioned in the Foreign Policy Agenda of the Department of State, are "to build and sustain a more democratic, secure, and prosperous world for the benefit of the American people and the international community. '' In addition, the United States House Committee on Foreign Affairs states as some of its jurisdictional goals: "export controls, including nonproliferation of nuclear technology and nuclear hardware; measures to foster commercial interaction with foreign nations and to safeguard American business abroad; international commodity agreements; international education; and protection of American citizens abroad and expatriation. '' U.S. foreign policy and foreign aid have been the subject of much debate, praise and criticism, both domestically and abroad. Subject to the advice and consent role of the U.S. Senate, the President of the United States negotiates treaties with foreign nations, but treaties enter into force only if ratified by two - thirds of the Senate. The President is also Commander in Chief of the United States Armed Forces, and as such has broad authority over the armed forces. Both the Secretary of State and ambassadors are appointed by the President, with the advice and consent of the Senate. The United States Secretary of State acts similarly to a foreign minister and under Executive leadership is the primary conductor of state - to - state diplomacy. Congress is the only branch of government that has the authority to declare war. Furthermore, Congress writes the civilian and military budget, and thus has vast power over military action and foreign aid. Congress also has power to regulate commerce with foreign nations. The main trend in the history of U.S. foreign policy since the American Revolution is the shift from non-interventionism before and after World War I to its growth as a world power and global hegemon during and since World War II and the end of the Cold War in the 20th century. Since the 19th century, U.S. foreign policy also has been characterized by a shift from the realist school to the idealistic or Wilsonian school of international relations. Foreign policy themes were expressed considerably in George Washington 's farewell address; these included, among other things, observing good faith and justice towards all nations and cultivating peace and harmony with all, excluding both "inveterate antipathies against particular nations, and passionate attachments for others '', "steer (ing) clear of permanent alliances with any portion of the foreign world '', and advocating trade with all nations. These policies became the basis of the Federalist Party in the 1790s. But the rival Jeffersonians feared Britain and favored France in the 1790s, and in 1812 declared war on Britain. After the 1778 alliance with France, the U.S. did not sign another permanent treaty until the North Atlantic Treaty in 1949. Over time, other themes, key goals, attitudes, or stances have been variously expressed by Presidential ' doctrines ', named for the presidents who articulated them. Initially these were uncommon events, but since World War II most presidents have issued them. 
Jeffersonians vigorously opposed a large standing army and any navy until attacks against American shipping by Barbary corsairs spurred the country into developing a naval force projection capability, resulting in the First Barbary War in 1801. Despite two wars with European Powers -- the War of 1812 and the 1898 Spanish -- American War -- American foreign policy was peaceful and marked by steady expansion of its foreign trade during the 19th century. The 1803 Louisiana Purchase doubled the nation 's geographical area; Spain ceded the territory of Florida in 1819; annexation brought in the independent Texas Republic in 1845; a war with Mexico in 1848 added California, Arizona, Utah, Nevada, and New Mexico. The U.S. bought Alaska from the Russian Empire in 1867, and it annexed the independent Republic of Hawaii in 1898. Victory over Spain in 1898 brought the Philippines, and Puerto Rico, as well as oversight of Cuba. The short experiment in imperialism ended by 1908, as the U.S. turned its attention to the Panama Canal and the stabilization of regions to its south, including Mexico. The 20th century was marked by two world wars in which Allied powers, along with the United States, defeated their enemies and through this participation the United States increased its international reputation. President Wilson 's Fourteen Points was developed from his idealistic Wilsonianism program of spreading democracy and fighting militarism so as to end any wars. It became the basis of the German Armistice (which amounted to a military surrender) and the 1919 Paris Peace Conference. The resulting Treaty of Versailles, due to European allies ' punitive and territorial designs, showed insufficient conformity with these points and the U.S. signed separate treaties with each of its adversaries; due to Senate objections also, the U.S. never joined the League of Nations, which was established as a result of Wilson 's initiative. In the 1920s, the United States followed an independent course, and succeeded in a program of naval disarmament, and refunding the German economy. Operating outside the League it became a dominant player in diplomatic affairs. New York became the financial capital of the world, but the Wall Street Crash of 1929 hurled the Western industrialized world into the Great Depression. American trade policy relied on high tariffs under the Republicans, and reciprocal trade agreements under the Democrats, but in any case exports were at very low levels in the 1930s. The United States adopted a non-interventionist foreign policy from 1932 to 1938, but then President Franklin D. Roosevelt moved toward strong support of the Allies in their wars against Germany and Japan. As a result of intense internal debate, the national policy was one of becoming the Arsenal of Democracy, that is financing and equipping the Allied armies without sending American combat soldiers. Roosevelt mentioned four fundamental freedoms, which ought to be enjoyed by people "everywhere in the world ''; these included the freedom of speech and religion, as well as freedom from want and fear. Roosevelt helped establish terms for a post-war world among potential allies at the Atlantic Conference; specific points were included to correct earlier failures, which became a step toward the United Nations. American policy was to threaten Japan, to force it out of China, and to prevent its attacking the Soviet Union. 
However, Japan reacted with an attack on Pearl Harbor in December 1941, and the United States was at war with Japan, Germany, and Italy. Instead of the loans given to allies in World War I, the United States provided Lend - Lease grants of $50,000,000,000. Working closely with Winston Churchill of Britain and Joseph Stalin of the Soviet Union, Roosevelt sent his forces into the Pacific against Japan, then into North Africa against Italy and Germany, and finally into Europe against the Germans, with the invasions of Italy in 1943 and France in 1944. The American economy roared forward, doubling industrial production, and building vast quantities of airplanes, ships, tanks, munitions, and, finally, the atomic bomb. Much of the American war effort went to strategic bombers, which flattened the cities of Japan and Germany. After the war, the U.S. rose to become the dominant non-colonial economic power with broad influence in much of the world, with the key policies of the Marshall Plan and the Truman Doctrine. Almost immediately, however, the world witnessed division into two broad camps during the Cold War; one side was led by the U.S. and the other by the Soviet Union, but this situation also led to the establishment of the Non-Aligned Movement. This period lasted until almost the end of the 20th century and is thought to be both an ideological and a power struggle between the two superpowers. A policy of containment was adopted to limit Soviet expansion, and a series of proxy wars were fought with mixed results. In 1991, the Soviet Union dissolved into separate nations, and the Cold War formally ended as the United States gave separate diplomatic recognition to the Russian Federation and other former Soviet states. In domestic politics, foreign policy is not usually a central issue. In 1945 -- 1970 the Democratic Party took a strong anti-Communist line and supported wars in Korea and Vietnam. Then the party split, with a strong, "dovish '', pacifist element (typified by 1972 presidential candidate George McGovern). Many "hawks '', advocates for war, joined the Neoconservative movement and started supporting the Republicans -- especially Reagan -- based on foreign policy. Meanwhile, until 1952 the Republican Party was split between an isolationist wing, based in the Midwest and led by Senator Robert A. Taft, and an internationalist wing based in the East and led by Dwight D. Eisenhower. Eisenhower defeated Taft for the 1952 nomination largely on foreign policy grounds. Since then the Republicans have been characterized by a hawkish and intense American nationalism, strong opposition to Communism, and strong support for Israel. In the 21st century, U.S. influence remains strong but is declining in relative terms of economic output compared to rising nations such as China, India, Russia, and the newly consolidated European Union. Substantial problems remain, such as climate change, nuclear proliferation, and the specter of nuclear terrorism. Foreign policy analysts Hachigian and Sutphen in their book The Next American Century suggest that all five powers have similar vested interests in stability, terrorism prevention and trade; if they can find common ground, then the next decades may be marked by peaceful growth and prosperity. In 2017 diplomats from other countries developed new tactics to deal with President Donald Trump. Trump has numerous aides giving advice on foreign policy. 
The chief diplomat was Secretary of State Rex Tillerson, whose major foreign policy positions were sometimes at odds with Trump 's. In the United States, there are three types of treaty - related law. In contrast to most other nations, the United States considers the three types of agreements as distinct. Further, the United States incorporates treaty law into the body of U.S. federal law. As a result, Congress can modify or repeal treaties afterward. It can overrule an agreed - upon treaty obligation even if this is seen as a violation of the treaty under international law. Several U.S. court rulings confirmed this understanding, including Supreme Court decisions in Paquete Habana v. the United States (1900) and Reid v. Covert (1957), as well as a lower court ruling in Garcia - Mir v. Meese (1986). Further, the Supreme Court has declared itself as having the power to rule a treaty void by declaring it "unconstitutional '', although as of 2011, it has never exercised this power. The State Department has taken the position that the Vienna Convention on the Law of Treaties represents established law. Generally, when the U.S. signs a treaty, it is binding. However, as a result of the Reid v. Covert decision, the U.S. adds a reservation to the text of every treaty that says, in effect, that the U.S. intends to abide by the treaty, but if the treaty is found to be in violation of the Constitution, then the U.S. legally ca n't abide by the treaty since the U.S. signature would be ultra vires. The United States has ratified and participates in many other multilateral treaties, including arms control treaties (especially with the Soviet Union), human rights treaties, environmental protocols, and free trade agreements. The United States is a founding member of the United Nations and most of its specialized agencies, notably including the World Bank Group and International Monetary Fund. The U.S. has at times withheld payment of dues due to disagreements with the UN. After it captured the islands from Japan during World War II, the United States administered the Trust Territory of the Pacific Islands from 1947 to 1986 (1994 for Palau). The Northern Mariana Islands became a U.S. territory (part of the United States), while the Federated States of Micronesia, the Marshall Islands, and Palau became independent countries. Each has signed a Compact of Free Association that gives the United States exclusive military access in return for U.S. defense protection and conduct of military foreign affairs (except the declaration of war) and a few billion dollars of aid. These agreements also generally allow citizens of these countries to live and work in the United States with their spouses (and vice versa), and provide for largely free trade. The federal government also grants access to services from domestic agencies, including the Federal Emergency Management Agency, the National Weather Service, the United States Postal Service, the Federal Aviation Administration, the Federal Communications Commission, and U.S. representation to the International Frequency Registration Board of the International Telecommunication Union. The United States notably does not participate in various international agreements adhered to by almost all other industrialized countries, by almost all the countries of the Americas, or by almost all other countries in the world. 
Because the United States has a large population and economy, on a practical level its non-participation can undermine the effect of certain agreements, or give other countries a precedent to cite for their own non-participation. In some cases the arguments against participation include that the United States should maximize its sovereignty and freedom of action, or that ratification would create a basis for lawsuits that would treat American citizens unfairly. In other cases, the debate became involved in domestic political issues, such as gun control, climate change, and the death penalty. While America 's relationships with Europe have tended to be in terms of multilateral frameworks, such as NATO, America 's relations with Asia have tended to be based on a "hub and spoke '' model using a series of bilateral relationships where states coordinate with the United States and do not collaborate with each other. On May 30, 2009, at the Shangri - La Dialogue, Defense Secretary Robert M. Gates urged the nations of Asia to build on this hub and spoke model as they established and grew multilateral institutions such as ASEAN, APEC and the ad hoc arrangements in the area. However, in 2011 Gates said that the United States must serve as the "indispensable nation '' for building multilateral cooperation. As of 2014, the U.S. produces about 66 % of the oil that it consumes. While its imports have exceeded domestic production since the early 1990s, new hydraulic fracturing techniques and the discovery of shale oil deposits in Canada and the American Dakotas offer the potential for increased energy independence from oil - exporting countries such as the members of OPEC. Former U.S. President George W. Bush identified dependence on imported oil as an urgent "national security concern ''. Two - thirds of the world 's proven oil reserves are estimated to be found in the Persian Gulf. Despite its distance, the Persian Gulf region was first proclaimed to be of national interest to the United States during World War II. Petroleum is of central importance to modern armies, and the United States -- as the world 's leading oil producer at that time -- supplied most of the oil for the Allied armies. Many U.S. strategists were concerned that the war would dangerously reduce the U.S. oil supply, and so they sought to establish good relations with Saudi Arabia, a kingdom with large oil reserves. The Persian Gulf region continued to be regarded as an area of vital importance to the United States during the Cold War. Three Cold War United States Presidential doctrines -- the Truman Doctrine, the Eisenhower Doctrine, and the Nixon Doctrine -- played roles in the formulation of the Carter Doctrine, which stated that the United States would use military force if necessary to defend its "national interests '' in the Persian Gulf region. Carter 's successor, President Ronald Reagan, extended the policy in October 1981 with what is sometimes called the "Reagan Corollary to the Carter Doctrine '', which proclaimed that the United States would intervene to protect Saudi Arabia, whose security was threatened after the outbreak of the Iran -- Iraq War. Some analysts have argued that the implementation of the Carter Doctrine and the Reagan Corollary also played a role in the outbreak of the 2003 Iraq War. Almost all of Canada 's energy exports go to the United States, making it the largest foreign source of U.S. energy imports: Canada is consistently among the top sources for U.S. oil imports, and it is the largest source of U.S. 
natural gas and electricity imports. In 2007 the U.S. was Sub-Saharan Africa 's largest single export market, accounting for 28 % of exports (second in total to the EU at 31 %). 81 % of U.S. imports from this region were petroleum products. Foreign assistance is a core component of the State Department 's international affairs budget, which totaled $49 billion in all for 2014. Aid is considered an essential instrument of U.S. foreign policy. There are four major categories of non-military foreign assistance: bilateral development aid, economic assistance supporting U.S. political and security goals, humanitarian aid, and multilateral economic contributions (for example, contributions to the World Bank and International Monetary Fund). In absolute dollar terms, the United States government is the largest international aid donor ($23 billion in 2014). The U.S. Agency for International Development (USAID) manages the bulk of bilateral economic assistance; the Treasury Department handles most multilateral aid. In addition many private agencies, churches and philanthropies provide aid. Although the United States is the largest donor in absolute dollar terms, it is actually ranked 19th out of 27 countries on the Commitment to Development Index. The CDI ranks the 27 richest donor countries on their policies that affect the developing world. In the aid component the United States is penalized for low net aid volume as a share of the economy, a large share of tied or partially tied aid, and a large share of aid given to less poor and relatively undemocratic governments. Foreign aid is a highly partisan issue in the United States, with liberals, on average, supporting foreign aid much more than conservatives do. As of 2016, the United States is actively conducting military operations against the Islamic State of Iraq and the Levant and Al - Qaeda under the Authorization for Use of Military Force Against Terrorists, including in areas of fighting in the Syrian Civil War and Yemeni Civil War. The Guantanamo Bay Naval Base holds what the federal government considers unlawful combatants from these ongoing activities, and has been a controversial issue in foreign relations, domestic politics, and Cuba -- United States relations. Other major U.S. military concerns include stability in Afghanistan and Iraq after the recent invasions of those countries, and Russian military activity in Ukraine. The United States is a founding member of NATO, an alliance of 29 North American and European nations formed to defend Western Europe against the Soviet Union during the Cold War. Under the NATO charter, the United States is compelled to defend any NATO state that is attacked by a foreign power. The United States itself was the first country to invoke the mutual defense provisions of the alliance, in response to the September 11 attacks. The United States also has mutual military defense treaties with several other nations. The United States has responsibility for the defense of the three Compact of Free Association states: the Federated States of Micronesia, the Marshall Islands, and Palau. In 1989, the United States also granted five nations major non-NATO ally status (MNNA), and additions by later presidents have brought the list to 28 nations. Each such state has a unique relationship with the United States, involving various military and economic partnerships, alliances and lesser agreements. The U.S. participates in various military - related multi-lateral organizations. The U.S. 
also operates hundreds of military bases around the world. The United States has undertaken unilateral and multilateral military operations throughout its history (see Timeline of United States military operations). In the post-World War II era, the country has had permanent membership and veto power in the United Nations Security Council, allowing it to undertake any military action without formal Security Council opposition. With vast military expenditures, the United States is known as the sole remaining superpower after the collapse of the Soviet Union. The U.S. contributes a relatively small number of personnel for United Nations peacekeeping operations. It sometimes acts through NATO, as with the NATO intervention in Bosnia and Herzegovina, the NATO bombing of Yugoslavia, and ISAF in Afghanistan, but often acts unilaterally or in ad - hoc coalitions, as with the 2003 invasion of Iraq. The United Nations Charter requires that military operations be either for self - defense or affirmatively approved by the Security Council. Though many of their operations have followed these rules, the United States and NATO have been accused of committing crimes against peace in international law, for example in the 1999 Yugoslavia and 2003 Iraq operations. The U.S. provides military aid through many different channels. Counting the items that appear in the budget as ' Foreign Military Financing ' and ' Plan Colombia ', the U.S. spent approximately $4.5 billion in military aid in 2001, of which $2 billion went to Israel, $1.3 billion went to Egypt, and $1 billion went to Colombia. Since 9 / 11, Pakistan has received approximately $11.5 billion in direct military aid. As of 2004, according to Fox News, the U.S. had more than 700 military bases in 130 different countries. According to a 2016 report by the Congressional Research Service, the U.S. topped the market in global weapon sales for 2015, with $40 billion sold. The largest buyers were Qatar, Egypt, Saudi Arabia, South Korea, Pakistan, Israel, the United Arab Emirates and Iraq. The Strategic Defense Initiative (SDI), later dubbed "Star Wars '', was a proposal by U.S. President Ronald Reagan on March 23, 1983 to use ground - and space - based systems to protect the United States from attack by strategic nuclear ballistic missiles. The initiative focused on strategic defense rather than the prior strategic offense doctrine of mutual assured destruction (MAD). Though it was never fully developed or deployed, the research and technologies of SDI paved the way for some anti-ballistic missile systems of today. In February 2007, the U.S. started formal negotiations with Poland and the Czech Republic concerning construction of missile shield installations in those countries for a Ground - Based Midcourse Defense system (in April 2007, 57 % of Poles opposed the plan). According to press reports, the government of the Czech Republic agreed (while 67 % of Czechs disagreed) to host a missile defense radar on its territory, while a base of missile interceptors was to be built in Poland. Russia threatened to place short - range nuclear missiles on Russia 's border with NATO if the United States refused to abandon plans to deploy 10 interceptor missiles and a radar in Poland and the Czech Republic. In April 2007, Putin warned of a new Cold War if the Americans deployed the shield in Central Europe. 
Putin also said that Russia is prepared to abandon its obligations under an Intermediate - Range Nuclear Forces Treaty of 1987 with the United States. On August 14, 2008, the United States and Poland announced a deal to implement the missile defense system in Polish territory, with a tracking system placed in the Czech Republic. "The fact that this was signed in a period of very difficult crisis in the relations between Russia and the United States over the situation in Georgia shows that, of course, the missile defense system will be deployed not against Iran but against the strategic potential of Russia '', Dmitry Rogozin, Russia 's NATO envoy, said. Keir A. Lieber and Daryl G. Press, argue in Foreign Affairs that U.S. missile defenses are designed to secure Washington 's nuclear primacy and are chiefly directed at potential rivals, such as Russia and China. The authors note that Washington continues to eschew nuclear first strike and contend that deploying missile defenses "would be valuable primarily in an offensive context, not a defensive one; as an adjunct to a US First Strike capability, not as a stand - alone shield '': If the United States launched a nuclear attack against Russia (or China), the targeted country would be left with only a tiny surviving arsenal, if any at all. At that point, even a relatively modest or inefficient missile defense system might well be enough to protect against any retaliatory strikes. This analysis is corroborated by the Pentagon 's 1992 Defense Planning Guidance (DPG), prepared by then Secretary of Defense Richard Cheney and his deputies. The DPG declares that the United States should use its power to "prevent the reemergence of a new rival '' either on former Soviet territory or elsewhere. The authors of the Guidance determined that the United States had to "Field a missile defense system as a shield against accidental missile launches or limited missile strikes by ' international outlaws ' '' and also must "Find ways to integrate the ' new democracies ' of the former Soviet bloc into the U.S. - led system ''. The National Archive notes that Document 10 of the DPG includes wording about "disarming capabilities to destroy '' which is followed by several blacked out words. "This suggests that some of the heavily excised pages in the still - classified DPG drafts may include some discussion of preventive action against threatening nuclear and other WMD programs. '' Finally, Robert David English, writing in Foreign Affairs, observes that in addition to the deployment U.S. missile defenses, the DPG 's second recommendation has also been proceeding on course. "Washington has pursued policies that have ignored Russian interests (and sometimes international law as well) in order to encircle Moscow with military alliances and trade blocs conducive to U.S. interests. '' In United States history, critics have charged that presidents have used democracy to justify military intervention abroad. Critics have also charged that the U.S. helped local militaries overthrow democratically elected governments in Iran, Guatemala, and in other instances. Studies have been devoted to the historical success rate of the U.S. in exporting democracy abroad. Some studies of American intervention have been pessimistic about the overall effectiveness of U.S. efforts to encourage democracy in foreign nations. Until recently, scholars have generally agreed with international relations professor Abraham Lowenthal that U.S. 
attempts to export democracy have been "negligible, often counterproductive, and only occasionally positive. '' Other studies find U.S. intervention has had mixed results, and another by Hermann and Kegley has found that military interventions have improved democracy in other countries. Professor Paul W. Drake argued that the U.S. first attempted to export democracy in Latin America through intervention from 1912 to 1932. Drake argued that this was contradictory because international law defines intervention as "dictatorial interference in the affairs of another state for the purpose of altering the condition of things. '' The study suggested that efforts to promote democracy failed because democracy needs to develop out of internal conditions, and can not be forcibly imposed. There was disagreement about what constituted democracy; Drake suggested American leaders sometimes defined democracy in a narrow sense of a nation having elections; Drake suggested a broader understanding was needed. Further, there was disagreement about what constituted a "rebellion ''; Drake saw a pattern in which the U.S. State Department disapproved of any type of rebellion, even so - called "revolutions '', and in some instances rebellions against dictatorships. Historian Walter LaFeber stated, "The world 's leading revolutionary nation (the U.S.) in the eighteenth century became the leading protector of the status quo in the twentieth century. '' Mesquita and Downs evaluated 35 U.S. interventions from 1945 to 2004 and concluded that in only one case, Colombia, did a "full fledged, stable democracy '' develop within ten years following the intervention. Samia Amin Pei argued that nation building in developed countries usually unravelled four to six years after American intervention ended. Pei, based on study of a database on worldwide democracies called Polity, agreed with Mesquita and Downs that U.S. intervention efforts usually do n't produce real democracies, and that most cases result in greater authoritarianism after ten years. Professor Joshua Muravchik argued U.S. occupation was critical for Axis power democratization after World War II, but America 's failure to encourage democracy in the third world "prove... that U.S. military occupation is not a sufficient condition to make a country democratic. '' The success of democracy in former Axis countries such as Italy were seen as a result of high national per - capita income, although U.S. protection was seen as a key to stabilization and important for encouraging the transition to democracy. Steven Krasner agreed that there was a link between wealth and democracy; when per - capita incomes of $6,000 were achieved in a democracy, there was little chance of that country ever reverting to an autocracy, according to an analysis of his research in the Los Angeles Times. Tures examined 228 cases of American intervention from 1973 to 2005, using Freedom House data. A plurality of interventions, 96, caused no change in the country 's democracy. In 69 instances, the country became less democratic after the intervention. In the remaining 63 cases, a country became more democratic. However this does not take into account the direction the country would have gone with no U.S. intervention. Hermann and Kegley found that American military interventions designed to protect or promote democracy increased freedom in those countries. 
Peceny argued that the democracies created after military intervention are still closer to an autocracy than a democracy, quoting Przeworski: "while some democracies are more democratic than others, unless offices are contested, no regime should be considered democratic. '' Therefore, Peceny concludes, it is difficult to know from the Hermann and Kegley study whether U.S. intervention has only produced less repressive autocratic governments or genuine democracies. Peceny stated that the United States attempted to export democracy in 33 of its 93 20th - century military interventions. Peceny argued that proliberal policies after military intervention had a positive impact on democracy. A global survey done by Pewglobal indicated that (as of 2014) at least 33 surveyed countries had a positive view (50 % or above) of the United States. The top ten most positive countries were the Philippines (92 %), Israel (84 %), South Korea (82 %), Kenya (80 %), El Salvador (80 %), Italy (78 %), Ghana (77 %), Vietnam (76 %), Bangladesh (76 %), and Tanzania (75 %), while 10 surveyed countries had the most negative views (below 50 %) of the United States: Egypt (10 %), Jordan (12 %), Pakistan (14 %), Turkey (19 %), Russia (23 %), the Palestinian Territories (30 %), Greece (34 %), Argentina (36 %), Lebanon (41 %), and Tunisia (42 %). Americans ' own view of the United States was 84 % positive. International opinion about the US has often changed with different executive administrations. For example, in 2009 the French public favored the United States when President Barack Obama (75 % favorable) replaced President George W. Bush (42 %). After President Donald Trump took the helm in 2017, French public opinion about the US fell from 63 % to 46 %. These trends were also seen in other European countries. United States foreign policy also includes covert actions to topple foreign governments that have been opposed to the United States. According to J. Dana Stuster, writing in Foreign Policy, there are seven "confirmed cases '' where the U.S. -- acting principally through the Central Intelligence Agency (CIA), but sometimes with the support of other parts of the U.S. government, including the Navy and State Department -- covertly assisted in the overthrow of a foreign government: Iran in 1953, Guatemala in 1954, Congo in 1960, the Dominican Republic in 1961, South Vietnam in 1963, Brazil in 1964, and Chile in 1973. Stuster states that this list excludes "U.S. - supported insurgencies and failed assassination attempts '' such as those directed against Cuba 's Fidel Castro, as well as instances where U.S. involvement has been alleged but not proven (such as Syria in 1949). In 1953 the CIA, working with the British government, initiated Operation Ajax against the Prime Minister of Iran, Mohammad Mossadegh, who had attempted to nationalize Iran 's oil, threatening the interests of the Anglo - Persian Oil Company. This had the effect of restoring and strengthening the authoritarian monarchical reign of Shah Mohammad Reza Pahlavi. In 1957, the CIA and the Israeli Mossad aided the Iranian government in establishing its intelligence service, SAVAK, later blamed for the torture and execution of the regime 's opponents. In 1954, in Operation PBSUCCESS, the CIA assisted the local military in toppling the democratically elected left - wing government of Jacobo Árbenz in Guatemala and installing the military dictator Carlos Castillo Armas. 
The United Fruit Company lobbied for Árbenz 's overthrow, as his land reforms jeopardized its land holdings in Guatemala, and painted these reforms as a communist threat. The coup triggered a decades - long civil war which claimed the lives of an estimated 200,000 people (42,275 individual cases have been documented), mostly through 626 massacres against the Maya population perpetrated by the U.S. - backed Guatemalan military. An independent Historical Clarification Commission found that U.S. corporations and government officials "exercised pressure to maintain the country 's archaic and unjust socio - economic structure, '' and that U.S. military assistance had a "significant bearing on human rights violations during the armed confrontation. '' During the massacre of at least 500,000 alleged communists in 1960s Indonesia, U.S. government officials encouraged and applauded the mass killings while providing covert assistance to the Indonesian military which helped facilitate them. This included the U.S. Embassy in Jakarta supplying Indonesian forces with lists of up to 5,000 names of suspected members of the Communist Party of Indonesia (PKI), who were subsequently killed in the massacres. In 2001, the CIA attempted to prevent the publication of the State Department volume Foreign Relations of the United States, 1964 -- 1968, which documents the U.S. role in providing covert assistance to the Indonesian military for the express purpose of the extirpation of the PKI. In July 2016, an international panel of judges ruled that the killings constituted crimes against humanity, and that the US, along with other Western governments, was complicit in these crimes. In 1970, the CIA worked with coup - plotters in Chile in the attempted kidnapping of General René Schneider, who was targeted for refusing to participate in a military coup upon the election of Salvador Allende. Schneider was shot in the botched attempt and died three days later. The CIA later paid the group $35,000 for the failed kidnapping. According to one peer - reviewed study, the U.S. intervened in 81 foreign elections between 1946 and 2000, while the Soviet Union or Russia intervened in 36. Since the 1970s, issues of human rights have become increasingly important in American foreign policy. Congress took the lead in the 1970s. Following the Vietnam War, the feeling that U.S. foreign policy had grown apart from traditional American values was seized upon by Representative Donald M. Fraser (D - MN), leading the Subcommittee on International Organizations and Movements, in criticizing Republican foreign policy under the Nixon administration. In the early 1970s, Congress moved to end the Vietnam War and passed the War Powers Act. As "part of a growing assertiveness by Congress about many aspects of Foreign Policy, '' human rights concerns became a battleground between the Legislative and the Executive branches in the formulation of foreign policy. David Forsythe points to three specific, early examples of Congress interjecting its own thoughts on foreign policy. Such measures were repeatedly used by Congress, with varying success, to affect U.S. foreign policy towards the inclusion of human rights concerns. Specific examples include El Salvador, Nicaragua, Guatemala and South Africa. The Executive (from Nixon to Reagan) argued that the Cold War required placing regional security in favor of U.S. interests ahead of any behavioral concerns about national allies. Congress argued the opposite, in favor of distancing the United States from oppressive regimes. 
Nevertheless, according to historian Daniel Goldhagen, during the last two decades of the Cold War, the number of American client states practicing mass murder outnumbered those of the Soviet Union. John Henry Coatsworth, a historian of Latin America and the provost of Columbia University, suggests the number of repression victims in Latin America alone far surpassed that of the USSR and its East European satellites during the period 1960 to 1990. W. John Green contends that the United States was an "essential enabler '' of "Latin America 's political murder habit, bringing out and allowing to flourish some of the region 's worst tendencies. '' On December 6, 2011, Obama instructed agencies to consider LGBT rights when issuing financial aid to foreign countries. He also criticized Russia 's law discriminating against gays, joining other western leaders in the boycott of the 2014 Winter Olympics in Russia. In June 2014, a Chilean court ruled that the United States played a key role in the murders of Charles Horman and Frank Teruggi, both American citizens, shortly after the 1973 Chilean coup d'état. United States foreign policy is influenced by the efforts of the U.S. government to control imports of illicit drugs, including cocaine, heroin, methamphetamine, and cannabis. This is especially true in Latin America, a focus for the U.S. War on Drugs. Those efforts date back to at least 1880, when the U.S. and China completed an agreement that prohibited the shipment of opium between the two countries. Over a century later, the Foreign Relations Authorization Act requires the President to identify the major drug transit or major illicit drug - producing countries. In September 2005, the following countries were identified: Bahamas, Bolivia, Brazil, Burma, Colombia, Dominican Republic, Ecuador, Guatemala, Haiti, India, Jamaica, Laos, Mexico, Nigeria, Pakistan, Panama, Paraguay, Peru and Venezuela. Two of these, Burma and Venezuela, are countries that the U.S. considers to have failed to adhere to their obligations under international counternarcotics agreements during the previous 12 months. Notably absent from the 2005 list were Afghanistan, the People 's Republic of China and Vietnam; Canada was also omitted in spite of evidence that criminal groups there are increasingly involved in the production of MDMA destined for the United States and that large - scale cross-border trafficking of Canadian - grown cannabis continues. The U.S. believes that the Netherlands are successfully countering the production and flow of MDMA to the U.S. Critics from the left cite episodes that undercut leftist governments or showed support for Israel. Others cite human rights abuses and violations of international law. Critics have charged that U.S. presidents have used democracy to justify military intervention abroad. Critics also point to declassified records which indicate that the CIA under Allen Dulles and the FBI under J. Edgar Hoover aggressively recruited more than 1,000 Nazis, including those responsible for war crimes, to use as spies and informants against the Soviet Union in the Cold War. The U.S. has faced criticism for backing right - wing dictators who systematically violated human rights, such as Augusto Pinochet of Chile, Alfredo Stroessner of Paraguay, Efraín Ríos Montt of Guatemala, Jorge Rafael Videla of Argentina, Hissène Habré of Chad, Yahya Khan of Pakistan and Suharto of Indonesia. 
Critics have also accused the United States of facilitating and supporting state terrorism in the Global South during the Cold War, such as Operation Condor, an international campaign of political assassination and state terror organized by right - wing military dictatorships in the Southern Cone of South America. Journalists and human rights organizations have been critical of US - led airstrikes and targeted killings by drones, which have in some cases resulted in collateral damage to civilian populations. In early 2017, the U.S. faced criticism from some scholars, activists and media outlets for dropping 26,171 bombs on seven different countries throughout 2016: Syria, Iraq, Afghanistan, Libya, Yemen, Somalia and Pakistan. The U.S. has been accused of complicity in war crimes for backing the Saudi Arabian - led intervention into the Yemeni Civil War (2015 -- present), which has triggered a humanitarian catastrophe, including a cholera outbreak and millions facing starvation. A 2013 global poll in 68 countries with 66,000 respondents by Win / Gallup found that the U.S. is perceived as the biggest threat to world peace. Regarding support for certain anti-Communist dictatorships during the Cold War, a response is that they were seen as a necessary evil, with the alternatives being even worse Communist or fundamentalist dictatorships. David Schmitz says this policy did not serve U.S. interests. Friendly tyrants resisted necessary reforms and destroyed the political center (though not in South Korea), while the ' realist ' policy of coddling dictators brought a backlash among foreign populations with long memories. Many democracies have voluntary military ties with the United States. See NATO, ANZUS, the Treaty of Mutual Cooperation and Security between the United States and Japan, the Mutual Defense Treaty with South Korea, and Major non-NATO ally. Those nations with military alliances with the U.S. can spend less on the military since they can count on U.S. protection. This may give a false impression that the U.S. is less peaceful than those nations. Research on the democratic peace theory has generally found that democracies, including the United States, have not made war on one another. There has been U.S. support for coups against some democracies, but, for example, Spencer R. Weart argues that part of the explanation was the perception, correct or not, that these states were turning into Communist dictatorships. Also important was the role of rarely transparent United States government agencies, which sometimes misled elected civilian leaders or did not fully implement their decisions. Empirical studies (see democide) have found that democracies, including the United States, have killed far fewer civilians than dictatorships. Media may be biased against the U.S. in reporting human rights violations. 
Studies have found that The New York Times ' coverage of worldwide human rights violations predominantly focuses on the human rights violations in nations where there is clear U.S. involvement, while having relatively little coverage of the human rights violations in other nations. For example, the bloodiest war in recent times, involving eight nations and killing millions of civilians, was the Second Congo War, which was almost completely ignored by the media. Niall Ferguson argues that the U.S. is incorrectly blamed for all the human rights violations in nations it has supported. He writes that it is generally agreed that Guatemala was the worst of the US - backed regimes during the Cold War. However, the U.S. cannot credibly be blamed for all the 200,000 deaths during the long Guatemalan Civil War. The U.S. Intelligence Oversight Board writes that military aid was cut for long periods because of such violations, that the U.S. helped stop a coup in 1993, and that efforts were made to improve the conduct of the security services. Today the U.S. states that democratic nations best support U.S. national interests. According to the U.S. State Department, "Democracy is the one national interest that helps to secure all the others. Democratically governed nations are more likely to secure the peace, deter aggression, expand open markets, promote economic development, protect American citizens, combat international terrorism and crime, uphold human and worker rights, avoid humanitarian crises and refugee flows, improve the global environment, and protect human health. '' According to former U.S. President Bill Clinton, "Ultimately, the best strategy to ensure our security and to build a durable peace is to support the advance of democracy elsewhere. Democracies do n't attack each other. '' In one view mentioned by the U.S. State Department, democracy is also good for business. Countries that embrace political reforms are also more likely to pursue economic reforms that improve the productivity of businesses. Accordingly, since the mid-1980s, under President Ronald Reagan, there has been an increase in levels of foreign direct investment going to emerging market democracies relative to countries that have not undertaken political reforms. Leaked cables in 2010 suggested that the "dark shadow of terrorism still dominates the United States ' relations with the world ''. The United States officially maintains that it supports democracy and human rights through several tools. Examples of these tools are as follows:
who sings do you mind if i stroke you up
Stroke You Up - wikipedia "Stroke You Up '' is a song by American R&B duo Changing Faces which was recorded for their debut album Changing Faces (1994). The song was released as the album 's debut single in June 1994. It was certified platinum by the RIAA and sold 700,000 copies domestically. It features uncredited vocals from R. Kelly.
i am a celebrity get me out of here
I 'm a Celebrity... Get Me Out of Here! - Wikipedia I 'm a Celebrity... Get Me Out of Here! is a reality TV series in which up to 12 celebrities live together in a jungle environment for a number of weeks. They have no luxuries and compete to be crowned king or queen of the jungle. The show was originally created in the United Kingdom by the factual programmes department of ITV 's then London franchise, London Weekend Television, and developed by a team including James Allen, Natalka Znak, Brent Baker and Stewart Morris. The first episode aired on 25 August 2002. It is now produced by ITV Studios and has been licensed globally to countries including the United States, Germany, France, Hungary, Sweden, the Netherlands, Denmark, Romania, Australia and India. As of 2017, the UK, German, Hungarian and Australian versions are still in production. The UK, the German and the 2003 US versions of the series take place in Australia, at a permanently built - up camp at the edge of a sub-tropical rain forest that extends from Numinbah Nature Reserve and Springbrook National Park. However, the first series of the show was filmed on a smaller site at King Ranch, near Tully, Queensland. The Australian series is filmed in Kruger National Park, South Africa. Other versions of the show have been filmed in Argentina, Brazil, Costa Rica, Suriname, Indonesia and Malaysia. The series has been criticised by the former UK Secretary of State for Culture, Tessa Jowell. In an interview with the Financial Times during the second UK series, she said, "If they were n't mostly -- save their blushes -- has - been celebrities, there might be more interest (...) I think that if we saw many more programming hours taken over by reality TV, I hope you 'd begin to see a viewers ' revolt. '' In 2002, CBS, broadcaster of the popular American reality show Survivor, unsuccessfully sued ABC and Granada TV over a planned American version of I 'm a Celebrity... Get Me Out of Here!, alleging similarities. The show 's use of live insects and other living creatures in the bushtucker trials has led to some public criticism of the show, its producers and those involved in the programming. This issue was highlighted during the 2009 UK series, when celebrity chef Gino D'Acampo killed, cooked and ate a rat. RSPCA Australia investigated the incident and sought to prosecute D'Acampo and actor Stuart Manning for animal cruelty after this episode of the show was aired. ITV was fined £ 1,600 and the two celebrities involved were not prosecuted for animal cruelty despite being charged with the offence by the New South Wales Police. The incident did, however, highlight the controversy surrounding the killing of living creatures for human entertainment among certain groups, such as Buglife, a British charity for the conservation of insects, and the RSPCA. There has been criticism that the producers pretend that the celebrities have to live in a "dangerous '' jungle even though they are in a controlled environment, with some of the scenery being artificial, e.g. a pond and a small waterfall. In November 2014, TV presenter Chris Packham wrote an open letter to Ant & Dec asking them and ITV to end the "abuse of animals '' in I 'm A Celebrity... Get Me Out of Here!. He described the trials as "out of date '' and "silly ''.
Winners by version:
Australia -- Season 1 (2015): Freddie Flintoff; Season 2 (2016): Brendan Fevola; Season 3 (2017): Casey Donovan; Season 4 (2018): Fiona O'Loughlin.
Germany -- Season 1 (Early 2004): Costa Cordalis; Season 2 (Late 2004): Désirée Nick; Season 3 (2008): Ross Antony; Season 4 (2009): Ingrid van Bergen; Season 5 (2011): Peer Kusmagk; Season 6 (2012): Brigitte Nielsen; Season 7 (2013): Joey Heindle; Season 8 (2014): Melanie Müller; Season 9 (2015): Maren Gilzer; Season 10 (2016): Menderes Bağcı; Season 11 (2017): Marc Terenzi; Season 12 (2018): Jenny Frankhauser.
Hungary -- Season 1 (October 2008): Mariann Falusi; Season 2 (Oct -- Nov 2008): Andrea Keleti; Season 3 (October 2014): Zsolt Erdei; Season 4 (Oct -- Nov 2014): Andrea Molnár; Season 5 (Nov -- Dec 2017): Péter Kabát.
United Kingdom -- Series 1 (2002): Tony Blackburn; Series 2 (2003): Phil Tufnell; Series 3 (Early 2004): Kerry Katona; Series 4 (Late 2004): Joe Pasquale; Series 5 (2005): Carol Thatcher; Series 6 (2006): Matt Willis; Series 7 (2007): Christopher Biggins; Series 8 (2008): Joe Swash; Series 9 (2009): Gino D'Acampo; Series 10 (2010): Stacey Solomon; Series 11 (2011): Dougie Poynter; Series 12 (2012): Charlie Brooks; Series 13 (2013): Kian Egan; Series 14 (2014): Carl "Foggy '' Fogarty; Series 15 (2015): Vicky Pattison; Series 16 (2016): Scarlett Moffatt; Series 17 (2017): Georgia "Toff '' Toffolo.
United States (filmed at Murwillumbah, Australia, in 2003 and in Costa Rica in 2009; hosted by John Lehr in 2003 and by Damien Fahey and Myleene Klass in 2009) -- Season 1 (2003): Cris Judd; Season 2 (2009): Lou Diamond Phillips.
the united kingdom of great britain and northern ireland
United Kingdom - wikipedia The United Kingdom of Great Britain and Northern Ireland, commonly known as the United Kingdom (UK) and colloquially Great Britain (GB) or simply Britain, is a sovereign country in western Europe. Lying off the north - western coast of the European mainland, the United Kingdom includes the island of Great Britain, the north - eastern part of the island of Ireland and many smaller islands. Northern Ireland is the only part of the United Kingdom that shares a land border with another sovereign state -- the Republic of Ireland. Apart from this land border, the United Kingdom is surrounded by the Atlantic Ocean, with the North Sea to its east, the English Channel to its south and the Celtic Sea to its south - south - west, giving it the 12th - longest coastline in the world. The Irish Sea lies between Great Britain and Ireland. With an area of 242,500 square kilometres (93,600 sq mi), the United Kingdom is the 78th - largest sovereign state in the world and the 11th - largest in Europe. It is also the 21st-most populous country, with an estimated 65.1 million inhabitants. Together, this makes it the fourth-most densely populated country in the European Union (EU). The United Kingdom is a constitutional monarchy with a parliamentary democracy. The monarch is Queen Elizabeth II, who has reigned since 6 February 1952. The capital of the United Kingdom and its largest city is London, a global city and financial centre with an urban area population of 10.3 million, the fourth - largest in Europe and second - largest in the European Union. Other major urban areas in the United Kingdom include the conurbations centred on Birmingham, Leeds, Glasgow, Liverpool and Manchester. The United Kingdom consists of four countries -- England, Scotland, Wales and Northern Ireland. The last three have devolved administrations, each with varying powers, based in their capitals, Edinburgh, Cardiff and Belfast, respectively. The nearby Isle of Man, Bailiwick of Guernsey and Bailiwick of Jersey are not part of the United Kingdom, being Crown dependencies with the British Government responsible for defence and international representation. The relationships among the countries of the UK have changed over time. Wales was annexed by the Kingdom of England under the Laws in Wales Acts 1535 and 1542. A treaty between England and Scotland resulted in 1707 in a unified Kingdom of Great Britain, which merged in 1801 with the Kingdom of Ireland to form the United Kingdom of Great Britain and Ireland. Five - sixths of Ireland seceded from the UK in 1922, leaving the present formulation of the United Kingdom of Great Britain and Northern Ireland. There are fourteen British Overseas Territories. These are the remnants of the British Empire which, at its height in the 1920s, encompassed almost a quarter of the world 's land mass and was the largest empire in history. British influence can be observed in the language, culture and legal systems of many of its former colonies. The United Kingdom is a developed country and has the world 's fifth - largest economy by nominal GDP and ninth - largest economy by purchasing power parity. The UK is considered to have a high - income economy and is categorised as very high in the Human Development Index, ranking 16th in the world. It was the world 's first industrialised country and the world 's foremost power during the 19th and early 20th centuries.
The UK remains a great power with considerable economic, cultural, military, scientific and political influence internationally. It is a recognised nuclear weapons state and is seventh in military expenditure in the world. The UK has been a permanent member of the United Nations Security Council since its first session in 1946. It has been a leading member state of the EU and its predecessor, the European Economic Community (EEC), since 1973. However, on 23 June 2016, a national referendum on the UK 's membership of the EU resulted in a decision to leave, and its exit from the EU is currently being negotiated. The UK is also a member of the Commonwealth of Nations, the Council of Europe, the G7 finance ministers, the G7 forum, the G20, NATO, the Organisation for Economic Co-operation and Development (OECD), and the World Trade Organization (WTO). The 1707 Acts of Union declared that the kingdoms of England and Scotland were "United into One Kingdom by the Name of Great Britain '', though the new state is also referred to in the Acts as the "Kingdom of Great Britain '', "United Kingdom of Great Britain '' and "United Kingdom ''. However, the term "United Kingdom '' is only found in informal use during the 18th century and the country was only occasionally referred to as the "United Kingdom of Great Britain '' -- its full official name, from 1707 to 1800, being merely "Great Britain '', without a "long form ''. The Acts of Union 1800 united the Kingdom of Great Britain and the Kingdom of Ireland in 1801, forming the United Kingdom of Great Britain and Ireland. Following the partition of Ireland and the independence of the Irish Free State in 1922, which left Northern Ireland as the only part of the island of Ireland within the United Kingdom, the name "United Kingdom of Great Britain and Northern Ireland '' was adopted. Although the United Kingdom, as a sovereign state, is a country, England, Scotland, Wales and, to a lesser degree, Northern Ireland, are also regarded as countries, though they are not sovereign states. Scotland, Wales and Northern Ireland have devolved self - government. The British Prime Minister 's website has used the phrase "countries within a country '' to describe the United Kingdom. Some statistical summaries, such as those for the twelve NUTS 1 regions of the United Kingdom, also refer to Scotland, Wales and Northern Ireland as "regions ''. Northern Ireland is also referred to as a "province ''. With regard to Northern Ireland, the descriptive name used "can be controversial, with the choice often revealing one 's political preferences ''. The term "Britain '' is often used as synonym for the United Kingdom. The term "Great Britain '', by contrast, refers conventionally to the island of Great Britain, or politically to England, Scotland and Wales in combination. However, it is sometimes used as a loose synonym for the United Kingdom as a whole. GB and GBR are the standard country codes for the United Kingdom (see ISO 3166 - 2 and ISO 3166 - 1 alpha - 3) and are consequently used by international organisations to refer to the United Kingdom. Additionally, the United Kingdom 's Olympic team competes under the name "Great Britain '' or "Team GB ''. The adjective "British '' is commonly used to refer to matters relating to the United Kingdom. The term has no definite legal connotation, but is used in law to refer to United Kingdom citizenship and matters to do with nationality. 
People of the United Kingdom use a number of different terms to describe their national identity and may identify themselves as being British; or as being English, Scottish, Welsh, Northern Irish, or Irish; or as being both. In 2006, a new design of British passport was introduced. Its first page shows the long form name of the state in English, Welsh and Scottish Gaelic. In Welsh, the long form name of the state is "Teyrnas Unedig Prydain Fawr a Gogledd Iwerddon '', with "Teyrnas Unedig '' being used as a short form name on government websites. However, it is usually abbreviated to "DU '' for the mutated form "Y Deyrnas Unedig ''. In Scottish Gaelic, the long form is "Rìoghachd Aonaichte Bhreatainn is Èireann a Tuath '' and the short form "Rìoghachd Aonaichte ''. Settlement by anatomically modern humans of what was to become the United Kingdom occurred in waves beginning by about 30,000 years ago. By the end of the region 's prehistoric period, the population is thought to have belonged, in the main, to a culture termed Insular Celtic, comprising Brythonic Britain and Gaelic Ireland. The Roman conquest, beginning in 43 AD, and the 400 - year rule of southern Britain, was followed by an invasion by Germanic Anglo - Saxon settlers, reducing the Brythonic area mainly to what was to become Wales and the historic Kingdom of Strathclyde. Most of the region settled by the Anglo - Saxons became unified as the Kingdom of England in the 10th century. Meanwhile, Gaelic - speakers in north - west Britain (with connections to the north - east of Ireland and traditionally supposed to have migrated from there in the 5th century) united with the Picts to create the Kingdom of Scotland in the 9th century. In 1066, the Normans invaded England from France and after its conquest, seized large parts of Wales, conquered much of Ireland and were invited to settle in Scotland, bringing to each country feudalism on the Northern French model and Norman - French culture. The Norman elites greatly influenced, but eventually assimilated with, each of the local cultures. Subsequent medieval English kings completed the conquest of Wales and made an unsuccessful attempt to annex Scotland. Following the Declaration of Arbroath, Scotland maintained its independence, albeit in near - constant conflict with England. The English monarchs, through inheritance of substantial territories in France and claims to the French crown, were also heavily involved in conflicts in France, most notably the Hundred Years War, while the Kings of Scots were in an alliance with the French during this period. The early modern period saw religious conflict resulting from the Reformation and the introduction of Protestant state churches in each country. Wales was fully incorporated into the Kingdom of England, and Ireland was constituted as a kingdom in personal union with the English crown. In what was to become Northern Ireland, the lands of the independent Catholic Gaelic nobility were confiscated and given to Protestant settlers from England and Scotland. In 1603, the kingdoms of England, Scotland and Ireland were united in a personal union when James VI, King of Scots, inherited the crowns of England and Ireland and moved his court from Edinburgh to London; each country nevertheless remained a separate political entity and retained its separate political, legal, and religious institutions. 
In the mid-17th century, all three kingdoms were involved in a series of connected wars (including the English Civil War) which led to the temporary overthrow of the monarchy and the establishment of the short - lived unitary republic of the Commonwealth of England, Scotland and Ireland. During the 17th and 18th centuries, British sailors were involved in acts of piracy (privateering), attacking and stealing from ships off the coast of Europe and the Caribbean. Although the monarchy was restored, the Interregnum ensured (along with the Glorious Revolution of 1688 and the subsequent Bill of Rights 1689, and the Claim of Right Act 1689) that, unlike much of the rest of Europe, royal absolutism would not prevail, and a professed Catholic could never accede to the throne. The British constitution would develop on the basis of constitutional monarchy and the parliamentary system. With the founding of the Royal Society in 1660, science was greatly encouraged. During this period, particularly in England, the development of naval power (and the interest in voyages of discovery) led to the acquisition and settlement of overseas colonies, particularly in North America. On 1 May 1707, the united Kingdom of Great Britain came into being, the result of Acts of Union being passed by the parliaments of England and Scotland to ratify the 1706 Treaty of Union and so unite the two kingdoms. In the 18th century, cabinet government developed under Robert Walpole, in practice the first prime minister (1721 -- 1742). A series of Jacobite Uprisings sought to remove the Protestant House of Hanover from the British throne and restore the Catholic House of Stuart. The Jacobites were finally defeated at the Battle of Culloden in 1746, after which the Scottish Highlanders were brutally suppressed. The British colonies in North America that broke away from Britain in the American War of Independence became the United States of America, recognised by Britain in 1783. British imperial ambition turned elsewhere, particularly to India. During the 18th century, Britain was involved in the Atlantic slave trade. British ships transported an estimated two million slaves from Africa to the West Indies before banning the trade in 1807, banning slavery in 1833, and taking a leading role in the movement to abolish slavery worldwide by pressing other nations to end their trade with a series of treaties, and then formed the world 's oldest international human rights organisation, Anti-Slavery International, in London in 1839. The term "United Kingdom '' became official in 1801 when the parliaments of Britain and Ireland each passed an Act of Union, uniting the two kingdoms and creating the United Kingdom of Great Britain and Ireland. In the early 19th century, the British - led Industrial Revolution began to transform the country. Gradually political power shifted away from the old Tory and Whig landowning classes towards the new industrialists. An alliance of merchants and industrialists with the Whigs would lead to a new party, the Liberals, with an ideology of free trade and laissez - faire. In 1832 Parliament passed the Great Reform Act, which began the transfer of political power from the aristocracy to the middle classes. In the countryside, enclosure of the land was driving small farmers out. Towns and cities began to swell with a new urban working class. Few ordinary workers had the vote, and they created their own organisations in the form of trade unions. 
After the defeat of France at the end of the Revolutionary and Napoleonic Wars (1792 -- 1815), Great Britain emerged as the principal naval and imperial power of the 19th century (with London the largest city in the world from about 1830). Unchallenged at sea, British dominance was later described as Pax Britannica ("British Peace ''), a period of relative peace in Europe and the world (1815 -- 1914) during which the British Empire became the global hegemon and adopted the role of global policeman. By the time of the Great Exhibition of 1851, Britain was described as the "workshop of the world ''. The British Empire was expanded to include India, large parts of Africa and many other territories throughout the world. Alongside the formal control it exerted over its own colonies, British dominance of much of world trade meant that it effectively controlled the economies of many regions, such as Asia and Latin America. Domestically, political attitudes favoured free trade and laissez - faire policies and a gradual widening of the voting franchise. During the century, the population increased at a dramatic rate, accompanied by rapid urbanisation, causing significant social and economic stresses. To seek new markets and sources of raw materials, the Conservative Party under Disraeli launched a period of imperialist expansion in Egypt, South Africa, and elsewhere. Canada, Australia, and New Zealand became self - governing dominions. After the turn of the century, Britain 's industrial dominance was challenged by the United States and Germany. Social reform and home rule for Ireland were important domestic issues after 1900. The Labour Party emerged from an alliance of trade unions and small socialist groups in 1900, and suffragettes campaigned for women 's right to vote before 1914. Britain fought alongside France, Russia and (after 1917) the United States, against Germany and its allies in the First World War (1914 -- 1918). British armed forces were engaged across much of the British Empire and in several regions of Europe, particularly on the Western front. The high fatalities of trench warfare caused the loss of much of a generation of men, with lasting social effects in the nation and a great disruption in the social order. After the war, Britain received the League of Nations mandate over a number of former German and Ottoman colonies. The British Empire reached its greatest extent, covering a fifth of the world 's land surface and a quarter of its population. However, Britain had suffered 2.5 million casualties and finished the war with a huge national debt. The rise of Irish nationalism, and disputes within Ireland over the terms of Irish Home Rule, led eventually to the partition of the island in 1921. The Irish Free State became independent with Dominion status in 1922. Northern Ireland remained part of the United Kingdom. A wave of strikes in the mid-1920s culminated in the General Strike of 1926. Britain had still not recovered from the effects of the war when the Great Depression (1929 -- 1932) occurred. This led to considerable unemployment and hardship in the old industrial areas, as well as political and social unrest in the 1930s, with rising membership in communist and socialist parties. A coalition government was formed in 1931. Britain entered the Second World War by declaring war on Nazi Germany in 1939, after it had invaded Poland. Winston Churchill became prime minister and head of a coalition government in 1940. 
Despite the defeat of its European allies in the first year of the war, Britain and its Empire continued the fight alone against Germany. In 1940, the Royal Air Force defeated the German Luftwaffe in a struggle for control of the skies in the Battle of Britain. Urban areas suffered heavy bombing during the Blitz. There were also eventual hard - fought victories in the Battle of the Atlantic, the North Africa campaign and the Burma campaign. British forces played an important role in the Normandy landings of 1944, achieved with their United States ally. After the end of the Second World War in 1945, the UK was one of the Big Four powers (the Soviet Union, the United Kingdom, the US and China) that met to plan the post-war world; it was an original signatory to the Declaration of the United Nations. The UK became one of the five permanent members of the United Nations Security Council. However, the war left the UK severely weakened and financially dependent on the Marshall Plan. In the immediate post-war years, the Labour government initiated a radical programme of reforms, which had a significant effect on British society in the following decades. Major industries and public utilities were nationalised, a welfare state was established, and a comprehensive, publicly funded healthcare system, the National Health Service, was created. The rise of nationalism in the colonies coincided with Britain 's now much - diminished economic position, so that a policy of decolonisation was unavoidable. Independence was granted to India and Pakistan in 1947. Over the next three decades, most colonies of the British Empire gained their independence. Many became members of the Commonwealth of Nations. Although the UK was the third country to develop a nuclear weapons arsenal (with its first atomic bomb test in 1952), the new post-war limits of Britain 's international role were illustrated by the Suez Crisis of 1956. The international spread of the English language ensured the continuing international influence of its literature and culture. As a result of a shortage of workers in the 1950s, the government encouraged immigration from Commonwealth countries. In the following decades, the UK became a more multi-ethnic society than before. Despite rising living standards in the late 1950s and 60s, the UK 's economic performance was less successful than that of many of its main competitors, such as France, West Germany and Japan. In the decades - long process of European integration, the UK was a founding member of the alliance called the Western European Union, established with the London and Paris Conferences in 1954. In 1960 the UK was one of the seven founding members of the European Free Trade Association (EFTA), but in 1973 it left to join the European Communities (EC). When the EC became the European Union (EU) in 1992, the UK was one of the 12 founding members. The Treaty of Lisbon, signed in 2007, has since formed the constitutional basis of the European Union. From the late 1960s, Northern Ireland suffered communal and paramilitary violence (sometimes affecting other parts of the UK) conventionally known as the Troubles. It is usually considered to have ended with the Belfast "Good Friday '' Agreement of 1998.
Following a period of widespread economic slowdown and industrial strife in the 1970s, the Conservative government of the 1980s under Margaret Thatcher initiated a radical policy of monetarism, deregulation, particularly of the financial sector (for example, Big Bang in 1986) and labour markets, the sale of state - owned companies (privatisation), and the withdrawal of subsidies to others. This resulted in high unemployment and social unrest, but ultimately also economic growth, particularly in the services sector. From 1984, the economy was helped by the inflow of substantial North Sea oil revenues. Around the end of the 20th century there were major changes to the governance of the UK with the establishment of devolved administrations for Scotland, Wales and Northern Ireland. These changes were accompanied by the statutory incorporation of the European Convention on Human Rights into UK law. The UK is still a key global player diplomatically and militarily. It plays leading roles in the EU, UN and NATO. However, controversy surrounds some of Britain 's overseas military deployments, particularly in Afghanistan and Iraq. The 2008 global financial crisis severely affected the UK economy. The coalition government of 2010 introduced austerity measures intended to tackle the substantial public deficits which resulted. In 2014 the Scottish Government held a referendum on Scottish independence, with 55 % of voters rejecting the independence proposal and opting to remain within the United Kingdom. In 2016, the United Kingdom voted to leave the European Union. The legal process of leaving the EU began on 29 March 2017, with the UK 's invocation of Article 50 of the Treaty of Lisbon, formally notifying the EU of the UK 's intention to leave. The article gives the parties up to two years to conclude the negotiations to leave, unless an extension is agreed. The UK remains a full member of the EU during this time. The total area of the United Kingdom is approximately 243,610 square kilometres (94,060 sq mi). The country occupies the major part of the British Isles archipelago and includes the island of Great Britain, the north - eastern one - sixth of the island of Ireland and some smaller surrounding islands. It lies between the North Atlantic Ocean and the North Sea, with the south - east coast coming within 22 miles (35 km) of the coast of northern France, from which it is separated by the English Channel. In 1993, 10 % of the UK was forested, 46 % used for pastures and 25 % cultivated for agriculture. The Royal Greenwich Observatory in London is the defining point of the Prime Meridian. The United Kingdom lies between latitudes 49 ° and 61 ° N, and longitudes 9 ° W and 2 ° E. Northern Ireland shares a 224 - mile (360 km) land boundary with the Republic of Ireland. The coastline of Great Britain is 11,073 miles (17,820 km) long. It is connected to continental Europe by the Channel Tunnel, which at 31 miles (50 km) (24 miles (38 km) underwater) is the longest underwater tunnel in the world. England accounts for just over half of the total area of the UK, covering 130,395 square kilometres (50,350 sq mi). Most of the country consists of lowland terrain, with mountainous terrain north - west of the Tees - Exe line, including the Cumbrian Mountains of the Lake District, the Pennines, Exmoor and Dartmoor. The main rivers and estuaries are the Thames, Severn and the Humber. England 's highest mountain is Scafell Pike (978 metres (3,209 ft)) in the Lake District. Its principal rivers are the Severn, Thames, Humber, Tees, Tyne, Tweed, Avon, Exe and Mersey.
Scotland accounts for just under a third of the total area of the UK, covering 78,772 square kilometres (30,410 sq mi) and including nearly eight hundred islands, predominantly west and north of the mainland; notably the Hebrides, Orkney Islands and Shetland Islands. Scotland is the most mountainous country in the UK and its topography is distinguished by the Highland Boundary Fault -- a geological rock fracture -- which traverses Scotland from Arran in the west to Stonehaven in the east. The fault separates two distinctly different regions, namely the Highlands to the north and west and the lowlands to the south and east. The more rugged Highland region contains the majority of Scotland 's mountainous land, including Ben Nevis, which at 1,343 metres (4,406 ft) is the highest point in the British Isles. Lowland areas -- especially the narrow waist of land between the Firth of Clyde and the Firth of Forth known as the Central Belt -- are flatter and home to most of the population, including Glasgow, Scotland 's largest city, and Edinburgh, its capital and political centre, although upland and mountainous terrain lies within the Southern Uplands. Wales accounts for less than a tenth of the total area of the UK, covering 20,779 square kilometres (8,020 sq mi). Wales is mostly mountainous, though South Wales is less mountainous than North and mid Wales. The main population and industrial areas are in South Wales, consisting of the coastal cities of Cardiff, Swansea and Newport, and the South Wales Valleys to their north. The highest mountains in Wales are in Snowdonia and include Snowdon (Welsh: Yr Wyddfa) which, at 1,085 metres (3,560 ft), is the highest peak in Wales. The 14, or possibly 15, Welsh mountains over 3,000 feet (910 metres) high are known collectively as the Welsh 3000s. Wales has over 2,704 kilometres (1,680 miles) of coastline. Several islands lie off the Welsh mainland, the largest of which is Anglesey (Ynys Môn) in the north - west. Northern Ireland, separated from Great Britain by the Irish Sea and North Channel, has an area of 14,160 square kilometres (5,470 sq mi) and is mostly hilly. It includes Lough Neagh which, at 388 square kilometres (150 sq mi), is the largest lake in the British Isles by area. The highest peak in Northern Ireland is Slieve Donard in the Mourne Mountains at 852 metres (2,795 ft). The United Kingdom has a temperate climate, with plentiful rainfall all year round. The temperature varies with the seasons, seldom dropping below − 11 ° C (12 ° F) or rising above 35 ° C (95 ° F). The prevailing wind is from the south - west and bears frequent spells of mild and wet weather from the Atlantic Ocean, although the eastern parts are mostly sheltered from this wind; since the majority of the rain falls over the western regions, the eastern parts are therefore the driest. Atlantic currents, warmed by the Gulf Stream, bring mild winters, especially in the west, where winters are wet, even more so over high ground. Summers are warmest in the south - east of England, being closest to the European mainland, and coolest in the north. Heavy snowfall can occur in winter and early spring on high ground, and occasionally settles to great depth away from the hills. There is no consistent system of administrative or geographic demarcation across the United Kingdom. Each country of the United Kingdom has its own arrangements, whose origins often pre-date the UK 's formation.
Until the 19th century there was little change to those arrangements, but there has since been a constant evolution of role and function, most significantly the devolution of powers to Scotland, Wales and Northern Ireland. The organisation of local government in England is complex, with the distribution of functions varying according to local arrangements. Legislation concerning local government in England is the responsibility of the UK 's parliament and the government, as England has no devolved legislature. The upper - tier subdivisions of England are the nine regions, now used primarily for statistical purposes. One region, Greater London, has had a directly elected assembly and mayor since 2000 following popular support for the proposal in a referendum. It was intended that other regions would also be given their own elected regional assemblies, but a proposed assembly in the North East region was rejected by a referendum in 2004. Below the regional tier, some parts of England have county councils and district councils and others have unitary authorities; while London consists of 32 London boroughs and the City of London. Councillors are elected by the first - past - the - post system in single - member wards or by the multi-member plurality system in multi-member wards. For local government purposes, Scotland is divided into 32 council areas, with wide variation in both size and population. The cities of Glasgow, Edinburgh, Aberdeen and Dundee are separate council areas, as is the Highland Council which includes a third of Scotland 's area but only just over 200,000 people. Local councils are made up of elected councillors, of whom there are 1,223; they are paid a part - time salary. Elections are conducted by single transferable vote in multi-member wards that elect either three or four councillors. Each council elects a Provost, or Convenor, to chair meetings of the council and to act as a figurehead for the area. Councillors are subject to a code of conduct enforced by the Standards Commission for Scotland. The representative association of Scotland 's local authorities is the Convention of Scottish Local Authorities (COSLA). Local government in Wales consists of 22 unitary authorities. These include the cities of Cardiff, Swansea and Newport which are unitary authorities in their own right. Elections are held every four years under the first - past - the - post system. The most recent elections were held in May 2012, except for the Isle of Anglesey. The Welsh Local Government Association represents the interests of local authorities in Wales. Local government in Northern Ireland has since 1973 been organised into 26 district councils, each elected by single transferable vote. Their powers are limited to services such as collecting waste, controlling dogs and maintaining parks and cemeteries. On 13 March 2008 the executive agreed on proposals to create 11 new councils and replace the present system. The next local elections were postponed until 2016 to facilitate this. The United Kingdom has sovereignty over seventeen territories which do not form part of the United Kingdom itself: fourteen British Overseas Territories and three Crown dependencies. 
The fourteen British Overseas Territories are: Anguilla; Bermuda; the British Antarctic Territory; the British Indian Ocean Territory; the British Virgin Islands; the Cayman Islands; the Falkland Islands; Gibraltar; Montserrat; Saint Helena, Ascension and Tristan da Cunha; the Turks and Caicos Islands; the Pitcairn Islands; South Georgia and the South Sandwich Islands; and Akrotiri and Dhekelia on the island of Cyprus. British claims in Antarctica are not universally recognised. Collectively Britain 's overseas territories encompass an approximate land area of 1,727,570 square kilometres (667,018 sq mi) and a population of approximately 260,000 people. They are the last remaining remnants of the British Empire and a 1999 UK government white paper stated that: "(The) Overseas Territories are British for as long as they wish to remain British. Britain has willingly granted independence where it has been requested; and we will continue to do so where this is an option. '' Self - determination is also enshrined into the constitutions of several overseas territories and three have specifically voted to remain under British sovereignty (Bermuda in 1995, Gibraltar in 2002 and the Falkland Islands in 2013). The Crown dependencies are possessions of the Crown, as opposed to overseas territories of the UK. They comprise three independently administered jurisdictions: the Channel Islands of Jersey and Guernsey in the English Channel, and the Isle of Man in the Irish Sea. By mutual agreement, the British Government manages the islands ' foreign affairs and defence and the UK Parliament has the authority to legislate on their behalf. However, internationally, they are regarded as "territories for which the United Kingdom is responsible ''. The power to pass legislation affecting the islands ultimately rests with their own respective legislative assemblies, with the assent of the Crown (Privy Council or, in the case of the Isle of Man, in certain circumstances the Lieutenant - Governor). Since 2005 each Crown dependency has had a Chief Minister as its head of government. The British dependencies use a varied assortment of currencies. These include the British pound, US dollar, New Zealand dollar, euro or their own currencies, which may be pegged to either. The United Kingdom is a unitary state under a constitutional monarchy. Queen Elizabeth II is the monarch and head of state of the UK, as well as Queen of fifteen other independent Commonwealth countries. The monarch has "the right to be consulted, the right to encourage, and the right to warn ''. The Constitution of the United Kingdom is uncodified and consists mostly of a collection of disparate written sources, including statutes, judge - made case law and international treaties, together with constitutional conventions. As there is no technical difference between ordinary statutes and "constitutional law '', the UK Parliament can perform "constitutional reform '' simply by passing Acts of Parliament, and thus has the political power to change or abolish almost any written or unwritten element of the constitution. However, no Parliament can pass laws that future Parliaments can not change. The UK has a parliamentary government based on the Westminster system that has been emulated around the world: a legacy of the British Empire. The parliament of the United Kingdom meets in the Palace of Westminster and has two houses: an elected House of Commons and an appointed House of Lords. All bills passed are given Royal Assent before becoming law. 
The position of prime minister, the UK 's head of government, belongs to the person most likely to command the confidence of the House of Commons; this individual is typically the leader of the political party or coalition of parties that holds the largest number of seats in that chamber. The prime minister chooses a cabinet and its members are formally appointed by the monarch to form Her Majesty 's Government. By convention, the monarch respects the prime minister 's decisions of government. The cabinet is traditionally drawn from members of the prime minister 's party or coalition and mostly from the House of Commons but always from both legislative houses, the cabinet being responsible to both. Executive power is exercised by the prime minister and cabinet, all of whom are sworn into the Privy Council of the United Kingdom, and become Ministers of the Crown. The current Prime Minister is Theresa May, who has been in office since 13 July 2016. May is also the leader of the Conservative Party. For elections to the House of Commons, the UK is divided into 650 constituencies, each electing a single member of parliament (MP) by simple plurality. General elections are called by the monarch when the prime minister so advises. Prior to the Fixed - term Parliaments Act 2011, the Parliament Acts 1911 and 1949 required that a new election must be called no later than five years after the previous general election. The Conservative Party, the Labour Party and the Liberal Democrats (formerly as the Liberal Party) have, in modern times, been considered the UK 's three major political parties, representing the British traditions of conservatism, socialism and liberalism, respectively. However, in the 2017 general election the Scottish National Party was the third - largest party by number of seats won, ahead of the Liberal Democrats. Most of the remaining seats were won by parties that contest elections only in one part of the UK: Plaid Cymru (Wales only); and the Democratic Unionist Party and Sinn Féin (Northern Ireland only). In accordance with party policy, no elected Sinn Féin members of parliament have ever attended the House of Commons to speak on behalf of their constituents because of the requirement to take an oath of allegiance to the monarch. Scotland, Wales and Northern Ireland each have their own government or executive, led by a First Minister (or, in the case of Northern Ireland, a diarchal First Minister and deputy First Minister), and a devolved unicameral legislature. England, the largest country of the United Kingdom, has no such devolved executive or legislature and is administered and legislated for directly by the UK 's government and parliament on all issues. This situation has given rise to the so - called West Lothian question which concerns the fact that members of parliament from Scotland, Wales and Northern Ireland can vote, sometimes decisively, on matters that only affect England. The McKay Commission reported on this matter in March 2013 recommending that laws affecting only England should need support from a majority of English members of parliament. The Scottish Government and Parliament have wide - ranging powers over any matter that has not been specifically reserved to the UK Parliament, including education, healthcare, Scots law and local government. At the 2011 elections the Scottish National Party won re-election and achieved an overall majority in the Scottish Parliament, with its leader, Alex Salmond, as First Minister of Scotland. 
In 2012, the UK and Scottish governments signed the Edinburgh Agreement setting out the terms for a referendum on Scottish independence in 2014, which was defeated 55.3 % to 44.7 %. The Welsh Government and the National Assembly for Wales have more limited powers than those devolved to Scotland. The Assembly is able to legislate on devolved matters through Acts of the Assembly, which require no prior consent from Westminster. The 2011 elections resulted in a minority Labour administration led by Carwyn Jones. The Northern Ireland Executive and Assembly have powers similar to those devolved to Scotland. The Executive is led by a diarchy representing unionist and nationalist members of the Assembly. Arlene Foster (Democratic Unionist Party) and Martin McGuinness (Sinn Féin) were First Minister and deputy First Minister respectively until 2017. Devolution to Northern Ireland is contingent on participation by the Northern Ireland administration in the North - South Ministerial Council, where the Northern Ireland Executive cooperates and develops joint and shared policies with the Government of Ireland. The British and Irish governments co-operate on non-devolved matters affecting Northern Ireland through the British -- Irish Intergovernmental Conference, which assumes the responsibilities of the Northern Ireland administration in the event of its non-operation. The UK does not have a codified constitution and constitutional matters are not among the powers devolved to Scotland, Wales or Northern Ireland. Under the doctrine of parliamentary sovereignty, the UK Parliament could, in theory, therefore, abolish the Scottish Parliament, Welsh Assembly or Northern Ireland Assembly. Indeed, in 1972, the UK Parliament unilaterally prorogued the Parliament of Northern Ireland, setting a precedent relevant to contemporary devolved institutions. In practice, it would be politically difficult for the UK Parliament to abolish devolution to the Scottish Parliament and the Welsh Assembly, given the political entrenchment created by referendum decisions. The political constraints placed upon the UK Parliament 's power to interfere with devolution in Northern Ireland are even greater than in relation to Scotland and Wales, given that devolution in Northern Ireland rests upon an international agreement with the Government of Ireland. The United Kingdom does not have a single legal system, as Article 19 of the 1706 Treaty of Union provided for the continuation of Scotland 's separate legal system. Today the UK has three distinct systems of law: English law, Northern Ireland law and Scots law. A new Supreme Court of the United Kingdom came into being in October 2009 to replace the Appellate Committee of the House of Lords. The Judicial Committee of the Privy Council, including the same members as the Supreme Court, is the highest court of appeal for several independent Commonwealth countries, the British Overseas Territories and the Crown Dependencies. Both English law, which applies in England and Wales, and Northern Ireland law are based on common - law principles. The essence of common law is that, subject to statute, the law is developed by judges in courts, applying statute, precedent and common sense to the facts before them to give explanatory judgements of the relevant legal principles, which are reported and binding in future similar cases (stare decisis). 
The courts of England and Wales are headed by the Senior Courts of England and Wales, consisting of the Court of Appeal, the High Court of Justice (for civil cases) and the Crown Court (for criminal cases). The Supreme Court is the highest court in the land for both criminal and civil appeal cases in England, Wales and Northern Ireland, and any decision it makes is binding on every other court in the same jurisdiction, often having a persuasive effect in other jurisdictions. Scots law is a hybrid system based on both common - law and civil - law principles. The chief courts are the Court of Session, for civil cases, and the High Court of Justiciary, for criminal cases. The Supreme Court of the United Kingdom serves as the highest court of appeal for civil cases under Scots law. Sheriff courts deal with most civil and criminal cases, including conducting criminal trials with a jury, known as sheriff solemn court, or with a sheriff and no jury, known as sheriff summary court. The Scots legal system is unique in having three possible verdicts for a criminal trial: "guilty '', "not guilty '' and "not proven ''. Both "not guilty '' and "not proven '' result in an acquittal. Crime in England and Wales increased in the period between 1981 and 1995, though since that peak there has been an overall fall of 66 % in recorded crime from 1995 to 2015, according to crime statistics. The prison population of England and Wales has increased to 86,000, giving England and Wales the highest rate of incarceration in Western Europe at 148 per 100,000. Her Majesty 's Prison Service, which reports to the Ministry of Justice, manages most of the prisons within England and Wales. The murder rate in England and Wales stabilised in the first half of the 2010s at around 1 per 100,000, which is half the peak in 2002 and similar to the rate in the 1980s. More sexual offences have been reported to the police since 2002. Crime in Scotland fell slightly in 2014 / 2015 to its lowest level in 39 years, with 59 killings, for a murder rate of 1.1 per 100,000. Scotland 's prisons are overcrowded but the prison population is shrinking. The UK is a permanent member of the United Nations Security Council, a member of NATO, the Commonwealth of Nations, the G7 finance ministers, the G7 forum (previously the G8 forum), the G20, the OECD, the WTO, the Council of Europe and the OSCE. It is also a member state of the European Union, though it is in the process of withdrawing. The UK is said to have a "Special Relationship '' with the United States and a close partnership with France -- the "Entente cordiale '' -- and shares nuclear weapons technology with both countries. The UK is also closely linked with the Republic of Ireland; the two countries share a Common Travel Area and co-operate through the British - Irish Intergovernmental Conference and the British - Irish Council. Britain 's global presence and influence are further amplified through its trading relations, foreign investments, official development assistance and military engagements. The armed forces of the United Kingdom -- officially, Her Majesty 's Armed Forces -- consist of three professional service branches: the Royal Navy and Royal Marines (forming the Naval Service), the British Army and the Royal Air Force. The forces are managed by the Ministry of Defence and controlled by the Defence Council, chaired by the Secretary of State for Defence. The Commander - in - Chief is the British monarch, Elizabeth II, to whom members of the forces swear an oath of allegiance.
The Armed Forces are charged with protecting the UK and its overseas territories, promoting the UK 's global security interests and supporting international peacekeeping efforts. They are active and regular participants in NATO, including the Allied Rapid Reaction Corps, as well as the Five Power Defence Arrangements, RIMPAC and other worldwide coalition operations. Overseas garrisons and facilities are maintained in Ascension Island, Belize, Brunei, Canada, Cyprus, Diego Garcia, the Falkland Islands, Germany, Gibraltar, Kenya, Qatar and Singapore. The British armed forces played a key role in establishing the British Empire as the dominant world power in the 18th, 19th and early 20th centuries. Throughout its unique history the British forces have seen action in a number of major wars, such as the Seven Years ' War, the Napoleonic Wars, the Crimean War, the First World War and the Second World War -- as well as many colonial conflicts. By emerging victorious from such conflicts, Britain has often been able to decisively influence world events. Since the end of the British Empire, the UK has nonetheless remained a major military power. Following the end of the Cold War, defence policy has a stated assumption that "the most demanding operations '' will be undertaken as part of a coalition. Setting aside the intervention in Sierra Leone, recent UK military operations in Bosnia, Kosovo, Afghanistan, Iraq and, most recently, Libya, have followed this approach. The last occasion on which the British military fought alone was the Falklands War of 1982. According to various sources, including the Stockholm International Peace Research Institute and the International Institute for Strategic Studies, the United Kingdom has the fourth - or fifth - highest military expenditure in the world. Total defence spending amounts to 2.0 % of national GDP. The UK has a partially regulated market economy. Based on market exchange rates, the UK is today the fifth - largest economy in the world and the second - largest in Europe after Germany. HM Treasury, led by the Chancellor of the Exchequer, is responsible for developing and executing the government 's public finance policy and economic policy. The Bank of England is the UK 's central bank and is responsible for issuing notes and coins in the nation 's currency, the pound sterling. Banks in Scotland and Northern Ireland retain the right to issue their own notes, subject to retaining enough Bank of England notes in reserve to cover their issue. The pound sterling is the world 's third - largest reserve currency (after the US dollar and the euro). Since 1997 the Bank of England 's Monetary Policy Committee, headed by the Governor of the Bank of England, has been responsible for setting interest rates at the level necessary to achieve the overall inflation target for the economy that is set by the Chancellor each year. The UK service sector makes up around 73 % of GDP. London is one of the three "command centres '' of the global economy (alongside New York City and Tokyo), it is the world 's largest financial centre alongside New York, and it has the largest city GDP in Europe. Edinburgh is also one of the largest financial centres in Europe. Tourism is very important to the British economy; with over 27 million tourists arriving in 2004, the United Kingdom is ranked as the sixth major tourist destination in the world and London has the most international visitors of any city in the world. 
The creative industries accounted for 7 % of GVA in 2005 and grew at an average of 6 % per annum between 1997 and 2005. The Industrial Revolution started in the UK with an initial concentration on the textile industry, followed by other heavy industries such as shipbuilding, coal mining and steelmaking. British merchants, shippers and bankers developed an overwhelming advantage over those of other nations, allowing the UK to dominate international trade in the 19th century. As other nations industrialised, coupled with economic decline after two world wars, the United Kingdom began to lose its competitive advantage and heavy industry declined, by degrees, throughout the 20th century. Manufacturing remains a significant part of the economy but accounted for only 16.7 % of national output in 2003. The automotive industry is a significant part of the UK manufacturing sector and employs around 800,000 people, with a turnover in 2015 of some £ 70 billion, generating £ 34.6 billion of exports (11.8 % of the UK 's total export goods). In 2015, the UK produced around 1.6 million passenger vehicles and 94,500 commercial vehicles. The UK is a major centre for engine manufacturing and in 2015 around 2.4 million engines were produced in the country. The UK has a significant presence in motor racing and the UK motorsport industry employs around 41,000 people, comprises around 4,500 companies and has an annual turnover of around £ 6 billion. The aerospace industry of the UK is the second - or third - largest national aerospace industry in the world depending upon the method of measurement and has an annual turnover of around £ 30 billion. In 2016, the global market opportunity for UK aerospace manufacturers over the next two decades was estimated to be £ 3.5 trillion. The wings for the Airbus A380 and the A350 XWB are designed and manufactured at Airbus UK 's world - leading Broughton facility, whilst over a quarter of the value of the Boeing 787 comes from UK manufacturers including Eaton (fuel subsystem pumps), Messier - Bugatti - Dowty (the landing gear) and Rolls - Royce (the engines). Other key names include GKN Aerospace -- an expert in metallic and composite aerostructures involved in almost every civil and military fixed and rotary wing aircraft in production and development today. BAE Systems plays a critical role in some of the world 's biggest defence aerospace projects. The company makes large sections of the Eurofighter Typhoon at its sub-assembly plant in Samlesbury and assembles the aircraft for the Royal Air Force at its Warton Plant, near Preston. It is also a principal subcontractor on the F35 Joint Strike Fighter -- the world 's largest single defence project -- for which it designs and manufactures a range of components including the aft fuselage, vertical and horizontal tail, wing tips and fuel system. As well as this, it manufactures the Hawk, the world 's most successful jet training aircraft. Airbus UK also manufactures the wings for the A400M military transporter. Rolls - Royce is the world 's second - largest aero - engine manufacturer. Its engines power more than 30 types of commercial aircraft and it has more than 30,000 engines in service in the civil and defence sectors. Rolls - Royce is forecast to have more than 50 % of the widebody market share by 2016, ahead of General Electric. AgustaWestland designs and manufactures complete helicopters in the UK. The UK space industry was worth £ 9.1 bn in 2011 and employed 29,000 people.
It is growing at a rate of 7.5 % annually, according to its umbrella organisation, the UK Space Agency. Government strategy is for the space industry to be a £ 40bn business for the UK by 2030, capturing a 10 % share of the $250 bn world market for commercial space technology. On 16 July 2013, the British Government pledged £ 60 m to the Skylon project: this investment will provide support at a "crucial stage '' to allow a full - scale prototype of the SABRE engine to be built. On 2 November 2015, BAE Systems announced that it had bought a 20 % stake in Reaction Engines Ltd. The working partnership will draw on BAE Systems ' extensive aerospace technology development and project management expertise and will provide Reaction Engines with access to critical industrial, technical and capital resources to help progress the development of the SABRE engine. The pharmaceutical industry plays an important role in the UK economy and the country has the third - highest share of global pharmaceutical R&D expenditure (after the United States and Japan). Agriculture is intensive, highly mechanised and efficient by European standards, producing about 60 % of food needs with less than 1.6 % of the labour force (535,000 workers). Around two - thirds of production is devoted to livestock, one - third to arable crops. Farmers are subsidised by the EU 's Common Agricultural Policy. The UK retains a significant, though much reduced, fishing industry. It is also rich in a number of natural resources including coal, petroleum, natural gas, tin, limestone, iron ore, salt, clay, chalk, gypsum, lead, silica and an abundance of arable land. In the final quarter of 2008, as a result of the Great Recession, the UK economy officially entered recession for the first time since 1991. Unemployment increased from 5.2 % in May 2008 to 7.6 % in May 2009, and by January 2012 the unemployment rate among 18 - to 24 - year - olds had risen from 11.9 % to 22.5 %, the highest since current records began in 1992, although it had fallen to 14.2 % by November 2015. Total UK government debt rose quickly from 44.4 % of GDP in 2007 to 82.9 % of GDP in 2011, then increased more slowly to 87.5 % of GDP in 2015. Following the likes of the United States, France and other major economies, in February 2013 the UK lost its top AAA credit rating with the Moody 's and Fitch credit agencies for the first time since 1978, although, unlike several other major economies, it retained its triple - A rating with the largest agency, Standard & Poor 's. However, by the end of 2014, UK growth was the fastest in both the G7 and in Europe, and by September 2015 the unemployment rate was down to a seven - year low of 5.3 %. As a direct result of the Great Recession, wages in the UK fell by 3.2 % between 2010 and the third quarter of 2012, but by 2015 real wages were growing by 3 %, having grown faster than inflation since 2014. Since the 1980s, economic inequality in the UK, as in Canada, Australia and the United States, has grown faster than in other developed countries. The poverty line in the UK is commonly defined as 60 % of the median household income. In 2007 -- 2008, 13.5 million people, or 22 % of the population, lived below this line. This is a higher level of relative poverty than in all but four other EU members. In the same year 4.0 million children, 31 % of the total, lived in households below the poverty line after housing costs were taken into account, a decrease of 400,000 children since 1998 -- 1999. The UK imports 40 % of its food supplies. 
The Office for National Statistics has estimated that in 2011, 14 million people were at risk of poverty or social exclusion, and that one person in 20 (5.1 %) was now experiencing "severe material deprivation '', up from 3 million people in 1977. The UK has an external debt of $9.6 trillion, the second highest in the world after that of the US ($18.5 trillion). As a percentage of GDP, external debt is 408 %, the third highest in the world after Luxembourg and Iceland. The combination of the UK 's relatively lax regulatory regime and London 's financial institutions providing sophisticated methods to launder proceeds from criminal activity around the world, including those from the drug trade, makes the City of London a global hub for illicit finance and the UK a safe haven for the world 's major - league tax dodgers, according to research papers and reports published in the mid-2010s. The reports on the Panama Papers published in April 2016 singled out the UK as being "at the heart of super-rich tax - avoidance network ''. England and Scotland were leading centres of the Scientific Revolution from the 17th century and the United Kingdom led the Industrial Revolution from the 18th century, and it has continued to produce scientists and engineers credited with important advances. Major theorists from the 17th and 18th centuries include Isaac Newton, whose laws of motion and illumination of gravity have been seen as a keystone of modern science; from the 19th century Charles Darwin, whose theory of evolution by natural selection was fundamental to the development of modern biology, and James Clerk Maxwell, who formulated classical electromagnetic theory; and more recently Stephen Hawking, who has advanced major theories in the fields of cosmology, quantum gravity and the investigation of black holes. Major scientific discoveries from the 18th century include hydrogen by Henry Cavendish; from the 20th century, penicillin by Alexander Fleming and the structure of DNA by Francis Crick and others. Famous British engineers and inventors of the Industrial Revolution include James Watt, George Stephenson, Richard Arkwright, Robert Stephenson and Isambard Kingdom Brunel. Other major engineering projects and applications by people from the UK include the steam locomotive, developed by Richard Trevithick and Andrew Vivian; from the 19th century the electric motor by Michael Faraday, the incandescent light bulb by Joseph Swan, and the first practical telephone, patented by Alexander Graham Bell; and in the 20th century the world 's first working television system by John Logie Baird and others, the jet engine by Frank Whittle, the basis of the modern computer by Alan Turing, and the World Wide Web by Tim Berners - Lee. Scientific research and development remains important in British universities, with many establishing science parks to facilitate production and co-operation with industry. Between 2004 and 2008 the UK produced 7 % of the world 's scientific research papers and had an 8 % share of scientific citations, the third and second highest in the world (after the United States and China, respectively). Scientific journals produced in the UK include Nature, the British Medical Journal and The Lancet. A radial road network totals 29,145 miles (46,904 km) of main roads, 2,173 miles (3,497 km) of motorways and 213,750 miles (344,000 km) of paved roads. The M25, encircling London, is the largest and busiest bypass in the world. 
In 2009 there were a total of 34 million licensed vehicles in Great Britain. The UK has a railway network of 10,072 miles (16,209 km) in Great Britain and 189 miles (304 km) in Northern Ireland. Railways in Northern Ireland are operated by NI Railways, a subsidiary of state - owned Translink. In Great Britain, the British Rail network was privatised between 1994 and 1997, which was followed by a rapid rise in passenger numbers following years of decline, although the factors behind this are disputed. Network Rail owns and manages most of the fixed assets (tracks, signals etc.). About 20 privately owned Train Operating Companies operate passenger trains, which carried 1.68 billion passengers in 2015. There are also some 1,000 freight trains in daily operation. The British Government is to spend £ 30 billion on a new high - speed railway line, HS2, to be operational by 2026. Crossrail, under construction in London, is Europe 's largest construction project with a £ 15 billion projected cost. In the year from October 2009 to September 2010 UK airports handled a total of 211.4 million passengers. In that period the three largest airports were London Heathrow Airport (65.6 million passengers), Gatwick Airport (31.5 million passengers) and London Stansted Airport (18.9 million passengers). London Heathrow Airport, located 15 miles (24 km) west of the capital, has the most international passenger traffic of any airport in the world and is the hub for the UK flag carrier British Airways, as well as Virgin Atlantic. In 2006, the UK was the world 's ninth - largest consumer of energy and the 15th - largest producer. The UK is home to a number of large energy companies, including two of the six oil and gas "supermajors '' -- BP and Royal Dutch Shell -- and BG Group. In 2011, 40 % of the UK 's electricity was produced by gas, 30 % by coal, 19 % by nuclear power and 4.2 % by wind, hydro, biofuels and wastes. In 2013, the UK produced 914 thousand barrels per day (bbl / d) of oil and consumed 1,507 thousand bbl / d. Production is now in decline and the UK has been a net importer of oil since 2005. In 2010 the UK had around 3.1 billion barrels of proven crude oil reserves, the largest of any EU member state. In 2009, 66.5 % of the UK 's oil supply was imported. In 2009, the UK was the 13th - largest producer of natural gas in the world and the largest producer in the EU. Production is now in decline and the UK has been a net importer of natural gas since 2004. In 2009, half of British gas was supplied from imports as domestic reserves are depleted. Coal production played a key role in the UK economy in the 19th and 20th centuries. In the mid-1970s, 130 million tonnes of coal was being produced annually, not falling below 100 million tonnes until the early 1980s. During the 1980s and 1990s the industry was scaled back considerably. In 2011, the UK produced 18.3 million tonnes of coal. In 2005 it had proven recoverable coal reserves of 171 million tons. The UK Coal Authority has stated there is a potential to produce between 7 billion tonnes and 16 billion tonnes of coal through underground coal gasification (UCG) or ' fracking ', and that, based on current UK coal consumption, such reserves could last between 200 and 400 years. However, environmental and social concerns have been raised over chemicals getting into the water table and minor earthquakes damaging homes. 
In the late 1990s, nuclear power plants contributed around 25 % of total annual electricity generation in the UK, but this has gradually declined as old plants have been shut down and ageing - related problems affect plant availability. In 2012, the UK had 16 reactors normally generating about 19 % of its electricity. All but one of the reactors will be retired by 2023. Unlike Germany and Japan, the UK intends to build a new generation of nuclear plants from about 2018. The total of all renewable electricity sources provided for 14.9 % of the electricity generated in the United Kingdom in 2013, reaching 53.7 TWh of electricity generated. The UK is one of the best sites in Europe for wind energy, and wind power production is its fastest growing supply, in 2014 it generated 9.3 % of the UK 's total electricity. Access to improved water supply and sanitation in the UK is universal. It is estimated that 96.7 % of households are connected to the sewer network. According to the Environment Agency, total water abstraction for public water supply in the UK was 16,406 megalitres per day in 2007. In England and Wales the economic regulator of water companies is the Water Services Regulation Authority (Ofwat). The Environment Agency is responsible for environmental regulation, and the Drinking Water Inspectorate for regulating drinking water quality. The economic water industry regulator in Scotland is the Water Industry Commission for Scotland and the environmental regulator is the Scottish Environment Protection Agency. Drinking water standards and wastewater discharge standards in the UK, as in other countries of the European Union, are determined by the EU (see Water supply and sanitation in the European Union). In England and Wales water and sewerage services are provided by 10 private regional water and sewerage companies and 13 mostly smaller private "water only '' companies. In Scotland water and sewerage services are provided by a single public company, Scottish Water. In Northern Ireland water and sewerage services are also provided by a single public entity, Northern Ireland Water. A census is taken simultaneously in all parts of the UK every ten years. The Office for National Statistics is responsible for collecting data for England and Wales, the General Register Office for Scotland and the Northern Ireland Statistics and Research Agency each being responsible for censuses in their respective countries. In the 2011 census the total population of the United Kingdom was 63,181,775. It is the third - largest in the European Union, the fifth - largest in the Commonwealth and the 22nd - largest in the world. In mid-2014 and mid-2015 net long - term international migration contributed more to population growth. In mid-2012 and mid-2013 natural change contributed the most to population growth. Between 2001 and 2011 the population increased by an average annual rate of approximately 0.7 %. This compares to 0.3 % per year in the period 1991 to 2001 and 0.2 % in the decade 1981 to 1991. The 2011 census also confirmed that the proportion of the population aged 0 -- 14 has nearly halved (31 % in 1911 compared to 18 in 2011) and the proportion of older people aged 65 and over has more than tripled (from 5 to 16 %). It has been estimated that the number of people aged 100 or over will rise steeply to reach over 626,000 by 2080. England 's population in 2011 was found to be 53 million. 
It is one of the most densely populated countries in the world, with 420 people resident per square kilometre in mid-2015, with a particular concentration in London and the south - east. The 2011 census put Scotland 's population at 5.3 million, Wales at 3.06 million and Northern Ireland at 1.81 million. In percentage terms England has had the fastest growing population of any country of the UK in the period from 2001 to 2011, with an increase of 7.9 %. In 2012 the average total fertility rate (TFR) across the UK was 1.92 children per woman. While a rising birth rate is contributing to current population growth, it remains considerably below the ' baby boom ' peak of 2.95 children per woman in 1964, below the replacement rate of 2.1, but higher than the 2001 record low of 1.63. In 2012, Scotland had the lowest TFR at only 1.67, followed by Wales at 1.88, England at 1.94, and Northern Ireland at 2.03. In 2011, 47.3 % of births in the UK were to unmarried women. The Office for National Statistics published an "Experimental Official Statistics '' bulletin in 2015 showing that, out of the UK population aged 16 and over, 1.7 % identify as lesbian, gay, or bisexual (2.0 % of males and 1.5 % of females). A further 4.5 % of respondents answered "other '' or "I do n't know '', or did not respond. Historically, indigenous British people were thought to be descended from the various ethnic groups that settled there before the 11th century: the Celts, Romans, Anglo - Saxons, Norse and the Normans. Welsh people could be the oldest ethnic group in the UK. A 2006 genetic study shows that more than 50 % of England 's gene pool contains Germanic Y chromosomes. Another 2005 genetic analysis indicates that "about 75 % of the traceable ancestors of the modern British population had arrived in the British isles by about 6,200 years ago, at the start of the British Neolithic or Stone Age '', and that the British broadly share a common ancestry with the Basque people. The UK has a history of small - scale non-white immigration, with Liverpool having the oldest Black population in the country dating back to at least the 1730s during the period of the African slave trade. During this period it is estimated that the Afro - Caribbean population of Great Britain was 10,000 to 15,000; it later declined due to the abolition of slavery. The UK also has the oldest Chinese community in Europe, dating to the arrival of Chinese seamen in the 19th century. In 1950 there were probably fewer than 20,000 non-white residents in Britain, almost all born overseas. In 1951 there were an estimated 94,500 people living in Britain who had been born in South Asia, China, Africa and the Caribbean, just under 0.2 % of the UK population. By 1961 this number had more than quadrupled to 384,000, just over 0.7 % of the United Kingdom population. Since 1948 substantial immigration from Africa, the Caribbean and South Asia has been a legacy of ties forged by the British Empire. Migration from new EU member states in Central and Eastern Europe since 2004 has resulted in growth in these population groups, although some of this migration has been temporary. Since the 1990s, there has been substantial diversification of the immigrant population, with migrants to the UK coming from a much wider range of countries than previous waves, which tended to involve larger numbers of migrants coming from a relatively small number of countries. 
Academics have argued that the ethnicity categories employed in British national statistics, which were first introduced in the 1991 census, involve confusion between the concepts of ethnicity and race. In 2011, 87.2 % of the UK population identified themselves as white, meaning that 12.8 % of the UK population identified themselves as belonging to one of a number of ethnic minority groups. In the 2001 census, this figure was 7.9 % of the UK population. Because of differences in the wording of the census forms used in England and Wales, Scotland and Northern Ireland, data on the Other White group is not available for the UK as a whole, but in England and Wales this was the fastest growing group between the 2001 and 2011 censuses, increasing by 1.1 million (1.8 percentage points). Amongst groups for which comparable data is available at the UK level, there was considerable growth in the size of the Other Asian category, which increased from 0.4 to 1.4 % of the population between 2001 and 2011. There was also considerable growth in the Mixed category. In 2001, people in this category accounted for 1.2 % of the UK population; by 2011, the proportion was 2 %. Ethnic diversity varies significantly across the UK. 30.4 % of London 's population and 37.4 % of Leicester 's were estimated to be non-white in 2005, whereas less than 5 % of the populations of North East England, Wales and the South West were from ethnic minorities, according to the 2001 census. In 2016, 31.4 % of primary and 27.9 % of secondary pupils at state schools in England were members of an ethnic minority. The UK 's de facto official language is English. It is estimated that 95 % of the UK 's population are monolingual English speakers. 5.5 % of the population are estimated to speak languages brought to the UK as a result of relatively recent immigration. South Asian languages, including Punjabi, Urdu, Hindi, Bengali, Tamil and Gujarati, are the largest grouping and are spoken by 2.7 % of the UK population. According to the 2011 census, Polish has become the second - largest language spoken in England and has 546,000 speakers. Four Celtic languages are spoken in the UK: Welsh, Irish, Scottish Gaelic and Cornish. All are recognised as regional or minority languages, subject to specific measures of protection and promotion under the European Charter for Regional or Minority Languages and the Framework Convention for the Protection of National Minorities. In the 2001 Census over a fifth (21 %) of the population of Wales said they could speak Welsh, an increase from the 1991 Census (18 %). In addition it is estimated that about 200,000 Welsh speakers live in England. In the same census in Northern Ireland 167,487 people (10.4 %) stated that they had "some knowledge of Irish '' (see Irish language in Northern Ireland), almost exclusively in the nationalist (mainly Catholic) population. Over 92,000 people in Scotland (just under 2 % of the population) had some Gaelic language ability, including 72 % of those living in the Outer Hebrides. The number of schoolchildren being taught through Welsh, Scottish Gaelic and Irish is increasing. Among emigrant - descended populations some Scottish Gaelic is still spoken in Canada (principally Nova Scotia and Cape Breton Island), and Welsh in Patagonia, Argentina. Scots, a language descended from early northern Middle English, has limited recognition alongside its regional variant, Ulster Scots, in Northern Ireland, but without specific commitments to protection and promotion. 
It is compulsory for pupils to study a second language up to the age of 14 in England, and up to age 16 in Scotland. French and German are the two most commonly taught second languages in England and Scotland. All pupils in Wales are taught Welsh as a second language up to age 16, or are taught in Welsh. Forms of Christianity have dominated religious life in what is now the United Kingdom for over 1400 years. Although a majority of citizens still identify with Christianity in many surveys, regular church attendance has fallen dramatically since the middle of the 20th century, while immigration and demographic change have contributed to the growth of other faiths, most notably Islam. This has led some commentators to variously describe the UK as a multi-faith, secularised, or post-Christian society. In the 2001 census 71.6 % of all respondents indicated that they were Christians, with the next largest faiths being Islam (2.8 %), Hinduism (1.0 %), Sikhism (0.6 %), Judaism (0.5 %), Buddhism (0.3 %) and all other religions (0.3 %). 15 % of respondents stated that they had no religion, with a further 7 % not stating a religious preference. A Tearfund survey in 2007 showed only one in ten Britons actually attend church weekly. Between the 2001 and 2011 census there was a decrease in the amount of people who identified as Christian by 12 %, whilst the percentage of those reporting no religious affiliation doubled. This contrasted with growth in the other main religious group categories, with the number of Muslims increasing by the most substantial margin to a total of about 5 %. The Muslim population has increased from 1.6 million in 2001 to 2.7 million in 2011, making it the second - largest religious group in the United Kingdom. In a 2016 survey conducted by BSA (British Social Attitudes) on religious affiliation; 53 % of respondents indicated ' no religion ', while 41 % indicated they were Christians, followed by 6 % who affiliated with other religions (e.g. Islam, Hinduism, Judaism, etc.). Among Christians, adherents to the Church of England constituted 15 %, Roman Catholic Church -- 9 %, other Christians (including Presbyterians, Methodists, other Protestants, as well as Eastern Orthodox) -- 17 %. 71 % of young people aged 18 -- 24 said they had no religion. The Church of England is the established church in England. It retains a representation in the UK Parliament and the British monarch is its Supreme Governor. In Scotland, the Church of Scotland is recognised as the national church. It is not subject to state control, and the British monarch is an ordinary member, required to swear an oath to "maintain and preserve the Protestant Religion and Presbyterian Church Government '' upon his or her accession. The Church in Wales was disestablished in 1920 and, as the Church of Ireland was disestablished in 1870 before the partition of Ireland, there is no established church in Northern Ireland. Although there are no UK - wide data in the 2001 census on adherence to individual Christian denominations, it has been estimated that 62 % of Christians are Anglican, 13.5 % Catholic, 6 % Presbyterian, 3.4 % Methodist with small numbers of other Protestant denominations such as Open Brethren, and Orthodox churches. The United Kingdom has experienced successive waves of migration. The Great Famine in Ireland, then part of the United Kingdom, resulted in perhaps a million people migrating to Great Britain. 
Throughout the 19th century a small population of German immigrants built up, numbering 28,644 in England and Wales in 1861. London held around half of this population, and other small communities existed in Manchester, Bradford and elsewhere. The German immigrant community was the largest group until 1891, when it became second only to Russian Jews. England has had small Jewish communities for many centuries, subject to occasional expulsions, but British Jews numbered fewer than 10,000 at the start of the 19th century. After 1881 Russian Jews suffered bitter persecutions, and, out of some 2,000,000 who left Russia by 1914, around 120,000 settled permanently in Britain, overtaking the Germans to be the largest ethnic minority from outside the British Isles. The population increasing to 370,000 in 1938. Unable to return to Poland at the end of World War II, over 120,000 Polish veterans remained in the UK permanently. After World War II, there was significant immigration from the colonies and newly independent former colonies, partly as a legacy of empire and partly driven by labour shortages. Many of these migrants came from the Caribbean and the Indian subcontinent. In 1841, 0.25 % of the population of England and Wales was born in a foreign country. In 1901, 1.5 % of the population was foreign born. By 1931, this figure had risen to 2.6 %, and by 1951 it was 4.4 %. In 2014 the net increase was 318,000: immigration was 641,000, up from 526,000 in 2013, while the number of people emigrating (for more than 12 months) was 323,000. One of the more recent trends in migration has been the arrival of workers from the new EU member states in Eastern Europe, known as the A8 countries. In 2010, there were 7.0 million foreign - born residents in the UK, corresponding to 11.3 % of the total population. Of these, 4.76 million (7.7 %) were born outside the EU and 2.24 million (3.6 %) were born in another EU Member State. The proportion of foreign - born people in the UK remains slightly below that of many other European countries. However, immigration is now contributing to a rising population with arrivals and UK - born children of migrants accounting for about half of the population increase between 1991 and 2001. Analysis of Office for National Statistics (ONS) data shows that a net total of 2.3 million migrants moved to the UK in the 15 years from 1991 to 2006. In 2008 it was predicted that migration would add 7 million to the UK population by 2031, though these figures are disputed. The ONS reported that net migration rose from 2009 to 2010 by 21 % to 239,000. In 2013, approximately 208,000 foreign citizens were naturalised as British citizens, the highest number since records began in 1962. This figure fell to around 125,800 in 2014. Between 2009 and 2013, the average number of people granted British citizenship per year was 195,800. The main countries of previous nationality of those naturalised in 2014 were India, Pakistan, the Philippines, Nigeria, Bangladesh, Nepal, China, South Africa, Poland and Somalia. The total number of grants of settlement, which confers permanent residence in the UK without granting British citizenship, was approximately 154,700 in 2013, compared to 241,200 in 2010 and 129,800 in 2012. Over a quarter (27.0 %) of live births in 2014 were to mothers born outside the UK, according to official statistics released in 2015. Citizens of the European Union, including those of the UK, have the right to live and work in any EU member state. 
The UK applied temporary restrictions to citizens of Romania and Bulgaria, which joined the EU in January 2007. Research conducted by the Migration Policy Institute for the Equality and Human Rights Commission suggests that, between May 2004 and September 2009, 1.5 million workers migrated from the new EU member states to the UK, two - thirds of them Polish, but that many subsequently returned home, resulting in a net increase in the number of nationals of the new member states in the UK of some 700,000 over that period. The late - 2000s recession in the UK reduced the economic incentive for Poles to migrate to the UK, with the migration becoming temporary and circular. In 2009, for the first time since enlargement, more nationals of the eight central and eastern European states that had joined the EU in 2004 left the UK than arrived. In 2011, citizens of the new EU member states made up 13 % of the immigrants entering the country. The British Government has introduced a points - based immigration system for immigration from outside the European Economic Area to replace former schemes, including the Scottish Government 's Fresh Talent Initiative. In June 2010 the government introduced a temporary limit of 24,000 on immigration from outside the EU, aiming to discourage applications before a permanent cap was imposed in April 2011. Emigration was an important feature of British society in the 19th century. Between 1815 and 1930 around 11.4 million people emigrated from Britain and 7.3 million from Ireland. Estimates show that by the end of the 20th century some 300 million people of British and Irish descent were permanently settled around the globe. Today, at least 5.5 million UK - born people live abroad, mainly in Australia, Spain, the United States and Canada. Education in the United Kingdom is a devolved matter, with each country having a separate education system. About 38 percent of the United Kingdom population has a university or college degree, which is the highest percentage in Europe, and among the highest percentages in the world. Whilst education in England is the responsibility of the Secretary of State for Education, the day - to - day administration and funding of state schools is the responsibility of local authorities. Universal, free - of - charge state education was introduced piecemeal between 1870 and 1944. Education is now mandatory from ages five to sixteen, and in England youngsters must stay in education or training until they are 18. In 2011, the Trends in International Mathematics and Science Study (TIMSS) rated 13 -- 14 - year - old pupils in England and Wales 10th in the world for maths and 9th for science. The majority of children are educated in state - sector schools, a small proportion of which select on the grounds of academic ability. Two of the top ten performing schools in terms of GCSE results in 2006 were state - run grammar schools. In 2010, over half of places at the University of Oxford and the University of Cambridge were taken by students from state schools, while the proportion of children in England attending private schools is around 7 %, rising to 18 % of those over 16. England has the two oldest universities in the English - speaking world, the Universities of Oxford and Cambridge (jointly known as "Oxbridge ''), with histories stretching back over eight centuries. The United Kingdom trails only the United States in terms of representation on lists of top 100 universities. 
Education in Scotland is the responsibility of the Cabinet Secretary for Education and Lifelong Learning, with day - to - day administration and funding of state schools the responsibility of Local Authorities. Two non-departmental public bodies have key roles in Scottish education. The Scottish Qualifications Authority is responsible for the development, accreditation, assessment and certification of qualifications other than degrees which are delivered at secondary schools, post-secondary colleges of further education and other centres. The Learning and Teaching Scotland provides advice, resources and staff development to education professionals. Scotland first legislated for compulsory education in 1496. The proportion of children in Scotland attending private schools is just over 4 %, and it has been rising slowly in recent years. Scottish students who attend Scottish universities pay neither tuition fees nor graduate endowment charges, as fees were abolished in 2001 and the graduate endowment scheme was abolished in 2008. The Welsh Government has responsibility for education in Wales. A significant number of Welsh students are taught either wholly or largely in the Welsh language; lessons in Welsh are compulsory for all until the age of 16. There are plans to increase the provision of Welsh - medium schools as part of the policy of creating a fully bilingual Wales. Education in Northern Ireland is the responsibility of the Minister of Education and the Minister for Employment and Learning, although responsibility at a local level is administered by five education and library boards covering different geographical areas. The Council for the Curriculum, Examinations & Assessment (CCEA) is the body responsible for advising the government on what should be taught in Northern Ireland 's schools, monitoring standards and awarding qualifications. A government commission 's report in 2014 found that privately educated people comprise 7 % of the general population of the UK but much larger percentages of the top professions, the most extreme case quoted being 71 % of senior judges. Healthcare in the United Kingdom is a devolved matter and each country has its own system of private and publicly funded health care, together with alternative, holistic and complementary treatments. Public healthcare is provided to all UK permanent residents and is mostly free at the point of need, being paid for from general taxation. The World Health Organization, in 2000, ranked the provision of healthcare in the United Kingdom as fifteenth best in Europe and eighteenth in the world. Regulatory bodies are organised on a UK - wide basis such as the General Medical Council, the Nursing and Midwifery Council and non-governmental - based, such as the Royal Colleges. However, political and operational responsibility for healthcare lies with four national executives; healthcare in England is the responsibility of the British Government; healthcare in Northern Ireland is the responsibility of the Northern Ireland Executive; healthcare in Scotland is the responsibility of the Scottish Government; and healthcare in Wales is the responsibility of the Welsh Government. Each National Health Service has different policies and priorities, resulting in contrasts. Since 1979 expenditure on healthcare has been increased significantly to bring it closer to the European Union average. 
The UK spends around 8.4 % of its gross domestic product on healthcare, which is 0.5 percentage points below the Organisation for Economic Co-operation and Development average and about one percentage point below the average of the European Union. The culture of the United Kingdom has been influenced by many factors including: the nation 's island status; its history as a western liberal democracy and a major power; as well as being a political union of four countries with each preserving elements of distinctive traditions, customs and symbolism. As a result of the British Empire, British influence can be observed in the language, culture and legal systems of many of its former colonies including Australia, Canada, India, Ireland, New Zealand, Pakistan, South Africa and the United States. The substantial cultural influence of the United Kingdom has led it to be described as a "cultural superpower ''. ' British literature ' refers to literature associated with the United Kingdom, the Isle of Man and the Channel Islands. Most British literature is in the English language. In 2005, some 206,000 books were published in the United Kingdom and in 2006 it was the largest publisher of books in the world. The English playwright and poet William Shakespeare is widely regarded as the greatest dramatist of all time, and his contemporaries Christopher Marlowe and Ben Jonson have also been held in continuous high esteem. More recently the playwrights Alan Ayckbourn, Harold Pinter, Michael Frayn, Tom Stoppard and David Edgar have combined elements of surrealism, realism and radicalism. Notable pre-modern and early - modern English writers include Geoffrey Chaucer (14th century), Thomas Malory (15th century), Sir Thomas More (16th century), John Bunyan (17th century) and John Milton (17th century). In the 18th century Daniel Defoe (author of Robinson Crusoe) and Samuel Richardson were pioneers of the modern novel. In the 19th century there followed further innovation by Jane Austen, the gothic novelist Mary Shelley, the children 's writer Lewis Carroll, the Brontë sisters, the social campaigner Charles Dickens, the naturalist Thomas Hardy, the realist George Eliot, the visionary poet William Blake and romantic poet William Wordsworth. 20th century English writers include the science - fiction novelist H.G. Wells; the writers of children 's classics Rudyard Kipling, A.A. Milne (the creator of Winnie - the - Pooh), Roald Dahl and Enid Blyton; the controversial D.H. Lawrence; the modernist Virginia Woolf; the satirist Evelyn Waugh; the prophetic novelist George Orwell; the popular novelists W. Somerset Maugham and Graham Greene; the crime writer Agatha Christie (the best - selling novelist of all time); Ian Fleming (the creator of James Bond); the poets T.S. Eliot, Philip Larkin and Ted Hughes; the fantasy writers J.R.R. Tolkien, C.S. Lewis and J.K. Rowling; the graphic novelists Alan Moore and Neil Gaiman. Scotland 's contributions include the detective writer Arthur Conan Doyle (the creator of Sherlock Holmes), romantic literature by Sir Walter Scott, the children 's writer J.M. Barrie, the epic adventures of Robert Louis Stevenson and the celebrated poet Robert Burns. More recently the modernist and nationalist Hugh MacDiarmid and Neil M. Gunn contributed to the Scottish Renaissance. A more grim outlook is found in Ian Rankin 's stories and the psychological horror - comedy of Iain Banks. Scotland 's capital, Edinburgh, was UNESCO 's first worldwide City of Literature. 
Britain 's oldest known poem, Y Gododdin, was composed in Yr Hen Ogledd (The Old North), most likely in the late 6th century. It was written in Cumbric or Old Welsh and contains the earliest known reference to King Arthur. From around the seventh century, the connection between Wales and the Old North was lost, and the focus of Welsh - language culture shifted to Wales, where Arthurian legend was further developed by Geoffrey of Monmouth. Wales 's most celebrated medieval poet, Dafydd ap Gwilym (fl. 1320 -- 1370), composed poetry on themes including nature, religion and especially love. He is widely regarded as one of the greatest European poets of his age. Until the late 19th century the majority of Welsh literature was in Welsh and much of the prose was religious in character. Daniel Owen is credited as the first Welsh - language novelist, publishing Rhys Lewis in 1885. The best - known of the Anglo - Welsh poets are both Thomases. Dylan Thomas became famous on both sides of the Atlantic in the mid-20th century. He is remembered for his poetry -- his "Do not go gentle into that good night; Rage, rage against the dying of the light '' is one of the most quoted couplets of English language verse -- and for his "play for voices '', Under Milk Wood. The influential Church in Wales "poet - priest '' and Welsh nationalist R.S. Thomas was nominated for the Nobel Prize in Literature in 1996. Leading Welsh novelists of the twentieth century include Richard Llewellyn and Kate Roberts. Authors of other nationalities, particularly from Commonwealth countries, the Republic of Ireland and the United States, have lived and worked in the UK. Significant examples through the centuries include Jonathan Swift, Oscar Wilde, Bram Stoker, George Bernard Shaw, Joseph Conrad, T.S. Eliot, Ezra Pound and more recently British authors born abroad such as Kazuo Ishiguro and Sir Salman Rushdie. Various styles of music are popular in the UK from the indigenous folk music of England, Wales, Scotland and Northern Ireland to heavy metal. Notable composers of classical music from the United Kingdom and the countries that preceded it include William Byrd, Henry Purcell, Sir Edward Elgar, Gustav Holst, Sir Arthur Sullivan (most famous for working with the librettist Sir W.S. Gilbert), Ralph Vaughan Williams and Benjamin Britten, pioneer of modern British opera. Sir Harrison Birtwistle is one of the foremost living composers. The UK is also home to world - renowned symphonic orchestras and choruses such as the BBC Symphony Orchestra and the London Symphony Chorus. Notable conductors include Sir Simon Rattle, Sir John Barbirolli and Sir Malcolm Sargent. Some of the notable film score composers include John Barry, Clint Mansell, Mike Oldfield, John Powell, Craig Armstrong, David Arnold, John Murphy, Monty Norman and Harry Gregson - Williams. George Frideric Handel became a naturalised British citizen and wrote the British coronation anthem, while some of his best works, such as Messiah, were written in the English language. Andrew Lloyd Webber is a prolific composer of musical theatre. His works have dominated London 's West End since the late 20th century and have also been a commercial success worldwide. The Beatles have international sales of over one billion units and are the biggest - selling and most influential band in the history of popular music. 
Other prominent British contributors to have influenced popular music over the last 50 years include The Rolling Stones, Pink Floyd, Queen, Led Zeppelin, the Bee Gees, and Elton John, all of whom have worldwide record sales of 200 million or more. The Brit Awards are the BPI 's annual music awards, and some of the British recipients of the Outstanding Contribution to Music award include The Who, David Bowie, Eric Clapton, Rod Stewart and The Police. More recent UK music acts that have had international success include Coldplay, Radiohead, Oasis, Spice Girls, Robbie Williams, Amy Winehouse and Adele. A number of UK cities are known for their music. Acts from Liverpool have had 54 UK chart number one hit singles, more per capita than any other city worldwide. Glasgow 's contribution to music was recognised in 2008 when it was named a UNESCO City of Music, one of only three cities in the world to have this honour. The history of British visual art forms part of western art history. Major British artists include: the Romantics William Blake, John Constable, Samuel Palmer and J.M.W. Turner; the portrait painters Sir Joshua Reynolds and Lucian Freud; the landscape artists Thomas Gainsborough and L.S. Lowry; the pioneer of the Arts and Crafts Movement William Morris; the figurative painter Francis Bacon; the Pop artists Peter Blake, Richard Hamilton and David Hockney; the collaborative duo Gilbert and George; the abstract artist Howard Hodgkin; and the sculptors Antony Gormley, Anish Kapoor and Henry Moore. During the late 1980s and 1990s the Saatchi Gallery in London helped to bring to public attention a group of multi-genre artists who would become known as the "Young British Artists '': Damien Hirst, Chris Ofili, Rachel Whiteread, Tracey Emin, Mark Wallinger, Steve McQueen, Sam Taylor - Wood and the Chapman Brothers are among the better - known members of this loosely affiliated movement. The Royal Academy in London is a key organisation for the promotion of the visual arts in the United Kingdom. Major schools of art in the UK include: the six - school University of the Arts London, which includes the Central Saint Martins College of Art and Design and Chelsea College of Art and Design; Goldsmiths, University of London; the Slade School of Fine Art (part of University College London); the Glasgow School of Art; the Royal College of Art; and The Ruskin School of Drawing and Fine Art (part of the University of Oxford). The Courtauld Institute of Art is a leading centre for the teaching of the history of art. Important art galleries in the United Kingdom include the National Gallery, National Portrait Gallery, Tate Britain and Tate Modern (the most - visited modern art gallery in the world, with around 4.7 million visitors per year). The United Kingdom has had a considerable influence on the history of the cinema. The British directors Alfred Hitchcock, whose film Vertigo is considered by some critics to be the best film of all time, and David Lean are among the most critically acclaimed of all time. Other important directors include Charlie Chaplin, Michael Powell, Carol Reed, Edgar Wright, Christopher Nolan and Ridley Scott. Many British actors have achieved international fame and critical success, including: Julie Andrews, Richard Burton, Michael Caine, Colin Firth, Gary Oldman, Ben Kingsley, Ian McKellen, Liam Neeson, Charlie Chaplin, Sean Connery, Vivien Leigh, David Niven, Laurence Olivier, Peter Sellers, Kate Winslet, Anthony Hopkins, and Daniel Day - Lewis. 
Some of the most commercially successful films of all time have been produced in the United Kingdom, including two of the highest - grossing film franchises (Harry Potter and James Bond). Ealing Studios has a claim to being the oldest continuously working film studio in the world. Despite a history of important and successful productions, the industry has often been characterised by a debate about its identity and the level of American and European influence. British producers are active in international co-productions and British actors, directors and crew feature regularly in American films. Many successful Hollywood films have been based on British people, stories or events, including Titanic, The Lord of the Rings, Pirates of the Caribbean. In 2009, British films grossed around $2 billion worldwide and achieved a market share of around 7 % globally and 17 % in the United Kingdom. UK box - office takings totalled £ 944 million in 2009, with around 173 million admissions. The British Film Institute has produced a poll ranking of what it considers to be the 100 greatest British films of all time, the BFI Top 100 British films. The annual British Academy Film Awards are hosted by the British Academy of Film and Television Arts. The BBC, founded in 1922, is the UK 's publicly funded radio, television and Internet broadcasting corporation, and is the oldest and largest broadcaster in the world. It operates numerous television and radio stations in the UK and abroad and its domestic services are funded by the television licence. Other major players in the UK media include ITV plc, which operates 11 of the 15 regional television broadcasters that make up the ITV Network, and News Corporation, which owns a number of national newspapers through News International such as the most popular tabloid The Sun and the longest - established daily "broadsheet '' The Times, as well as holding a large stake in satellite broadcaster British Sky Broadcasting. London dominates the media sector in the UK: national newspapers and television and radio are largely based there, although Manchester is also a significant national media centre. Edinburgh and Glasgow, and Cardiff, are important centres of newspaper and broadcasting production in Scotland and Wales respectively. The UK publishing sector, including books, directories and databases, journals, magazines and business media, newspapers and news agencies, has a combined turnover of around £ 20 billion and employs around 167,000 people. In 2009, it was estimated that individuals viewed a mean of 3.75 hours of television per day and 2.81 hours of radio. In that year the main BBC public service broadcasting channels accounted for an estimated 28.4 % of all television viewing; the three main independent channels accounted for 29.5 % and the increasingly important other satellite and digital channels for the remaining 42.1 %. Sales of newspapers have fallen since the 1970s and in 2010 41 % of people reported reading a daily national newspaper. In 2010, 82.5 % of the UK population were Internet users, the highest proportion amongst the 20 countries with the largest total number of users in that year. The United Kingdom is famous for the tradition of ' British Empiricism ', a branch of the philosophy of knowledge that states that only knowledge verified by experience is valid, and ' Scottish Philosophy ', sometimes referred to as the ' Scottish School of Common Sense '. 
The most famous philosophers of British Empiricism are John Locke, George Berkeley and David Hume, while Dugald Stewart, Thomas Reid and William Hamilton were major exponents of the Scottish "common sense '' school. Two Britons are also notable for the theory of moral philosophy known as utilitarianism, first employed by Jeremy Bentham and later developed by John Stuart Mill in his short work Utilitarianism. Other eminent philosophers from the UK and the unions and countries that preceded it include Duns Scotus, John Lilburne, Mary Wollstonecraft, Sir Francis Bacon, Adam Smith, Thomas Hobbes, William of Ockham, Bertrand Russell and A.J. "Freddie '' Ayer. Foreign - born philosophers who settled in the UK include Isaiah Berlin, Karl Marx, Karl Popper and Ludwig Wittgenstein. Major sports, including association football, tennis, rugby union, rugby league, golf, boxing, netball, rowing and cricket, originated or were substantially developed in the UK and the states that preceded it. With the rules and codes of many modern sports invented and codified in late 19th century Victorian Britain, in 2012 the President of the IOC, Jacques Rogge, stated: "This great, sports - loving country is widely recognized as the birthplace of modern sport. It was here that the concepts of sportsmanship and fair play were first codified into clear rules and regulations. It was here that sport was included as an educational tool in the school curriculum ''. In most international competitions, separate teams represent England, Scotland and Wales. Northern Ireland and the Republic of Ireland usually field a single team representing all of Ireland, with notable exceptions being association football and the Commonwealth Games. In sporting contexts, the English, Scottish, Welsh and Irish / Northern Irish teams are often referred to collectively as the Home Nations. There are some sports in which a single team represents the whole of the United Kingdom, including the Olympics, where the UK is represented by the Great Britain team. The 1908, 1948 and 2012 Summer Olympics were held in London, making it the first city to host the games three times. Britain has participated in every modern Olympic Games to date and is third in the medal count. A 2003 poll found that football is the most popular sport in the United Kingdom. England is recognised by FIFA as the birthplace of club football, and The Football Association is the oldest of its kind, with the rules of football first drafted in 1863 by Ebenezer Cobb Morley. Each of the Home Nations has its own football association, national team and league system. The English top division, the Premier League, is the most watched football league in the world. The first - ever international football match was contested by England and Scotland on 30 November 1872. England, Scotland, Wales and Northern Ireland compete as separate countries in international competitions. A Great Britain Olympic football team was assembled for the first time to compete in the London 2012 Olympic Games. However, the Scottish, Welsh and Northern Irish football associations declined to participate, fearing that it would undermine their independent status -- a fear confirmed by FIFA. In 2003, rugby union was ranked the second most popular sport in the UK. The sport was created at Rugby School, Warwickshire, and the first rugby international took place on 27 March 1871 between England and Scotland. 
England, Scotland, Wales, Ireland, France and Italy compete in the Six Nations Championship; the premier international tournament in the northern hemisphere. Sport governing bodies in England, Scotland, Wales and Ireland organise and regulate the game separately. If any of the British teams or the Irish team beat the other three in a tournament, then it is awarded the Triple Crown. Cricket was invented in England, and its laws were established by Marylebone Cricket Club in 1788. The England cricket team, controlled by the England and Wales Cricket Board, is the only national team in the UK with Test status. Team members are drawn from the main county sides, and include both English and Welsh players. Cricket is distinct from football and rugby where Wales and England field separate national teams, although Wales had fielded its own team in the past. Irish and Scottish players have played for England because neither Scotland nor Ireland have Test status and have only recently started to play in One Day Internationals. Scotland, England (and Wales), and Ireland (including Northern Ireland) have competed at the Cricket World Cup, with England reaching the finals on three occasions. There is a professional league championship in which clubs representing 17 English counties and 1 Welsh county compete. The modern game of tennis originated in Birmingham, England, in the 1860s, before spreading around the world. The world 's oldest tennis tournament, the Wimbledon championships, first occurred in 1877, and today the event takes place over two weeks in late June and early July. Thoroughbred racing, which originated under Charles II of England as the "sport of kings '', is popular throughout the UK with world - famous races including the Grand National, the Epsom Derby, Royal Ascot and the Cheltenham National Hunt Festival (including the Cheltenham Gold Cup). The UK has proved successful in the international sporting arena in rowing. The UK is closely associated with motorsport. Many teams and drivers in Formula One (F1) are based in the UK, and the country has won more drivers ' and constructors ' titles than any other. The UK hosted the first F1 Grand Prix in 1950 at Silverstone, the current location of the British Grand Prix held each year in July. The UK hosts legs of the Grand Prix motorcycle racing, World Rally Championship and FIA World Endurance Championship. The premier national auto racing event is the British Touring Car Championship. Motorcycle road racing has a long tradition with races such as the Isle of Man TT and the North West 200. Golf is the sixth most popular sport, by participation, in the UK. Although The Royal and Ancient Golf Club of St Andrews in Scotland is the sport 's home course, the world 's oldest golf course is actually Musselburgh Links ' Old Golf Course. In 1764, the standard 18 - hole golf course was created at St Andrews when members modified the course from 22 to 18 holes. The oldest golf tournament in the world, and the first major championship in golf, The Open Championship, is played annually on the weekend of the third Friday in July. Rugby league originated in Huddersfield, West Yorkshire in 1895 and is generally played in Northern England. A single ' Great Britain Lions ' team had competed in the Rugby League World Cup and Test match games, but this changed in 2008 when England, Scotland and Ireland competed as separate nations. Great Britain is still retained as the full national team. 
Super League is the highest level of professional rugby league in the UK and Europe. It consists of 11 teams from Northern England, 1 from London, 1 from Wales and 1 from France. The ' Queensberry Rules ', the code of general rules in boxing drawn up in 1867 and named after John Douglas, 9th Marquess of Queensberry, formed the basis of modern boxing. Snooker is another of the UK 's popular sporting exports, with the world championships held annually in Sheffield. In Northern Ireland Gaelic football and hurling are popular team sports, both in terms of participation and spectating, and Irish expatriates in the UK and the US also play them. Shinty (or camanachd) is popular in the Scottish Highlands. Highland games are held in spring and summer in Scotland, celebrating Scottish and Celtic culture and heritage, especially that of the Scottish Highlands. The flag of the United Kingdom is the Union Flag (also referred to as the Union Jack). It was created in 1606 by the superimposition of the Flag of England on the Flag of Scotland and updated in 1801 with the addition of Saint Patrick 's Flag. Wales is not represented in the Union Flag, as Wales had been conquered and annexed to England prior to the formation of the United Kingdom. The possibility of redesigning the Union Flag to include representation of Wales has not been completely ruled out. The national anthem of the United Kingdom is "God Save the King '', with "King '' replaced with "Queen '' in the lyrics whenever the monarch is a woman. Britannia is a national personification of the United Kingdom, originating from Roman Britain. Britannia is symbolised as a young woman with brown or golden hair, wearing a Corinthian helmet and white robes. She holds Poseidon 's three - pronged trident and a shield, bearing the Union Flag. Sometimes she is depicted as riding on the back of a lion. Since the height of the British Empire in the late 19th century, Britannia has often been associated with British maritime dominance, as in the patriotic song "Rule, Britannia! ''. Up until 2008, the lion symbol was depicted behind Britannia on the British fifty pence coin and on the back of the British ten pence coin. It is also used as a symbol on the non-ceremonial flag of the British Army. A second, less used, personification of the nation is the character John Bull. The bulldog is sometimes used as a symbol of the United Kingdom and has been associated with Winston Churchill 's defiance of Nazi Germany. The full title of this country is ' the United Kingdom of Great Britain and Northern Ireland '. Great Britain is made up of England, Scotland and Wales. The United Kingdom (UK) is made up of England, Scotland, Wales and Northern Ireland. ' Britain ' is used informally, usually meaning the United Kingdom. The Channel Islands and the Isle of Man are not part of the UK. 
what episode does arizona come on grey's
Arizona Robbins - wikipedia Arizona Robbins, M.D. is a fictional character on the ABC television series Grey 's Anatomy, portrayed by Jessica Capshaw. She was introduced in the show 's fifth season as an attending surgeon and the new chief of pediatric surgery. Capshaw was originally contracted to appear in three episodes, but her contract was extended for the remainder of the fifth season, and she became a series regular in the sixth season. Robbins has been characterized as "quirky '' and "perky '', and is well known for wearing wheely sneakers and a Holly Hobbie pink scrub cap, designed to appeal to her younger patients. She was established as a love - interest for orthopedic resident Callie Torres (Sara Ramirez) after Torres ' storyline with Erica Hahn (Brooke Smith) was cut short due to what series creator Shonda Rhimes called "a lack of chemistry ''. Rhimes was, in contrast, pleased with the chemistry between Robbins and Torres, citing the addition of Capshaw to the cast as an element of the season of which she was most proud. Initial media reaction to the character was positive. Matt Mitovich of TV Guide described her as a "fan favorite '', and Chris Monfette for IGN praised the addition of "fresh, new characters '', such as Robbins, over the course of the season. Following the death of Dr. Jordan Kenley (John Sloman), Chief Webber (James Pickens, Jr.) replaces his head of pediatric surgery with Dr. Arizona Robbins, a graduate of the Johns Hopkins School of Medicine. Robbins takes a romantic interest in fifth - year orthopedic resident Callie Torres (Sara Ramirez) and later goes on to kiss her. The two embark on a relationship, but when Torres ' father, Carlos (Héctor Elizondo), learns of the relationship, he threatens to cut her off financially unless she returns home with him. When Torres ' father returns to Seattle and continues to reject his daughter 's sexuality, Robbins is able to convince him to reconsider. She tells Mr. Torres that her father was able to accept her own sexuality, as she promised him she was still the "good man in a storm '' he raised her to be, and that Torres is still the same person he raised. Torres is dismayed to learn that Robbins does n't want children, and the two come to the conclusion that they can not continue their relationship, since they both want different things. However, after a shooter enters Seattle Grace with a vendetta against Derek Shepherd (Patrick Dempsey), Lexie Grey (Chyler Leigh), and Richard Webber (James Pickens, Jr.), they are in lockdown together, and the two reconcile. Robbins receives word that she has been given a grant to go to Malawi and work as a doctor there. In the end, Torres is shown to have accepted this as well and has decided to leave with Robbins. However, a fight at the airport results in Robbins leaving for Malawi without Torres, ending their relationship. She returns, hoping to rekindle her relationship with Torres, but is initially rejected. Eventually, Torres reveals that she is pregnant with Mark Sloan (Eric Dane) 's baby. Robbins accepts the situation, and she and Torres restart their relationship. Torres gifts Robbins with a weekend getaway, and Robbins proposes to Torres. After the proposal, the two get into a car crash, leaving Torres in critical condition. A series of surgeries follows, including the delivery of her premature baby, along with emotional breakdowns of both Sloan and Robbins. Upon awakening, Torres accepts the marriage proposal, and the two are married by Miranda Bailey (Chandra Wilson).
As the fifth - year residents near the end of their residency, Robbins urges Alex Karev (Justin Chambers) to work under her. At the end of the eighth season, Robbins is badly hurt in a plane crash, resulting in her left leg being amputated. In the aftermath of the plane accident, in which Sloan and Lexie Grey were killed, the hospital is sued and eventually found liable for negligence. Each victim, including Shepherd, Meredith Grey (Ellen Pompeo), Cristina Yang (Sandra Oh), and Robbins herself, is to receive $15 million in compensation, which brings the hospital close to bankruptcy as its insurers refuse to pay. Those doctors and Callie buy the hospital with the help of the Harper - Avery Foundation to prevent it from closing, and each becomes a member of the new directing board. Robbins is initially cold towards Callie because Callie was the one who decided on the amputation. She also struggles with body image issues, but they slowly reconcile as Robbins tries to adapt to her new life. When Dr. Lauren Boswell (Hilarie Burton) arrives at the hospital to reconstruct the face of a baby and flirts with Robbins, she is flattered that a stranger still finds her attractive despite knowing about her injury, and the two have a one - night stand. After finding out that Arizona cheated on her with Lauren, Callie kicks Arizona out of their apartment. They also let out their true feelings about the accident, and more is revealed about how they have actually felt. Callie initially agrees to couples therapy, but she shows up at the office only to tell Arizona that she is n't going in. Arizona gets drunk with April for a laugh while Callie, who is at the fundraiser, tells people that Arizona died in the plane crash. Arizona is led to believe that she and Leah slept together; however, all they did was dance and make grilled cheese sandwiches after watching Derek perform surgery on film. Arizona pursues a sexual relationship with Leah but cuts ties with her when Callie asks her to come back home. It is revealed that Arizona became pregnant via a sperm donor prior to sleeping with Lauren. They decided to try again for a second child, but after agreeing that Callie would carry it, Callie went to see an OB / GYN and discovered that she had developed adhesions in her uterus since Sofia 's birth, meaning she ca n't carry any more children. When she told Arizona, Arizona offered to carry the baby, but Callie decided that, since they are still on unsteady footing, if something goes wrong they wo n't make it, and she does n't want to put them in that position. They agree to postpone their plans to have another baby. However, in the season finale of Season 10 it is implied that Callie and Arizona 's dream of having another child may come true by means of a surrogate. At the beginning of Season 11, Callie and Arizona decide to have a baby by surrogate and Arizona applies for a fetal surgery fellowship with Dr. Herman. Arizona, with a heavy workload because of the fellowship, and Callie have an argument in the waiting room, and they choose to go to therapy together, resulting in a 30 - day break. Arizona believes that the break strengthened their relationship and made her realize that she needs Callie; Callie, on the other hand, declares that the break gave her a taste of the freedom that she has been missing. Callie walks away and the two later get a divorce.
Callie wants to take Sofia and move to New York with Penny, but Arizona does not want to be separated from her daughter, so she hires a lawyer and they go to court for a custody battle. After a long case, Arizona wins sole custody of Sofia, but in the end she ends up sharing Sofia with Callie because she thinks that ' both of Sofia 's moms deserve to be happy '. It was first reported in December 2008 that Capshaw would be joining the cast of Grey 's Anatomy as pediatric surgeon Arizona Robbins for a multi-episode arc. Capshaw was initially scheduled to appear in three episodes of the show 's fifth season, but series creator Shonda Rhimes later extended her contract to cover all of the season 's remaining episodes, and she became a series regular in the sixth season. Speaking of the new addition, Rhimes said: "I love Jessica Capshaw, and when I say love I mean love. She could n't be a more wonderful person, and I feel like the chemistry Arizona and Callie have feels like the Meredith and Derek chemistry to me. I find them delightful to watch. '' This promotion saw Robbins become the only lesbian series regular on primetime TV. Robbins is described as "quirky (and) perky '' by TV Guide 's Matt Mitovich, and "a clear and rational surgeon who is not ruled by her emotions '' by Kris De Leon of BuddyTV. William Harper, writer of the episode "Beat Your Heart Out '' in which Robbins and Torres kiss for the first time, has deemed Robbins "genuinely, positively interested in people, in the most selfless way. '' Commenting on Robbins ' confidence, Capshaw said: "she never thinks she 's wrong and you do n't hate her for it. There 's no ego though, she just always thinks she 's right and she is. '' She is portrayed as having "wacky tendencies '', including wearing roller shoes to work. The American Broadcasting Company (ABC) characterized Robbins as "confident '', "ambitious '', and "cheerful ''. Shortly after her arrival in the show, Robbins became a love - interest for Torres (Ramirez). The relationship between the two is referred to by fans by the portmanteau "Calzona ''. Torres ' previous girlfriend Erica Hahn (Brooke Smith) was written out of Grey 's Anatomy in 2008, due to a lack of chemistry between the characters. Rhimes, in contrast, praised the chemistry between Robbins and Torres, comparing it to that between the show 's primary couple, Meredith Grey (Ellen Pompeo) and Shepherd (Dempsey), and stating: "They have that little thing that makes you want to watch them. '' Rhimes named the addition of Capshaw to the cast as one of the elements of the season she was most proud of, commenting that she wished she could have found Callie a love interest that "sparkled '' sooner, but was pleased with eventually having found one in Robbins. When Robbins turned Torres down in the episode "An Honest Mistake '' due to her inexperience with women, series writer Peter Nowalk offered the insight: "I totally understand why Arizona would n't want to date a newborn. It 's like getting a Freshman as your Physics lab partner even though you 're a Senior who not only knows the Laws of Motion but has mastered them in ways that would rock that Freshman 's world. Which is not to say the Freshman wo n't grow to be really good at Physics, or that Callie wo n't catch up to Arizona on the lesbian front, it 's simply that Arizona might not have the patience to wait that long. ''
Although the characters go on to begin a relationship, the show 's one - hundredth episode "What a Difference a Day Makes '' sees them experience difficulties as a result of Torres ' father rejecting her for her sexuality. Rhimes commented on their ultimate reconciliation: "I love (Callie) with Arizona. (...) I like that they make me feel hopeful about love. '' Rhimes has said of their relationship in the sixth season: "I would like to see Callie happily in a long - term relationship. We have so much to explore with them, because we barely know anything about (Arizona). '' Capshaw has characterized the relationship as "incredibly understanding and compassionate and sensitive ''. She described the sixth season as being about: "cementing a very mature and grounded relationship and taking it forward. This is a drama, of course; there will be conflict, but for the time being, they 're enjoying being in a relationship that seems stable. '' Asked whether Robbins and Torres might marry in the future, Capshaw replied: "There 's probably a lot more stuff that has to happen before that happens. (...) I do n't think they 're going to get married just to get married. As Arizona goes, I think she has incredible discipline and she does, as you said, have a very strict moral compass and marriage would not be something she would jump into without giving it a great amount of thought. '' Discussing Robbins ' relationship with Torres ' former lover Sloan, Capshaw divulged: "Whenever there 's been a chance to play that I am intimidated by him or being standoffish, I 've always chosen to make it very playful. It 's much more Arizona 's style to find it very amusing. '' Robbins ranked seventh in a top ten list of gay characters on TV compiled by Jane Boursaw of TV Squad. Boursaw wrote: "She 's a mix of ironies - a pediatrician who glides around the hospital on wheelies, impulsively kisses Callie, then tells her she does n't have time to teach a newbie how to be gay. Still, she 's more interesting than the other gays on this show, which are dwindling in numbers since Erica left for parts unknown. '' Commenting on Hahn 's abrupt departure from the show, Dorothy Snarker, writing for LGBT website AfterEllen.com observed of Robbins and Torres ' relationship: "I (...) ca n't help but be wary of how the Grey 's writers will handle this relationship. Jessica has proven lovely and likable in her brief screen time so far. But it 's not how the romance starts, but what happens next that really matters. '' AfterEllen.com also included Robbins in their poll of the Top 50 Lesbian and Bisexual Characters, ranking her at No. 3 and in their Top 50 Favorite Female TV Characters, placing her at No. 2. Matt Mitovich of TV Guide noted that Robbins "quickly established herself as a fan favorite '', describing her as: "a breath of fresh air in the often angsty halls of Seattle Grace. '' Chris Monfette for IGN has opinionated that the fifth season of Grey 's Anatomy was an improvement on the previous two seasons, attributing this in part to the introduction of "fresh, new characters '' Robbins and Owen Hunt (Kevin McKidd). Monfette wrote that Arizona 's ultimate contribution to the season was "introducing the element of childcare to Seattle Grace '', which in turn gave Miranda Bailey (Chandra Wilson) "a great arc ''. Jennifer Godwin for E! Online approved of Arizona 's season six promotion to a series regular, particularly as it meant the continuation of her relationship with Callie. 
The Los Angeles Times 's Carina MacKenzie wrote of the sixth - season episode "Invasion '': "By far the best moment in this episode was Robbins ' scene with Torres ' father, Carlos. Jessica Capshaw has an incredible ability to take even the most melodramatic of Grey 's speeches and deliver them with a subtlety and an honesty that makes them come off as sincere instead of overwrought. ' I was named for a battleship, ' she said, and in the powerful monologue that followed, she calmly and carefully explained that Torres was still the woman Carlos raised. ''
1 second is equal to how many millisecond
Millisecond - wikipedia A millisecond (from milli - and second; symbol: ms) is a thousandth (0.001 or 10⁻³ or 1 / 1000) of a second. 10 milliseconds (a hundredth of a second, 10⁻² s) are called a centisecond. 100 milliseconds (one tenth of a second, 10⁻¹ s) are called a decisecond. To help compare orders of magnitude of different times, this page lists times between 10⁻³ seconds and 10⁰ seconds (1 millisecond and one second). See also times of other orders of magnitude.
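To make the unit relationships above concrete, here is a minimal Python sketch of the conversions (the constant and function names are illustrative only, not part of any standard library):

MS_PER_SECOND = 1000       # 1 second = 10**3 milliseconds
MS_PER_CENTISECOND = 10    # a centisecond (a hundredth of a second) = 10 milliseconds
MS_PER_DECISECOND = 100    # a decisecond (a tenth of a second) = 100 milliseconds

def seconds_to_milliseconds(seconds):
    # Multiply by 1000, since 1 s = 1000 ms.
    return seconds * MS_PER_SECOND

def milliseconds_to_seconds(milliseconds):
    # Divide by 1000, since 1 ms = 0.001 s.
    return milliseconds / MS_PER_SECOND

print(seconds_to_milliseconds(1))    # 1000 -- one second equals 1000 milliseconds
print(milliseconds_to_seconds(250))  # 0.25 -- 250 ms is a quarter of a second

Running the sketch answers the question above directly: seconds_to_milliseconds(1) returns 1000.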
how did the trade organizations (hanseatic league) facilitate commercial growth
Hanseatic League - wikipedia The Hanseatic League (also known as the Hanse or Hansa; Middle Low German: Hanse, Deutsche Hanse, Hansa, Hansa Teutonica or Liga Hanseatica; Dutch: Hanze) was a commercial and defensive confederation of merchant guilds and their market towns. Growing from a few North German towns in the late 1100s, the league came to dominate Baltic maritime trade for three centuries along the coast of Northern Europe. It stretched from the Baltic to the North Sea and inland during the Late Middle Ages and declined slowly after 1450. Hanse, later spelled as Hansa, was the Middle Low German word for a convoy, and this word was applied to bands of merchants traveling between the Hanseatic cities whether by land or by sea. The league was created to protect the guilds ' economic interests and diplomatic privileges in their affiliated cities and countries, as well as along the trade routes the merchants visited. The Hanseatic cities had their own legal system and furnished their own armies for mutual protection and aid. Despite this, the organization was not a state, nor could it be called a confederation of city - states; only a very small number of the cities within the league enjoyed autonomy and liberties comparable to those of a free imperial city. Historians generally trace the origins of the Hanseatic League to the rebuilding of the north German town of Lübeck in 1159 by the powerful Henry the Lion, Duke of Saxony and Bavaria, after he had captured the area from Adolf II, Count of Schauenburg and Holstein. Exploratory trading adventures, raids, and piracy had occurred earlier throughout the Baltic region -- the sailors of Gotland sailed up rivers as far away as Novgorod, for example -- but the scale of international trade in the Baltic area remained insignificant before the growth of the Hanseatic League. German cities achieved domination of trade in the Baltic with striking speed during the 13th century, and Lübeck became a central node in the seaborne trade that linked the areas around the North and Baltic seas. The hegemony of Lübeck peaked during the 15th century. Lübeck became a base for merchants from Saxony and Westphalia trading eastward and northward. Well before the term Hanse appeared in a document in 1267, merchants in different cities began to form guilds, or Hansa, with the intention of trading with towns overseas, especially in the economically less - developed eastern Baltic. This area was a source of timber, wax, amber, resins, and furs, along with rye and wheat brought down on barges from the hinterland to port markets. The towns raised their own armies, with each guild required to provide levies when needed. The Hanseatic cities came to the aid of one another, and commercial ships often had to be used to carry soldiers and their arms. Visby functioned as the leading centre in the Baltic before the Hansa. Sailing east, Visby merchants established a trading post at Novgorod called Gutagard (also known as Gotenhof) in 1080. Merchants from northern Germany also stayed in the early period of the Gotlander settlement. Later, in the first half of the 13th century, they established their own trading station further up the river in Novgorod, known as Peterhof. In 1229, German merchants at Novgorod were granted certain privileges that made their positions more secure. Hansa societies worked to remove restrictions to trade for their members. Before the official foundation of the league in 1356, the word Hanse did not occur in the Baltic language.
Gotlanders used the word varjag. The earliest remaining documentary mention, although without a name, of a specific German commercial federation is from London in 1157. That year, the merchants of the Hansa in Cologne convinced Henry II, King of England, to free them from all tolls in London and allow them to trade at fairs throughout England. The "Queen of the Hansa '', Lübeck, where traders were required to trans - ship goods between the North Sea and the Baltic, gained imperial privileges to become a free imperial city in 1227, as its potential trading partner Hamburg had in 1189. In 1241, Lübeck, which had access to the Baltic and North seas ' fishing grounds, formed an alliance -- a precursor to the league -- with Hamburg, another trading city, that controlled access to salt - trade routes from Lüneburg. The allied cities gained control over most of the salt - fish trade, especially the Scania Market; Cologne joined them in the Diet of 1260. In 1266, Henry III of England granted the Lübeck and Hamburg Hansa a charter for operations in England, and the Cologne Hansa joined them in 1282 to form the most powerful Hanseatic colony in London. Much of the drive for this co-operation came from the fragmented nature of existing territorial governments, which failed to provide security for trade. Over the next 50 years the Hansa itself emerged with formal agreements for confederation and co-operation covering the west and east trade routes. The principal city and linchpin remained Lübeck; with the first general diet of the Hansa held there in 1356, the Hanseatic League acquired an official structure. Lübeck 's location on the Baltic provided access for trade with Scandinavia and Kievan Rus ', putting it in direct competition with the Scandinavians who had previously controlled most of the Baltic trade routes. A treaty with the Visby Hansa put an end to this competition: through this treaty the Lübeck merchants also gained access to the inland Russian port of Novgorod, where they built a trading post or Kontor (literally: "office ''). Although such alliances formed throughout the Holy Roman Empire, the league never became a closely managed formal organisation. Assemblies of the Hanseatic towns met irregularly in Lübeck for a Hansetag (Hanseatic diet), from 1356 onwards, but many towns chose not to attend nor to send representatives and decisions were not binding on individual cities. Over the period, a network of alliances grew to include a flexible roster of 70 to 170 cities. The league succeeded in establishing additional Kontors in Bruges (Flanders), Bergen (Norway), and London (England). These trading posts became significant enclaves. The London Kontor, established in 1320, stood west of London Bridge near Upper Thames Street, the site now occupied by Cannon Street station. It grew into a significant walled community with its own warehouses, weighhouse, church, offices and houses, reflecting the importance and scale of trading activity on the premises. The first reference to it as the Steelyard (der Stahlhof) occurs in 1422. Starting with trade in coarse woollen fabrics, the Hanseatic League had the effect of bringing both commerce and industry to northern Germany. As trade increased, newer and finer woollen and linen fabrics, and even silks, were manufactured in northern Germany. The same refinement of products out of cottage industry occurred in other fields, e.g. etching, wood carving, armour production, engraving of metals, and wood - turning. 
The century - long monopolization of sea navigation and trade by the Hanseatic League ensured that the Renaissance arrived in northern Germany long before the rest of Europe. In addition to the major Kontors, individual Hanseatic ports had a representative merchant and warehouse. In England this happened in Boston, Bristol, Bishop 's Lynn (now King 's Lynn, which features the sole remaining Hanseatic warehouse in England), Hull, Ipswich, Norwich, Yarmouth (now Great Yarmouth), and York. The league primarily traded timber, furs, resin (or tar), flax, honey, wheat, and rye from the east to Flanders and England with cloth (and, increasingly, manufactured goods) going in the other direction. Metal ore (principally copper and iron) and herring came southwards from Sweden. German colonists in the 12th and 13th centuries settled in numerous cities on and near the east Baltic coast, such as Elbing (Elbląg), Thorn (Toruń), Reval (Tallinn), Riga, and Dorpat (Tartu), which became members of the Hanseatic League, and some of which still retain many Hansa buildings and bear the style of their Hanseatic days. Most were granted Lübeck law (Lübisches Recht), after the league 's most prominent town. The law provided that they had to appeal in all legal matters to Lübeck 's city council. The Livonian Confederation incorporated modern - day Estonia and parts of Latvia and had its own Hanseatic parliament (diet); all of its major towns became members of the Hanseatic League. The dominant language of trade was Middle Low German, a dialect with significant impact for countries involved in the trade, particularly the larger Scandinavian languages, Estonian, and Latvian. The league had a fluid structure, but its members shared some characteristics; most of the Hansa cities either started as independent cities or gained independence through the collective bargaining power of the league, though such independence remained limited. The Hanseatic free cities owed allegiance directly to the Holy Roman Emperor, without any intermediate family tie of obligation to the local nobility. Another similarity involved the cities ' strategic locations along trade routes. At the height of its power in the late 14th century, the merchants of the Hanseatic League succeeded in using their economic clout, and sometimes their military might -- trade routes required protection and the league 's ships sailed well - armed -- to influence imperial policy. The league also wielded power abroad. Between 1361 and 1370, it waged war against Denmark. Initially unsuccessful, Hanseatic towns in 1368 allied in the Confederation of Cologne, sacked Copenhagen and Helsingborg, and forced Valdemar IV, King of Denmark, and his son - in - law Haakon VI, King of Norway, to grant the league 15 % of the profits from Danish trade in the subsequent peace treaty of Stralsund in 1370, thus gaining an effective trade and economic monopoly in Scandinavia. This favourable treaty marked the height of Hanseatic power. After the Danish - Hanseatic War (1426 -- 1435) and the Bombardment of Copenhagen (1428), the commercial privileges were renewed in the Treaty of Vordingborg in 1435. The Hansa also waged a vigorous campaign against pirates. Between 1392 and 1440, maritime trade of the league faced danger from raids of the Victual Brothers and their descendants, privateers hired in 1392 by Albert of Mecklenburg, King of Sweden, against Margaret I, Queen of Denmark. 
In the Dutch -- Hanseatic War (1438 -- 41), the merchants of Amsterdam sought and eventually won free access to the Baltic and broke the Hanseatic monopoly. As an essential part of protecting their investment in the ships and their cargoes, the League trained pilots and erected lighthouses. Most foreign cities confined the Hanseatic traders to certain trading areas and to their own trading posts. They seldom interacted with the local inhabitants, except when doing business. Many locals, merchant and noble alike, envied the power of the league and tried to diminish it. For example, in London, the local merchants exerted continuing pressure for the revocation of privileges. The refusal of the Hansa to offer reciprocal arrangements to their English counterparts exacerbated the tension. King Edward IV of England reconfirmed the league 's privileges in the Treaty of Utrecht (1474) despite the latent hostility, in part thanks to the significant financial contribution the league made to the Yorkist side during the Wars of the Roses. In 1597, Queen Elizabeth I of England expelled the league from London, and the Steelyard closed the following year. Ivan III of Russia closed the Hanseatic Kontor at Novgorod in 1494. The very existence of the league and its privileges and monopolies created economic and social tensions that often crept over into rivalries between league members. The economic crises of the late 15th century did not spare the Hansa. Nevertheless, its eventual rivals emerged in the form of the territorial states, whether new or revived, and not just in the west: Poland triumphed over the Teutonic Knights in 1466; Ivan III, Grand Prince of Moscow, ended the entrepreneurial independence of Hansa 's Novgorod Kontor in 1478 -- it closed completely and finally in 1494. New vehicles of credit were imported from Italy, where double - entry bookkeeping was invented in 1492, and outpaced the Hansa economy, in which silver coins changed hands rather than bills of exchange. In the 15th century, tensions between the Prussian region and the "Wendish '' cities (Lübeck and its eastern neighbours) increased. Lübeck was dependent on its role as centre of the Hansa, being on the shore of the sea without a major river. It was at the entrance of the land route to Hamburg, but this land route could be bypassed by sea travel around Denmark and through the Kattegat. Prussia 's main interest, on the other hand, was the export of bulk products like grain and timber, which were very important for England, the Low Countries, and, later on, also for Spain and Italy. In 1454, the year of the marriage of Elisabeth of Austria to the Jagiellonian king, the towns of the Prussian Confederation rose up against the dominance of the Teutonic Order and asked Casimir IV, King of Poland, for help. Gdańsk (Danzig), Thorn and Elbing became part of the Kingdom of Poland (from 1466 to 1569 referred to as Royal Prussia, a region of Poland) by the Second Peace of Thorn (1466). Poland in turn was heavily supported by the Holy Roman Empire through family connections and by military assistance under the Habsburgs. Kraków, then the capital of Poland, had a loose association with the Hansa. The lack of customs borders on the River Vistula after 1466 helped to gradually increase Polish grain exports, transported to the sea down the Vistula, from 10,000 short tons (9,100 t) per year, in the late 15th century, to over 200,000 short tons (180,000 t) in the 17th century.
The Hansa - dominated maritime grain trade made Poland one of the main areas of its activity, helping Danzig to become the Hansa 's largest city. The member cities took responsibility for their own protection. In 1567, a Hanseatic League agreement reconfirmed previous obligations and rights of league members, such as common protection and defense against enemies. The Prussian Quartier cities of Thorn, Elbing, Königsberg and Riga and Dorpat also signed. When pressed by the King of Poland -- Lithuania, Danzig remained neutral and would not allow ships running for Poland into its territory. They had to anchor somewhere else, such as at Pautzke (Puck). A major economic advantage for the Hansa was its control of the shipbuilding market, mainly in Lübeck and in Danzig. The Hansa sold ships everywhere in Europe, including Italy. They drove out the Dutch, because Holland wanted to favour Bruges as a huge staple market at the end of a trade route. When the Dutch started to become competitors of the Hansa in shipbuilding, the Hansa tried to stop the flow of shipbuilding technology from Hanseatic towns to Holland. Danzig, a trading partner of Amsterdam, attempted to forestall the decision. Dutch ships sailed to Danzig to take grain from the city directly, to the dismay of Lübeck. Hollanders also circumvented the Hanseatic towns by trading directly with north German princes in non-Hanseatic towns. Dutch freight costs were much lower than those of the Hansa, and the Hansa were excluded as middlemen. When Bruges, Antwerp and Holland all became part of the Duchy of Burgundy they actively tried to take over the monopoly of trade from the Hansa, and the staples market from Bruges was transferred to Amsterdam. The Dutch merchants aggressively challenged the Hansa and met with much success. Hanseatic cities in Prussia, Livonia, supported the Dutch against the core cities of the Hansa in northern Germany. After several naval wars between Burgundy and the Hanseatic fleets, Amsterdam gained the position of leading port for Polish and Baltic grain from the late 15th century onwards. The Dutch regarded Amsterdam 's grain trade as the mother of all trades (Moedernegotie). Nuremberg in Franconia developed an overland route to sell formerly Hansa - monopolised products from Frankfurt via Nuremberg and Leipzig to Poland and Russia, trading Flemish cloth and French wine in exchange for grain and furs from the east. The Hansa profited from the Nuremberg trade by allowing Nurembergers to settle in Hanseatic towns, which the Franconians exploited by taking over trade with Sweden as well. The Nuremberger merchant Albrecht Moldenhauer was influential in developing the trade with Sweden and Norway, and his sons Wolf Moldenhauer and Burghard Moldenhauer established themselves in Bergen and Stockholm, becoming leaders of the local Hanseatic activities. At the start of the 16th century, the league found itself in a weaker position than it had known for many years. The rising Swedish Empire had taken control of much of the Baltic Sea. Denmark had regained control over its own trade, the Kontor in Novgorod had closed, and the Kontor in Bruges had become effectively moribund. The individual cities making up the league had also started to put self - interest before their common Hanseatic interests. Finally, the political authority of the German princes had started to grow, constraining the independence of the merchants and Hanseatic towns. 
The league attempted to deal with some of these issues: it created the post of Syndic in 1556 and elected Heinrich Sudermann as a permanent official with legal training, who worked to protect and extend the diplomatic agreements of the member towns. In 1557 and 1579 revised agreements spelled out the duties of towns and some progress was made. The Bruges Kontor moved to Antwerp and the Hansa attempted to pioneer new routes. However the league proved unable to prevent the growing mercantile competition, and so a long decline commenced. The Antwerp Kontor closed in 1593, followed by the London Kontor in 1598. The Bergen Kontor continued until 1754; of all the Kontore, only its buildings, the Bryggen, survive. The gigantic Adler von Lübeck warship was constructed for military use against Sweden during the Northern Seven Years ' War (1563 -- 70) but was never put to military use, epitomizing the vain attempts of Lübeck to uphold its long - privileged commercial position in a changing economic and political climate. By the late 16th century, the league had imploded and could no longer deal with its own internal struggles. The social and political changes that accompanied the Protestant Reformation included the rise of Dutch and English merchants and the incursion of the Ottoman Empire upon the Holy Roman Empire and its trade routes. Only nine members attended the last formal meeting in 1669 and only three (Lübeck, Hamburg and Bremen) remained as members until its demise in 1862, with the creation of the German Empire under Kaiser Wilhelm I. Hence, only Lübeck, Hamburg, and Bremen retain the words "Hanseatic City '' in their official German titles. Despite its collapse, several cities still maintained the link to the Hanseatic League. Dutch cities including Groningen, Deventer, Kampen, Zutphen and Zwolle, and a number of German cities including Bremen, Demmin, Greifswald, Hamburg, Lübeck, Lüneburg, Rostock, Stade, Stralsund and Wismar still call themselves Hanse cities (their car license plates are prefixed H, e.g. -- HB -- for "Hansestadt Bremen ''). Hamburg and Bremen continue to style themselves officially as "free Hanseatic cities '', with Lübeck named "Hanseatic City '' (Rostock 's football team is named F.C. Hansa Rostock in memory of the city 's trading past). For Lübeck in particular, this anachronistic tie to a glorious past remained especially important in the 20th century. In 1937, the Nazi Party removed this privilege through the Greater Hamburg Act possibly because the Senat of Lübeck did not permit Adolf Hitler to speak in Lübeck during his 1932 election campaign. He held the speech in Bad Schwartau, a small village on the outskirts of Lübeck. Subsequently, he referred to Lübeck as "the small city close to Bad Schwartau. '' After the EU enlargement to the East in May 2004 there were some experts who wrote about the resurrection of the Baltic Hansa. The legacy of the Hansa is remembered today in several names: the German airline Lufthansa (i.e., "Air Hansa ''); F.C. Hansa Rostock; the Hanze University of Applied Sciences, Groningen, in the Netherlands; the Hanze oil production platform (also in the Netherlands); the Hansa Brewery in Bergen; the Hansabank in the Baltic states (now known as Swedbank); and the Hanse Sail in Rostock. DDG Hansa was a major German shipping company from 1881 until its bankruptcy in 1980. 
There are two museums in Europe dedicated specifically to the history of the Hanseatic League: the European Hansemuseum in Lübeck and the Hanseatic Museum and Schøtstuene in Bergen. The members of the Hanseatic League were Low German merchants, whose towns were, with the exception of Dinant, the towns where these merchants held citizenship. Not all towns with Low German merchant communities were members of the league (e.g., Emden, Memel (today Klaipėda), Viborg (today Vyborg) and Narva never joined). However, Hanseatic merchants could also come from settlements without German town law -- the premise for league membership was birth to German parents, subjection to German law, and a commercial education. The league served to advance and defend the common interests of its heterogeneous members: commercial ambitions such as enhancement of trade, and political ambitions such as ensuring maximum independence from the noble territorial rulers. Decisions and actions of the Hanseatic League were the consequence of a consensus - based procedure. If an issue arose, the league 's members were invited to participate in a central meeting, the Tagfahrt ("meeting ride '', sometimes also referred to as Hansetag, since 1358). The member communities then chose envoys (Ratssendeboten) to represent their local consensus on the issue at the Tagfahrt. Not every community sent an envoy; delegates were often entitled to represent a set of communities. Consensus - building on local and Tagfahrt levels followed the Low Saxon tradition of Einung, where consensus was defined as absence of protest: after a discussion, the proposals which gained sufficient support were dictated aloud to the scribe and passed as binding Rezess if the attendees did not object; those favouring alternative proposals unlikely to get sufficient support were obliged to remain silent during this procedure. If consensus could not be established on a certain issue, it was found instead in the appointment of a number of league members who were then empowered to work out a compromise. The Hanseatic Kontore each had their own treasury, court and seal. Like the guilds, the Kontore were led by Ältermänner ("eldermen '', or English aldermen). The Stahlhof Kontor, as a special case, had a Hanseatic and an English Ältermann. In 1347, the Kontor of Bruges modified its statute to ensure an equal representation of the league 's members. To that end, member communities from different regions were pooled into three circles (Drittel, "third (part) ''): the Wendish and Saxon Drittel, the Westphalian and Prussian Drittel as well as the Gothlandian, Livonian and Swedish Drittel. The merchants from their respective Drittel would then each choose two Ältermänner and six members of the Eighteen Men 's Council (Achtzehnmännerrat) to administer the Kontor for a set period of time. In 1356, during a Hanseatic meeting in preparation of the first Tagfahrt, the league confirmed this statute. The division into Drittel was gradually adopted and institutionalized by the league in general. The Tagfahrt or Hansetag were the only central institutions of the Hanseatic League. However, with the division in Drittel, the members of the respective subdivisions frequently held a Dritteltage ("Drittel meeting '') to work out common positions which could then be presented at a Tagfahrt.
On a more local level, league members also met, and while such regional meetings were never formalized into a Hanseatic institution, they gradually gained importance in the process of preparing and implementing Tagfahrt decisions. From 1554, the division into Drittel was modified to reduce the circles ' heterogeneity, enhance the collaboration of the members on a local level and thus make the league 's decision - making process more efficient. The number of circles rose to four, so they were called Quartiere (quarters). This division was, however, not adopted by the Kontore, who, for their purposes (like Ältermänner elections), grouped the league members in different ways (e.g., the division adopted by the Stahlhof in London in 1554 grouped the league members into Dritteln, whereby Lübeck merchants represented the Wendish, Pomeranian Saxon and several Westphalian towns, Cologne merchants represented the Cleves, Mark, Berg and Dutch towns, while Danzig merchants represented the Prussian and Livonian towns). The Kontore were foreign trading posts of the league, not cities that were Hanseatic members. In 1980, former Hanseatic League members established a "new Hanse '' in Zwolle, the "City League The Hanse ''. This league is open to all former Hanseatic League members and cities that once hosted a Hanseatic kontor. The latter include twelve Russian cities, most notably Novgorod, which was a major Russian trade partner of the Hansa in the Middle Ages. The "new Hanse '' fosters and develops business links, tourism and cultural exchange. The headquarters of the New Hansa is in Lübeck, Germany. The current President of the Hanseatic League of New Time is Bernd Saxe, Mayor of Lübeck. Each year one of the member cities of the New Hansa hosts the Hanseatic Days of New Time international festival. In 2006 King 's Lynn became the first English member of the newly formed modern Hanseatic League. Hull also joined and Boston, Lincolnshire was considering an application in early 2013.